Get full site clone

Is there a way to crawl a website and have it download every single file and make it relative? I have a site that has a lot of absolute URLs, and when I use wget it only downloads the index.html page; it doesn't fetch the files behind the absolute URLs, nor does it turn them into relative links. Is this possible?

Thanks.
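
What's described here can usually be done with wget itself once the right flags are passed. A minimal sketch, assuming GNU Wget and using example.com as a stand-in for the real site:

    # Recurse the whole site, grab page requisites (CSS, images, JS),
    # rewrite links in the saved pages so they work locally, and give
    # saved files sensible extensions.
    wget --mirror --convert-links --page-requisites --adjust-extension \
         --no-parent --wait=1 --random-wait \
         https://www.example.com/

    # If the absolute URLs point at other hosts (e.g. a static/CDN domain),
    # allow host spanning but restrict it to the domains you actually want.
    wget --mirror --convert-links --page-requisites --adjust-extension \
         --span-hosts --domains=example.com,static.example.com \
         https://www.example.com/

--convert-links is the part that rewrites absolute URLs into relative links after the download finishes, and --span-hosts together with --domains is what lets wget follow absolute URLs that lead off the starting host.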


This isn't a programming question ... but you could try HTTrack

It's free and open source, and available on both Windows and Linux.
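
HTTrack also ships a command-line client if you'd rather script it than use the GUI. A minimal sketch, assuming the httrack package is installed and example.com stands in for the real site:

    # Mirror the site into ./example-mirror, staying on the example.com domain.
    httrack "https://www.example.com/" -O "./example-mirror" "+*.example.com/*" -v

The "+*.example.com/*" filter keeps the crawl on the target domain, and HTTrack rewrites the saved pages so the links work when browsed locally.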


There's a Firefox extension called DownThemAll! that can certainly crawl a single page. I'm not sure if you can force it to crawl an entire website.


HTTrack is a decent free cloning app, whether the site is a .com or a .onion. For .onion sites you have to change the proxy settings and make a few other tweaks (I can't fully remember the details now; the relevant info is on YouTube). For a regular, non-dark-web site, HTTrack can download all of the files as-is.
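
For the .onion case, a sketch of what the proxy change might look like, assuming Tor is reachable through an HTTP front-end such as Privoxy on its default 127.0.0.1:8118 (HTTrack's -P option expects an HTTP proxy, not a SOCKS one, and the .onion address below is just a placeholder):

    # Route the crawl through the local HTTP proxy that forwards into Tor.
    httrack "http://exampleaddress.onion/" -O "./onion-mirror" -P 127.0.0.1:8118 -v

The exact proxy address and port depend on your own Tor setup.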
