We have a requirement to cache web pages as accurately as possible, so that we can go back and view a version of a page at any previous point in time. We'd like to be able to view the page as it really was - with the right css, javascript, images etc.
Are there any open-source libraries (any language) that will fetch a page, download all externally linked assets, and rewrite the links so that they point to the locally cached assets?
Or is this a case of rolling our own?
Thanks
Edit: I realise that without DOM rendering this is not going to be 100% accurate for dynamically generated links etc. However, for the time being we can probably live without that.
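To make the requirement concrete, here is a minimal sketch of what "rolling our own" might look like, assuming Python with the third-party requests and BeautifulSoup packages; the function name, directory layout, and hashing scheme are illustrative only, not a finished archiver.

    # Hypothetical sketch: fetch a page, download its assets, rewrite links to local copies.
    import os
    import hashlib
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    # Tag/attribute pairs that commonly reference external assets.
    ASSET_ATTRS = {"img": "src", "script": "src", "link": "href"}

    def snapshot(url, out_dir="snapshot"):
        os.makedirs(os.path.join(out_dir, "assets"), exist_ok=True)
        page = requests.get(url, timeout=30)
        soup = BeautifulSoup(page.text, "html.parser")

        for tag_name, attr in ASSET_ATTRS.items():
            for tag in soup.find_all(tag_name):
                link = tag.get(attr)
                if not link:
                    continue
                asset_url = urljoin(url, link)
                try:
                    asset = requests.get(asset_url, timeout=30)
                except requests.RequestException:
                    continue  # keep the original link if the asset is unreachable
                # Name the local copy after a hash of its URL to avoid collisions.
                name = hashlib.sha1(asset_url.encode()).hexdigest()
                local_path = os.path.join("assets", name)
                with open(os.path.join(out_dir, local_path), "wb") as f:
                    f.write(asset.content)
                tag[attr] = local_path  # rewrite the link to the cached copy

        with open(os.path.join(out_dir, "index.html"), "w", encoding="utf-8") as f:
            f.write(str(soup))

This only handles assets referenced directly in the HTML; CSS-referenced images, fonts, and anything injected by JavaScript would need extra passes, which is part of why a dedicated tool is attractive.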
I suggest HTTrack: http://www.httrack.com/
Because the software is free, open source, and supports both a visual interface and a command line, I believe you can integrate it or customise it to your needs smoothly (see the sketch after the platform list below).
See the description:
"HTTrack allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer.
It arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online.
It can also update an existing mirrored site, and resume interrupted downloads."
Operating systems it runs on:
WebHTTrack for Linux/Unix/BSD: Debian, Ubuntu, Gentoo, RPM package (Mandriva & RedHat), OSX (MacPorts), Fedora and FreeBSD i386 packages.
WinHTTrack for Windows 2000/XP/Vista/Seven
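For instance, here is a minimal sketch of driving the httrack command-line tool from a Python script. The URL, output path, and scan filter are placeholders, and the exact options may vary by version, so check httrack --help before relying on them.

    # Illustrative only: mirror a site with HTTrack via subprocess.
    import subprocess

    def mirror(url, out_dir):
        subprocess.run(
            [
                "httrack", url,
                "-O", out_dir,              # output directory for the mirror
                "+*.example.com/*",         # scan filter: stay within this domain
                "-v",                       # verbose progress on stdout
            ],
            check=True,                     # raise if httrack exits with an error
        )

    mirror("https://www.example.com/", "/data/archive/example.com")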
--
Update: the project is still active; the latest version was released on 04/01/2017.
Why not apply a base href to the pages, replace internal absolute links with relative ones, and keep the structure? For example:
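A small sketch of injecting a base element into a cached page, using Python and BeautifulSoup; the archive URL is made up for illustration.

    # Illustrative only: add a <base> element so relative links in the cached
    # copy resolve against a location of your choosing.
    from bs4 import BeautifulSoup

    html = "<html><head><title>t</title></head><body><img src='img/a.png'></body></html>"
    soup = BeautifulSoup(html, "html.parser")

    base = soup.new_tag("base", href="http://archive.example.com/snapshots/2011-06-01/")
    soup.head.insert(0, base)  # place it at the top of <head> so it applies to everything after it

    print(soup)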
You could use the mht/mhtml format to save the page as a single unified document.
Wiki description: http://en.wikipedia.org/wiki/MHTML
A quick search will reveal some sources of code to do this.
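As a rough sketch of what that code looks like, an MHTML file is just a multipart/related MIME message, so you can assemble one with Python's standard-library email package; the helper name and the asset-list format here are assumptions, and asset fetching is omitted.

    # Rough sketch: write an MHTML document from already-downloaded content.
    from email import encoders
    from email.mime.base import MIMEBase
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    def write_mhtml(page_url, html, assets, path):
        """assets: iterable of (url, mime_type, raw_bytes) already downloaded."""
        msg = MIMEMultipart("related")
        msg["Subject"] = "Archived copy of " + page_url

        # The root part is the page itself; Content-Location records its original URL.
        root = MIMEText(html, "html", "utf-8")
        root.add_header("Content-Location", page_url)
        msg.attach(root)

        for url, mime_type, data in assets:
            maintype, subtype = mime_type.split("/", 1)
            part = MIMEBase(maintype, subtype)
            part.set_payload(data)
            encoders.encode_base64(part)
            part.add_header("Content-Location", url)  # lets the browser map the part back to its URL
            msg.attach(part)

        with open(path, "w", encoding="utf-8") as f:
            f.write(msg.as_string())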