If I have, say, 20 HTML pages and I want to extract the shared/similar portions of the documents, what are some efficient ways to do that?
So, say for StackOverflow: comparing 10 pages, I'd find that the top bar and the main menu bar are the same across each page, so I could extract them out.
It seems like I'd need either a diff program or some complex regexps, but assume I have no knowledge of the page/text/HTML structure beforehand.
Is this possible?
You should consider a clone detector such as CloneDR. Good ones compare the structure of thousands of files at once, regardless of formatting, and will tell you which elements the files have in common and how those common elements vary.
CloneDR has been applied to many programming languages. Its foundation, the DMS Software Reengineering Toolkit, already handles (dirty) HTML, so it would be pretty easy to build an HTML CloneDR.
You don't need any complex regexps; a simple diff analyzer will do. Just do an (Enumerable) injection, keeping only the similar parts as your memo (see the sketch after the list below).
Here are some diff libraries in Ruby:
- ruby-diff -- A port of Perl's text-diff algorithm.
- diff-lcs -- Computes diffs via the longest common subsequence (LCS) algorithm.
- HTMLdiff -- Diffs two strings and renders the result with pretty HTML formatting (probably not exactly what you want, unless you can strip away all non-diff material from the output).
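For illustration, here's a minimal sketch of that injection built on diff-lcs. The `pages/*.html` glob is just an assumption about where the files live, and line-level LCS is only a rough proxy for structural similarity:

```ruby
require 'diff/lcs'

# Hypothetical input: each page's markup split into stripped lines.
pages = Dir.glob('pages/*.html').map { |f| File.readlines(f).map(&:strip) }

# Inject across the pages: the memo starts as the first page's lines,
# and each step shrinks it to the longest common subsequence with the
# next page, leaving only lines shared (in order) by every page.
shared = pages.inject { |memo, lines| Diff::LCS.LCS(memo, lines) }

puts shared
```

On the StackOverflow example, the lines that survive the injection would be the top-bar and menu-bar markup that appears verbatim on every page.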
Hope this helps!