How to get the HTML content from Nutch


Is there any way to get the HTML content of each webpage in Nutch while crawling?


Yes, you can actually export the content of the crawled segments. It is not straightforward, but it works well for me. First, create a Java project with the following code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.nutch.protocol.Content;
import org.apache.nutch.util.NutchConfiguration;

import java.io.File;
import java.io.FileOutputStream;

public class NutchSegmentOutputParser {

    public static void main(String[] args) {

        if (args.length != 2) {
            System.out.println("usage: NutchSegmentOutputParser segmentdir outputdir");
            return;
        }

        try {
            Configuration conf = NutchConfiguration.create();
            FileSystem fs = FileSystem.get(conf);

            String segment = args[0];

            File outDir = new File(args[1]);
            if (!outDir.exists() && outDir.mkdirs()) {
                System.out.println("Creating output dir " + outDir.getAbsolutePath());
            }

            // The raw fetched content lives in <segment>/content/part-00000/data
            // as a SequenceFile of <Text url, Content content> records.
            Path file = new Path(segment, Content.DIR_NAME + "/part-00000/data");
            SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);

            Text key = new Text();
            Content content = new Content();

            // Dump each record's raw bytes to a file named after its URL.
            while (reader.next(key, content)) {
                String filename = key.toString()
                        .replaceFirst("http://", "")
                        .replaceAll("/", "___")
                        .trim();

                File f = new File(outDir.getCanonicalPath() + "/" + filename);
                FileOutputStream fos = new FileOutputStream(f);
                fos.write(content.getContent());
                fos.close();
                System.out.println(f.getAbsolutePath());
            }
            reader.close();
            fs.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

I recommend using Maven; add the following dependencies:

    <dependency>
        <groupId>org.apache.nutch</groupId>
        <artifactId>nutch</artifactId>
        <version>1.5.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>0.23.1</version>
    </dependency>

and create a jar package (e.g. NutchSegmentOutputParser.jar).
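
If you let Maven build the jar, a maven-jar-plugin configuration along these lines sets the entry point (this snippet is a sketch; it assumes the class above lives in the default package):

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <!-- assumption: NutchSegmentOutputParser is in the default package -->
                            <mainClass>NutchSegmentOutputParser</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>

With that in place, mvn package produces the jar under target/.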

You need Hadoop to be installed on your machine. Then run the tool with the Nutch jar on the classpath, along these lines:

$ HADOOP_CLASSPATH=~/.m2/repository/org/apache/nutch/nutch/1.5.1/nutch-1.5.1.jar \
  /hadoop-dir/bin/hadoop jar NutchSegmentOutputParser.jar \
  NutchSegmentOutputParser nutch-crawled-dir/2012xxxxxxxxx/ outdir

where nutch-crawled-dir/2012xxxxxxxxx/ is the segment directory you want to extract content from (it contains the 'content' subdirectory) and outdir is the output directory. The output file names are derived from the URLs: the http:// prefix is stripped and every slash is replaced by "___", so http://example.com/a/page.html becomes example.com___a___page.html.

Hope it helps.


Try this in an HtmlParseFilter implementation:

public ParseResult filter(Content content, ParseResult parseResult,
                          HTMLMetaTags metaTags, DocumentFragment doc) {
    Parse parse = parseResult.get(content.getUrl());
    LOG.info("parse.getText: " + parse.getText());
    return parseResult;
}

Then check the content in hadoop.log.
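
For reference, a minimal self-contained filter along those lines could look like this (the class name is made up, and the plugin.xml registration that Nutch plugins need is omitted):

import org.apache.hadoop.conf.Configuration;
import org.apache.nutch.parse.HTMLMetaTags;
import org.apache.nutch.parse.HtmlParseFilter;
import org.apache.nutch.parse.ParseResult;
import org.apache.nutch.protocol.Content;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.w3c.dom.DocumentFragment;

public class HtmlContentLogger implements HtmlParseFilter {

    private static final Logger LOG = LoggerFactory.getLogger(HtmlContentLogger.class);

    private Configuration conf;

    @Override
    public ParseResult filter(Content content, ParseResult parseResult,
                              HTMLMetaTags metaTags, DocumentFragment doc) {
        // content.getContent() is the raw fetched bytes; the Parse objects in
        // parseResult would give you the extracted plain text instead.
        LOG.info("raw html for " + content.getUrl() + ": " + new String(content.getContent()));
        return parseResult;
    }

    @Override
    public void setConf(Configuration conf) {
        this.conf = conf;
    }

    @Override
    public Configuration getConf() {
        return conf;
    }
}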


It's super basic.

public ParseResult getParse(Content content) {
    LOG.info("getContent: " + new String(content.getContent()));
    // ... build and return the ParseResult as usual
}

The Content object has a getContent() method that returns a byte array. Just construct a new String() from those bytes, and you've got the raw HTML of whatever Nutch fetched.
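
One caveat: new String(byte[]) decodes with the platform default charset. A small sketch of a more careful decode, reading the charset from the Content-Type when one is present (the helper class here is hypothetical):

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

import org.apache.nutch.protocol.Content;

public final class ContentDecoder {

    // Hypothetical helper: decode the fetched bytes, falling back to UTF-8
    // when the Content-Type header has no usable charset parameter.
    public static String toHtml(Content content) {
        Charset charset = StandardCharsets.UTF_8;
        String contentType = content.getContentType(); // e.g. "text/html; charset=ISO-8859-1"
        if (contentType != null) {
            int idx = contentType.toLowerCase().indexOf("charset=");
            if (idx >= 0) {
                try {
                    charset = Charset.forName(contentType.substring(idx + 8).trim());
                } catch (Exception e) {
                    // unknown or malformed charset name: keep the UTF-8 fallback
                }
            }
        }
        return new String(content.getContent(), charset);
    }
}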

I'm using Nutch 1.9

Here's the JavaDoc on org.apache.nutch.protocol.Content https://nutch.apache.org/apidocs/apidocs-1.2/org/apache/nutch/protocol/Content.html#getContent()


Yes, there is a way. Have a look at cache.jsp to see how it displays the cached data.
