
I can't get Hadoop to start using Amazon EC2/S3

I have created an AMI image and installed Hadoop from the Cloudera CDH2 build. I configured my core-site.xml as follows:

 <property>
    <name>fs.default.name</name>
    <value>s3://<BUCKET NAME>/</value>
 </property>
 <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value><ACCESS ID></value>
 </property>
 <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value><SECRET KEY></value>
 </property>
 <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop-0.20/cache/${user.name}</value>
 </property>

But when I start up the Hadoop daemons, I get the following error message in the namenode log:

2010-11-03 23:45:21,680 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.default.name): s3://<BUCKET NAME>/ is not of scheme 'hdfs'.
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:177)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:198)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:306)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1006)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1015)

2010-11-03 23:45:21,691 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

However, I am able to execute hadoop commands from the command line like so:

hadoop fs -put sun-javadb-common-10.5.3-0.2.i386.rpm s3://<BUCKET NAME>/

hadoop fs -ls s3://<BUCKET NAME>/
Found 3 items
drwxrwxrwx   -          0 1970-01-01 00:00 /
-rwxrwxrwx   1      16307 1970-01-01 00:00 /sun-javadb-common-10.5.3-0.2.i386.rpm
drwxrwxrwx   -          0 1970-01-01 00:00 /var

You will notice there are / and /var folders in the bucket. I ran hadoop namenode -format when I first saw this error, then restarted all services, but I still receive the weird Invalid URI for NameNode address (check fs.default.name): s3://<BUCKET NAME>/ is not of scheme 'hdfs' error.

I also notice that the file system created looks like this:

 hadoop fs -ls s3://<BUCKET NAME>/var/lib/hadoop-0.20/cache/hadoop/mapred/system
Found 1 items
-rwxrwxrwx   1          4 1970-01-01 00:00 /var/lib/hadoop-0.20/cache/hadoop/mapred/system/jobtracker.info

Any ideas of what's going on?


First, I suggest you just use Amazon Elastic MapReduce (EMR). There is zero configuration required on your end, and EMR also has a few internal optimizations and monitoring features that work to your benefit.

Second, do not use s3: as your default FS. For one thing, S3 is too slow to store intermediate data between jobs (a typical unit of work in Hadoop is a dozen to dozens of MR jobs). It also stores the data in a 'proprietary' format (blocks etc.), so external apps can't effectively touch the data in S3.

Note that s3: in EMR is not the same as s3: in the standard Hadoop distro. The Amazon guys actually alias s3: to s3n: (s3n: is just raw/native S3 access).
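
For example, a core-site.xml along these lines keeps HDFS as the default FS while still letting you address the bucket with s3n:// URIs. This is only a sketch assuming a CDH-style single-node setup; <NAMENODE HOST> and the other placeholders stand in for your own values:

 <!-- Keep HDFS as the default filesystem so the NameNode can start -->
 <property>
    <name>fs.default.name</name>
    <value>hdfs://<NAMENODE HOST>:8020</value>
 </property>
 <!-- Credentials for the native S3 filesystem (s3n://) -->
 <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value><ACCESS ID></value>
 </property>
 <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value><SECRET KEY></value>
 </property>

With that in place, job input and output can still point at s3n://<BUCKET NAME>/... paths while intermediate data stays on HDFS.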


You could also use Apache Whirr for this workflow:

  1. Start by downloading the latest release (0.7.0 at this time) from http://www.apache.org/dyn/closer.cgi/whirr/

  2. Extract the archive and try to run ./bin/whirr version. You need to have Java installed for this to work.

  3. Make your Amazon AWS credentials available as environment variables:

    export AWS_ACCESS_KEY_ID=... 
    export AWS_SECRET_ACCESS_KEY=...

  4. Update the Hadoop EC2 config to match your needs by editing recipes/hadoop-ec2.properties. Check the Configuration Guide for more info.

  5. Start a Hadoop cluster by running:

    ./bin/whirr launch-cluster --config recipes/hadoop-ec2.properties

  6. You can see verbose logging output by doing tail -f whirr.log

  7. Now you can log in to your cluster and do your work (a distcp sketch follows below).

    ./bin/whirr list-cluster --config recipes/hadoop-ec2.properties
    ssh namenode-ip
    start jobs as needed or copy data from / to S3 using distcp
    

For more explanation, you should read the Quick Start Guide and the 5 minutes guide.
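
For step 7, copying data between the cluster's HDFS and S3 with distcp might look roughly like this. This is only a sketch; the host, bucket, and credential placeholders are mine, and a secret key containing "/" must be URL-encoded first:

    # Copy job output from the cluster's HDFS into the S3 bucket
    hadoop distcp hdfs://<NAMENODE HOST>:8020/user/hadoop/output s3n://<ACCESS ID>:<SECRET KEY>@<BUCKET NAME>/output
    # ...or pull input data from S3 into HDFS
    hadoop distcp s3n://<ACCESS ID>:<SECRET KEY>@<BUCKET NAME>/input hdfs://<NAMENODE HOST>:8020/user/hadoop/input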

Disclaimer: I'm one of the committers.


I think you should not execute bin/hadoop namenode -format, because it is only used to format HDFS. In later versions, Hadoop has moved these functions into a separate script called bin/hdfs. Once you have set the configuration parameters in core-site.xml and the other configuration files, you can use S3 as the underlying file system directly.
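
As a rough sketch of that direct usage (assuming the fs.s3n.* credentials are already set in core-site.xml; the bucket and file names are placeholders):

    # No `hadoop namenode -format` needed -- that only initializes HDFS metadata
    hadoop fs -mkdir s3n://<BUCKET NAME>/input
    hadoop fs -put data.txt s3n://<BUCKET NAME>/input/
    hadoop fs -ls s3n://<BUCKET NAME>/input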


Use

fs.defaultFS = s3n://awsAccessKeyId:awsSecretAccessKey@BucketName in your /etc/hadoop/conf/core-site.xml

Then do not start your datanode or namenode. If you have services that need the datanode and namenode, this will not work.

I did this and can access my bucket using commands like sudo hdfs dfs -ls /

Note: if your awsSecretAccessKey contains a "/" character, you will have to URL-encode it (e.g. "/" becomes "%2F").
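
In core-site.xml, that property would look something like this (a sketch; the placeholders stand in for your own key, secret, and bucket):

 <property>
    <name>fs.defaultFS</name>
    <!-- URL-encode the secret key if it contains a '/' -->
    <value>s3n://<ACCESS ID>:<SECRET KEY>@<BUCKET NAME></value>
 </property>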


Use s3n instead of s3.

hadoop fs -ls s3n://<BUCKET NAME>/etc
