Using s3 as fs.default.name or HDFS?

I'm setting up a Hadoop cluster on EC2 and I'm wondering how to do the DFS. All my data is currently in S3 and all map/reduce applications use S3 file paths to access the data. Now I've been looking at how Amazon's EMR is set up and it appears that for each jobflow, a namenode and datanodes are set up. Now I'm wondering if I really need to do it that way or if I could just use s3(n) as the DFS? If doing so, are there any drawbacks?

Thanks!


In order to use S3 instead of HDFS, fs.default.name in core-site.xml needs to point to your bucket:

<property>
        <name>fs.default.name</name>
        <value>s3n://your-bucket-name</value>
</property>

It's recommended that you use S3N and NOT the simple S3 implementation, because files written through S3N are readable by any other application and by yourself :)

Also, in the same core-site.xml file you need to specify the following properties (see the snippet below the list):

  • fs.s3n.awsAccessKeyId
  • fs.s3n.awsSecretAccessKey

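A sketch of how those two credential properties might look in core-site.xml (the key values here are placeholders, not real credentials):

<property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>YOUR_SECRET_ACCESS_KEY</value>
</property>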


Any intermediate data of your job goes to HDFS, so yes, you still need a namenode and datanodes.
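
For illustration, a minimal hdfs-site.xml sketch for that side-by-side HDFS on Hadoop 2.x (the local paths are hypothetical placeholders):

<property>
        <name>dfs.namenode.name.dir</name>
        <value>/mnt/hdfs/namenode</value>
</property>
<property>
        <name>dfs.datanode.data.dir</name>
        <value>/mnt/hdfs/datanode</value>
</property>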


https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/core-default.xml

fs.default.name is deprecated; fs.defaultFS should be used instead.
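
For newer Hadoop versions, a sketch of the same setting with the non-deprecated property name (the bucket name is a placeholder):

<property>
        <name>fs.defaultFS</name>
        <value>s3n://your-bucket-name</value>
</property>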


I was able to get the S3 integration working using

<property>
        <name>fs.default.name</name>
        <value>s3n://your-bucket-name</value>
</property> 

in the core-site.xml and could list the files using the hdfs ls command. But you should also have namenode and separate datanode configurations, because I was still not sure how the data gets partitioned across the datanodes.

Should we have local storage for the namenode and datanodes?
