
How do I make an external reference table or database available to a Hadoop MapReduce job?


I am analyzing a large number of files in a Hadoop MapReduce job, with the input files in .txt format. Both my mapper and my reducer are written in Python.

However, my mapper module requires access to the contents of an external CSV file, which is basically just a large table used to look up reference values for a transformation the mapper performs.

Up until now, I just had the mapper load the file into memory from a local directory to make it available as a Python variable. Since the file is quite large (several thousand rows and columns), it takes a relatively long time to load (about 10 seconds, too long for my purposes). The problem is that Hadoop seems to re-execute the mapper script for every new input file (or for every split of a large input file), so my CSV file is unnecessarily loaded into memory again and again each time a new input file is processed.
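A rough sketch of the current setup, just to illustrate where the time goes (the local path, the "first column is the key" layout, and tab-separated input are placeholder assumptions): the table is parsed at module level, so the cost is paid once per mapper task, not once per job.

    #!/usr/bin/env python
    # Sketch of the current approach (hypothetical path and table layout):
    # the lookup table is parsed when the mapper script starts, i.e. once
    # per mapper task / input split, which is where the ~10 s go.
    import csv
    import sys
    import time

    start = time.time()
    reference = {}
    with open('/local/path/reference.csv') as f:    # assumed local path
        for row in csv.reader(f):
            reference[row[0]] = row[1:]             # assumed: first column is the key
    sys.stderr.write('reference table loaded in %.1f s\n' % (time.time() - start))

    for line in sys.stdin:
        key = line.strip().split('\t')[0]           # assumed tab-separated input
        if key in reference:
            print('%s\t%s' % (key, ','.join(reference[key])))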

Is there a way to have Hadoop load the file only once and somehow make it "globally" available? When googling, names like Hive, Pig, and SQLite kept popping up, but I never saw any examples to check whether they are actually useful for this purpose.

Basically, I would just need some kind of database or dictionary that can be accessed quickly while running my Hadoop job. The format of my reference table doesn't have to be CSV; I am pretty flexible about transforming that data into different formats.
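For example, I could imagine converting the CSV once into an SQLite file up front and having the mapper query single keys instead of loading the whole table; a minimal sketch of that idea (file names and the "first column is the key" layout are just placeholders):

    #!/usr/bin/env python
    # One-off conversion of the CSV lookup table into an SQLite file
    # (hypothetical file names; assumes the first column is the lookup key).
    import csv
    import sqlite3

    conn = sqlite3.connect('reference.db')
    conn.execute('CREATE TABLE IF NOT EXISTS ref (k TEXT PRIMARY KEY, v TEXT)')
    with open('reference.csv') as f:
        rows = ((row[0], ','.join(row[1:])) for row in csv.reader(f))
        conn.executemany('INSERT OR REPLACE INTO ref VALUES (?, ?)', rows)
    conn.commit()
    conn.close()

    # In the mapper, a lookup would then be a single indexed query instead
    # of a full load:
    #   conn = sqlite3.connect('reference.db')
    #   hit = conn.execute('SELECT v FROM ref WHERE k = ?', (key,)).fetchone()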


Yes, look into the -files option of the hadoop streaming command line. It takes a file you have loaded into HDFS, caches one copy of it locally on each tasktracker node, and creates a softlink to it in each mapper and reducer task's CWD.
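A sketch of what that might look like with a Python mapper (the jar location, HDFS paths, and file names are placeholders, not a verified command line):

    #!/usr/bin/env python
    # Assumed streaming invocation (paths and jar location are placeholders):
    #
    #   hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    #       -files hdfs:///user/me/reference.csv \
    #       -input  /user/me/input \
    #       -output /user/me/output \
    #       -mapper mapper.py \
    #       -reducer reducer.py
    #
    # Hadoop ships reference.csv to each node once per job and symlinks it
    # into every task's working directory, so the mapper opens it by its
    # bare name instead of an absolute local path:
    import csv
    import sys

    reference = {}
    with open('reference.csv') as f:        # symlink created by -files
        for row in csv.reader(f):
            reference[row[0]] = row[1:]     # assumed: first column is the key

    for line in sys.stdin:
        key = line.strip().split('\t')[0]   # assumed tab-separated input
        if key in reference:
            print('%s\t%s' % (key, ','.join(reference[key])))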

There is also the -archives option if you have jars that you want to bundle with your job.


You should probably take a look at Sqoop. It imports your data from a database into HDFS so that you can process it using MapReduce.

