Pulling data from MySQL into Hadoop

I'm just getting started with learning Hadoop, and I'm wondering the following: suppose I have a bunch of large MySQL production tables that I want to analyze.

  1. It seems like I have to dump all the tables into text files, in order to bring them into the Hadoop filesystem -- is this correct, or is there some way that Hive or Pig or whatever can access the data from MySQL directly?
  2. If I'm dumping all the production tables into text files, do I need to worry about affecting production performance during the dump? (Does it depend on what storage engine the tables are using? What do I do if so?)
  3. Is it better to dump each table into a single file, or to split each table into 64 MB (or whatever my block size is) files?


Importing data from MySQL can be done very easily. I recommend using Cloudera's Hadoop distribution, which comes with a program called 'sqoop' that provides a very simple interface for importing data straight from MySQL (other databases are supported too). Sqoop can be used with mysqldump or with a normal MySQL query (select * ...). With this tool there's no need to manually partition tables into files, and for Hadoop it's much better to have one big file anyway.
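
For reference, a basic Sqoop import is a one-line command. A minimal sketch, assuming a hypothetical MySQL host dbhost, database prod, table orders and user analytics (change these to match your setup):

    # Pull one MySQL table into HDFS as text files.
    # Sqoop connects over JDBC, runs parallel SELECTs (one per mapper),
    # and writes the results under --target-dir.
    sqoop import \
      --connect jdbc:mysql://dbhost/prod \
      --username analytics \
      -P \
      --table orders \
      --target-dir /user/hadoop/orders \
      --num-mappers 4

The -P flag prompts for the password so it doesn't end up in your shell history.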

Useful links:
Sqoop User Guide


2)
Since I don't know your environment I will err on the safe side - YES, worry about affecting production performance.

Depending on the frequency and quantity of data being written, you may find that the dump completes in an acceptable amount of time, particularly if you are only exporting new/changed data [subject to the complexity of your queries].
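
If you only need new/changed rows, Sqoop has an incremental mode that can cut the load on production considerably. A rough sketch, assuming the (hypothetical) table has an auto-increment key column id:

    # Only rows with id greater than --last-value are imported and
    # appended to the existing files in --target-dir.
    sqoop import \
      --connect jdbc:mysql://dbhost/prod \
      --username analytics \
      -P \
      --table orders \
      --target-dir /user/hadoop/orders \
      --incremental append \
      --check-column id \
      --last-value 1000000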

If you don't require real time, or your servers typically have periods when they are under-utilized (overnight?), then you could create the files at that time.

Depending on how your environment is set up, you could replicate/log-ship to specific db server(s) whose sole job is to create your data file(s).
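
If you do go the plain dump-file route from a replica, something along these lines keeps the impact low; host, database and table names are placeholders:

    # Export one table as tab-separated text from a read replica.
    # --batch prints tab-separated rows and --skip-column-names drops the
    # header line, so Hive/Pig can read the file directly.
    mysql --host=replica-db --user=analytics -p \
      --batch --skip-column-names \
      -e "SELECT * FROM orders" prod > orders.tsv

    # Then push the file into HDFS:
    hadoop fs -put orders.tsv /user/hadoop/orders/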


3)
No need for you to split the file; HDFS will take care of partitioning the data file into blocks and replicating them over the cluster. By default it will automatically split it into 64 MB data blocks.
see - Apache - HDFS Architecture
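
If you want to see that for yourself after loading a file, fsck will list the blocks behind it (the path is a placeholder):

    # Show the blocks and replica locations HDFS created for the file.
    hadoop fsck /user/hadoop/orders/orders.tsv -files -blocks -locations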

re: Wojtek's answer - Sqoop link (doesn't work in comments)

If you have more questions or specific environment info, let us know. HTH, Ralph
