Importing multi-level directories of logs in hadoop/pig

We store our logs in S3, and one of our (Pig) queries would grab three different log types. Each log type is in sets of subdirectories based upon type/date. For instance:

/logs/<type>/<year>/<month>/<day>/<hour>/lots_of_logs_for_this_hour_and_type.log*

My query would want to load all three types of logs for a given time. For instance:

type1 = load 's3://logs/type1/2011/03/08' as ...
type2 = load 's3://logs/type2/2011/03/08' as ...
type3 = load 's3://logs/type3/2011/03/08' as ...
result = join type1 ..., type2, etc...

My queries would then run against all of these logs.

What is the most efficient way to handle this?

  1. Do we need to use bash script expansion? Not sure if this works with multiple directories, and I doubt it would be efficient (or even possible) if there were 10k logs to load.
  2. Do we create a service to aggregate all of the logs and push them to HDFS directly?
  3. Custom Java/Python importers?
  4. Other thoughts?

If you could include some example code, where appropriate, that would be helpful.

Thanks


Globbing is supported by default with PigStorage, so you could just try:

type1 = load 's3://logs/type{1,2,3}/2011/03/08' as ..

or even

type1 = load 's3://logs/*/2011/03/08' as ..
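
As a fuller sketch of what a glob-based load could look like, assuming tab-delimited records and a made-up bucket name and schema (none of which come from the question):

-- one day of logs, all three types, every hour, in a single load
all_logs = LOAD 's3://mybucket/logs/type{1,2,3}/2011/03/08/*'
    USING PigStorage('\t')
    AS (ts:chararray, user_id:long, msg:chararray);
-- globs work at any level, e.g. a whole month of one type:
-- monthly = LOAD 's3://mybucket/logs/type1/2011/03/*/*' USING PigStorage('\t') AS (...);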


I had a log system similar to yours; the only difference is that I analyze the logs not by date but by type, so I would use:

type1 = load 's3://logs/type1/2011/03/' as ...

to analyze that month's logs for type1 without mixing it with type2. Since you are analyzing not by type but by date, I would recommend changing your structure to:

/logs/<year>/<month>/<day>/<hour>/<type>/lots_of_logs_for_this_hour_and_type.log*

so you can load the daily (or monthly) data and then filter it by type, which would be more convenient.
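
A rough sketch under that layout, assuming each record also carries its type as a field and using a made-up bucket name, delimiter and schema:

-- one day of logs, all hours and all types, under the suggested layout
daily = LOAD 's3://mybucket/logs/2011/03/08/*/*'
    USING PigStorage('\t')
    AS (type:chararray, ts:chararray, msg:chararray);
type1_only = FILTER daily BY type == 'type1';
-- or pick out a single type straight from the path instead of filtering:
-- type1 = LOAD 's3://mybucket/logs/2011/03/08/*/type1' USING PigStorage('\t') AS (...);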


If, like me, you are using Hive and your data is partitioned, you could use some of the loaders in PiggyBank (e.g. AllLoader) that support partitioning, as long as the directory structure you want to filter on looks like:

.../type=value1/...
.../type=value2/...
.../type=value3/...

You should then be able to LOAD the files and then FILTER BY type == 'value1'.

Example:

REGISTER piggybank.jar;
I = LOAD '/hive/warehouse/mytable' using AllLoader() AS ( a:int, b:int );
-- filter on the partition key; the values correspond to the type=value1 / type=value2 directories above
F = FILTER I BY type == 'value1' OR type == 'value2';
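
Mapped onto the question's logs, the same pattern would look roughly like the sketch below. It assumes the log directories were re-laid out with the type as a Hive-style key=value partition (a hypothetical layout), and the bucket name and schema are placeholders:

REGISTER piggybank.jar;
-- hypothetical partitioned layout: s3://mybucket/logs/type=type1/..., type=type2/..., type=type3/...
logs = LOAD 's3://mybucket/logs' using AllLoader() AS (ts:chararray, msg:chararray);
-- filtering on the partition key; per the answer above, AllLoader can use this to limit which directories are read
wanted = FILTER logs BY type == 'type1' OR type == 'type2';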
