Is there a way to specify multiple input files for a Hadoop job? I've tried separating them with ',' but that didn't work. Any other suggestions?
I was able to do it by writing my own method that splits the input string on the chosen separator and then adds each resulting path to the job conf.
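For reference, Hadoop's `FileInputFormat` can also take multiple input paths directly, either by calling `addInputPath` once per path or by passing a comma-separated string to `setInputPaths`. A minimal sketch of a driver using the Hadoop 2.x `mapreduce` API; the paths and class name here are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MultiInputExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "multi-input example");
        job.setJarByClass(MultiInputExample.class);

        // Option 1: add each input path individually
        FileInputFormat.addInputPath(job, new Path("/data/input1"));
        FileInputFormat.addInputPath(job, new Path("/data/input2"));

        // Option 2: a single comma-separated string of paths
        // FileInputFormat.setInputPaths(job, "/data/input1,/data/input2");

        FileOutputFormat.setOutputPath(job, new Path("/data/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With the older `mapred` API the equivalent calls live on `FileInputFormat` in `org.apache.hadoop.mapred` and take a `JobConf` instead of a `Job`.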
You can specify a directory as the input path and it will process all files in that directory.
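If the inputs already sit under one directory, pointing `FileInputFormat` at that directory (or at a glob pattern) avoids listing the files one by one. A hedged fragment, assuming the same `Job` object and placeholder paths as in the sketch above:

```java
// All regular files directly under /data/logs become job input
FileInputFormat.addInputPath(job, new Path("/data/logs"));

// Glob patterns are also accepted, e.g. every dated subdirectory
// FileInputFormat.addInputPath(job, new Path("/data/logs/2011-*"));
```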