"Child Error" in Executing stream Job on multi node Hadoop cluster (cloudera distribution CDH3u0 Hadoop 0.20.2)

I am working on an 8-node Hadoop cluster, and I am trying to execute a simple streaming job with the configuration shown below.

hadoop jar /usr/lib/hadoop-0.20/contrib/streaming/hadoop-streaming-0.20.2-cdh3u0.jar \
    -D mapred.max.tracker.failures=10 \
    -D mapred.map.max.attempts=8 \
    -D mapred.skip.attempts.to.start.skipping=8 \
    -D mapred.skip.map.max.skip.records=8 \
    -D mapred.skip.mode.enabled=true \
    -D mapred.max.map.failures.percent=5 \
    -input /user/hdfs/ABC/ \
    -output /user/hdfs/output1/ \
    -mapper "perl -e 'while (<>) { chomp; print; } exit;'" \
    -reducer "perl -e 'while (<>) { s/LR>/LR>\n/g; print; } exit;'"
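
To rule out the Perl one-liners themselves, one quick check (not from the original question) is to run them as a plain shell pipeline outside Hadoop. Here sample.txt is a hypothetical stand-in for a few lines of the real input, and sort stands in for the shuffle/sort phase between map and reduce.

    # Hypothetical local sanity check of the streaming mapper/reducer, outside Hadoop.
    # sample.txt is a stand-in for a small slice of the real input; 'sort' imitates the shuffle.
    cat sample.txt \
        | perl -e 'while (<>) { chomp; print; } exit;' \
        | sort \
        | perl -e 'while (<>) { s/LR>/LR>\n/g; print; } exit;'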

I am using Cloudera's distribution for Hadoop, CDH3u0, with Hadoop 0.20.2. The problem is that the job fails every time it is executed, giving this error:

 java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:242)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:229)

 STDERR on the datanodes: 
    Exception in thread "main" java.io.IOException: Exception reading file:/mnt/hdfs/06/local/taskTracker/hdfs/jobcache/job_201107141446_0001/jobToken
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:146)
    at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:159)
    at org.apache.hadoop.mapred.Child.main(Child.java:107)
Caused by: java.io.FileNotFoundException: File file:/mnt/hdfs/06/local/taskTracker/hdfs/jobcache/job_201107141446_0001/jobToken does not exist.
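
The STDERR points at a file under one of the TaskTracker's local directories, so a first check (not from the original post, just a hedged sketch) is whether that directory is healthy on the failing node. The path comes from the trace above; mapred is the usual TaskTracker account on a CDH3 install and may differ on yours.

    # On the node that produced the STDERR: is the local dir present, writable by
    # the TaskTracker user, and not out of disk space or inodes?
    # 'mapred' is the usual TaskTracker user on CDH3; adjust if your install differs.
    ls -ld /mnt/hdfs/06/local /mnt/hdfs/06/local/taskTracker
    df -h /mnt/hdfs/06/local
    df -i /mnt/hdfs/06/local
    sudo -u mapred touch /mnt/hdfs/06/local/.write_test && sudo -u mapred rm /mnt/hdfs/06/local/.write_test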

To find the cause of the error I have checked the following things, and the job still crashes for a reason I cannot understand:

1. All the temp directories are in place.
2. Memory is far more than this small job should need (a quick way to confirm this is sketched after the list).
3. Permissions have been verified.
4. Nothing fancy was done in the configuration, just the usual settings.
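
Not from the original post, but a minimal sketch of how point 2 could be confirmed on a TaskTracker node. It assumes CDH3's default config directory (/etc/hadoop/conf) and that the relevant properties are set in mapred-site.xml rather than left at their defaults; the property names are standard Hadoop 0.20 names.

    # Rough check on a TaskTracker node: how much heap each child JVM gets, how many
    # map/reduce slots run at once, and how much RAM is actually free.
    # Assumes the properties are set in /etc/hadoop/conf/mapred-site.xml (CDH3 default config dir).
    grep -A1 -E 'mapred.child.java.opts|tasks.maximum' /etc/hadoop/conf/mapred-site.xml
    free -m
    # Worst-case memory use is roughly (map slots + reduce slots) * child heap; if that
    # exceeds free RAM, child JVMs can fail to start with "Child Error".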

The weirdest thing is that the job sometimes runs successfully but fails most of the time. Any guidance or help regarding this issue would be really appreciated. I have been working on this error for the last 4 days and have not been able to figure anything out. Please help!

Thanks & Regards, Atul


I have faced the same problem. It happens when the TaskTracker is not able to allocate the specified memory to the child JVM for the task.

Try executing the same job again when the cluster is not busy running many other jobs alongside this one; it should go through. Alternatively, set speculative execution to true, in which case Hadoop will execute the same task on another TaskTracker.
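
Not part of the original answer, but a minimal sketch of how both suggestions could be applied when resubmitting the job from the question. mapred.child.java.opts, mapred.map.tasks.speculative.execution and mapred.reduce.tasks.speculative.execution are standard Hadoop 0.20 properties; the -Xmx256m value and the /user/hdfs/output2/ path are only example choices.

    # Resubmit the streaming job with a smaller child JVM heap and speculative execution
    # enabled. -Xmx256m and output2 are example values only; the output dir is new
    # because the job output directory must not already exist.
    hadoop jar /usr/lib/hadoop-0.20/contrib/streaming/hadoop-streaming-0.20.2-cdh3u0.jar \
        -D mapred.child.java.opts=-Xmx256m \
        -D mapred.map.tasks.speculative.execution=true \
        -D mapred.reduce.tasks.speculative.execution=true \
        -input /user/hdfs/ABC/ \
        -output /user/hdfs/output2/ \
        -mapper "perl -e 'while (<>) { chomp; print; } exit;'" \
        -reducer "perl -e 'while (<>) { s/LR>/LR>\n/g; print; } exit;'"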
