Hadoop reduce task gets hung

I set up a Hadoop cluster with 4 nodes. When running a map-reduce job, the map tasks finish quickly, while the reduce task hangs at 27%. I checked the log: the reduce task fails to fetch map output from the map nodes.

The job tracker log on the master shows messages like this:

---------------------------------
2011-06-27 19:55:14,748 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE)
'attempt_201106271953_0001_r_000000_0' to tip task_201106271953_0001_r_000000, for 
tracker 'tracker_web30.bbn.com.cn:localhost/127.0.0.1:56476'

And the name node log on the master shows messages like this:

---------------------------------
2011-06-27 14:00:52,898 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 
54310, call register(DatanodeRegistration(202.106.199.39:50010, storageID=DS-1989397900-
202.106.199.39-50010-1308723051262, infoPort=50075, ipcPort=50020)) from 
192.168.225.19:16129: error: java.io.IOException: verifyNodeRegistration: unknown 
datanode 202.106.199.39:50010

However, neither "web30.bbn.com.cn" nor 202.106.199.39 is a slave node. I think such IPs/domains show up because Hadoop fails to resolve a node name (first against the intranet DNS server), then escalates to a higher-level DNS server, and so on up to the top level; when every lookup fails, these "junk" IPs/domains come back.
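One way to test this theory is to run forward and reverse lookups for every node from the master itself. Below is a minimal Java sketch (plain java.net.InetAddress; the node names are the ones from my /etc/hosts, so adjust as needed — this is just a rough check, not part of the job):

---------------------------------
import java.net.InetAddress;

// Rough check: forward-resolve each node name, then reverse-resolve the
// address we got back. A mismatch, or an outside domain coming back
// (like web30.bbn.com.cn), means the resolver is escaping to an
// external DNS server instead of using /etc/hosts.
public class DnsCheck {
    public static void main(String[] args) throws Exception {
        String[] nodes = {"master", "slave1", "slave5", "slave17"};
        for (String node : nodes) {
            InetAddress addr = InetAddress.getByName(node);   // forward lookup
            String reverse = addr.getCanonicalHostName();     // reverse lookup
            System.out.println(node + " -> " + addr.getHostAddress()
                    + " -> " + reverse);
        }
    }
}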

But I checked my config, and it looks like this:

---------------------------------
/etc/hosts:
127.0.0.1       localhost.localdomain localhost
::1     localhost6.localdomain6 localhost6
192.168.225.16 master
192.168.225.66 slave1
192.168.225.20 slave5
192.168.225.17 slave17

conf/core-site.xml:

---------------------------------
<?xml version="2.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/hadoop_tmp/hadoop_${user.name}</value>
    </property> 
    <property>
            <name>fs.default.name</name>
            <value>hdfs://master:54310</value>
     </property> 
    <property>
            <name>io.sort.mb</name>
            <value>1024</value>
        </property>
</configuration>

hdfs-site.xml:

---------------------------------
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>

masters:

---------------------------------
master

slaves:

---------------------------------
master
slave1
slave5
slave17

Also, all firewalls (iptables) are turned off, and SSH between every pair of nodes works, so I don't know exactly where the error comes from. Please help. Thanks a lot.


Well, I finally found the problem. I had made a test earlier to add a new node to the cluster and later removed that node. However, I forgot to kill the task tracker on the new node, so it kept sending heartbeats the whole time. Meanwhile, the hosts file had been modified and the new node's entry commented out. So the master got confused: it could not figure out which node the heartbeats came from, and fell back to asking the DNS server... After killing the task tracker on the new node, everything works fine.
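For anyone hitting the same thing: you can spot a stray tracker like this without digging through logs by asking the JobTracker which task trackers it currently sees and comparing that list against your slaves file. A rough sketch against the old mapred API (JobClient/ClusterStatus, Hadoop 1.x era; the JobTracker address "master:54311" here is an assumption — use whatever mapred.job.tracker points to in your setup):

---------------------------------
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

// Rough sketch: list the task trackers the JobTracker currently knows
// about and flag any whose host is not in the expected slaves list.
// "master:54311" is an assumed JobTracker address.
public class TrackerCheck {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        conf.set("mapred.job.tracker", "master:54311");

        JobClient client = new JobClient(conf);
        ClusterStatus status = client.getClusterStatus(true); // detailed status

        Set<String> expected = new HashSet<String>(
                Arrays.asList("master", "slave1", "slave5", "slave17"));

        for (String tracker : status.getActiveTrackerNames()) {
            // Tracker names follow the "tracker_<host>:..." pattern
            // visible in the job tracker log above.
            boolean known = false;
            for (String host : expected) {
                if (tracker.contains(host)) {
                    known = true;
                    break;
                }
            }
            System.out.println((known ? "  ok:  " : "STRAY: ") + tracker);
        }
    }
}

Any tracker printed as STRAY is a leftover daemon still heartbeating from a host that is no longer in the cluster, and should be stopped on that host.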
