Hadoop pseudo-distributed mode error

I have set up Hadoop on an OpenSUSE 11.2 VM using VirtualBox and made the prerequisite configs. I ran the example job successfully in standalone mode, but in pseudo-distributed mode I get the following error:


$./bin/hadoop fs -put conf input
10/04/13 15:56:25 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available                                                              
10/04/13 15:56:25 INFO hdfs.DFSClient: Abandoning block blk_-8490915989783733314_1003
10/04/13 15:56:31 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available                                                              
10/04/13 15:56:31 INFO hdfs.DFSClient: Abandoning block blk_-1740343312313498323_1003        
10/04/13 15:56:37 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available                                                              
10/04/13 15:56:37 INFO hdfs.DFSClient: Abandoning block blk_-3566235190507929459_1003        
10/04/13 15:56:43 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available                                                              
10/04/13 15:56:43 INFO hdfs.DFSClient: Abandoning block blk_-1746222418910980888_1003        
10/04/13 15:56:49 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.                                                                           
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)                                                                                    
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102) 
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)                                                                                         

10/04/13 15:56:49 WARN hdfs.DFSClient: Error Recovery for block blk_-1746222418910980888_1003 bad datanode[0] nodes == null                                                               
10/04/13 15:56:49 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/max/input/core-site.xml" - Aborting...                                                           
put: Protocol not available                                                                  
10/04/13 15:56:49 ERROR hdfs.DFSClient: Exception closing file /user/max/input/core-site.xml : java.net.SocketException: Protocol not available
java.net.SocketException: Protocol not available
        at sun.nio.ch.Net.getIntOption0(Native Method)
        at sun.nio.ch.Net.getIntOption(Net.java:178)
        at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
        at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
        at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
        at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
        at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
        at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)


It seems like there are no live datanodes in the cluster. Did you check whether the status page shows any live nodes? http://localhost:50070/
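
You can also check from the shell. A quick sketch, assuming the 0.20-era commands your stack trace points to (dfsadmin -report prints a cluster summary, including the number of live datanodes):

$ bin/hadoop dfsadmin -report
# A healthy pseudo-distributed setup should report one live datanode;
# zero would explain the "bad datanode[0] nodes == null" error above.

If the datanode is not running, its log under logs/ usually says why it failed to start.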

Start all the Hadoop daemons with the command $ bin/start-all.sh.


Did you start the Hadoop daemons? This needs to be done in pseudo-distributed mode, unlike standalone mode. You start them with something like:

$ bin/start-all.sh
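
After that, it is worth confirming that every daemon actually stayed up. A minimal check, assuming a standard JDK install where jps is on the PATH (jps lists running Java processes by class name):

$ jps
# A working pseudo-distributed setup should show all five daemons:
#   NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker
# If DataNode is missing, check its log, e.g. logs/hadoop-<user>-datanode-<host>.log

If a daemon dies right after start-up, its log file is the first place to look.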

Documentation for the steps required can be found here.

Did you follow all these steps? Can you browse the NameNode and JobTracker web interfaces?
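
For reference, the quickstart's pseudo-distributed configuration looks roughly like the sketch below. The property names are from the 0.20 branch that your stack trace suggests (newer releases renamed some of them, e.g. fs.default.name became fs.defaultFS), so adjust to your version:

conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>  <!-- single datanode, so keep one replica -->
  </property>
</configuration>

conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Then format HDFS once and start the daemons:

$ bin/hadoop namenode -format
$ bin/start-all.sh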


Maybe try using a preconfigured virtual machine? http://www.cloudera.com/developers/downloads/virtual-machine/ I think this is probably the best way to start learning Hadoop, and these problems should not happen there.
