How can I troubleshoot this Hadoop filesystem installation error?

I'm trying to install Hadoop on a non-Cloudera Ubuntu test image. Everything seemed to be going well until I ran ./bin/start-all.sh: the NameNode never comes up, so I can't even run hadoop fs -ls to connect to the filesystem.
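
One quick way to confirm which daemons actually came up (assuming the JDK's jps tool is on the PATH):

# lists the running Hadoop daemon JVMs; a healthy single-node setup shows
# NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker
jps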

Here's the namenode log:

2011-03-24 11:38:00,256 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
2011-03-24 11:38:00,257 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:88)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:312)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:293)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:224)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:306)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1006)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1015)

2011-03-24 11:38:00,258 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Brash/192.168.1.5
************************************************************/

I've run chmod -R 755 on the root directory and even gone so far as to make sure the directory exists by creating it with mkdir -p.

hadoop@Brash:/usr/lib/hadoop$ ls -la /usr/local/hadoop-datastore/hadoop-hadoop/dfs/
total 16
drwxr-xr-x 4 hadoop hadoop 4096 2011-03-24 11:41 .
drwxr-xr-x 4 hadoop hadoop 4096 2011-03-24 11:31 ..
drwxr-xr-x 2 hadoop hadoop 4096 2011-03-24 11:31 data
drwxr-xr-x 2 hadoop hadoop 4096 2011-03-24 11:41 name

Here's my conf/hdfs-site.xml:

hadoop@Brash:/usr/lib/hadoop$ cat conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>
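
Note that this file doesn't set the dfs/name path at all: dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name, so a path like /usr/local/hadoop-datastore/hadoop-hadoop presumably comes from a hadoop.tmp.dir entry in conf/core-site.xml, along these lines (a sketch of the common single-node layout; the actual file may differ):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <!-- assumed value; ${user.name} expands to "hadoop" on this machine -->
  <value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
</property>
<property>
  <name>fs.default.name</name>
  <!-- matches the port 54310 seen in the log above -->
  <value>hdfs://localhost:54310</value>
</property>
</configuration>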


You should never have to create the directory yourself; Hadoop creates it on its own. Did you forget to format the namenode? Delete the existing directory, then reformat the namenode (bin/hadoop namenode -format) and try again.
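
A minimal sketch of that sequence, assuming the datastore path from the log above (formatting wipes any existing HDFS data, which should be fine on a fresh test image):

bin/stop-all.sh                                    # stop any half-started daemons first
rm -rf /usr/local/hadoop-datastore/hadoop-hadoop   # remove the inconsistent storage directory
bin/hadoop namenode -format                        # recreate dfs/name with a fresh fsimage
bin/start-all.sh                                   # bring the daemons back up
bin/hadoop fs -ls /                                # verify the NameNode now answers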
