
I asked this question a few days ago, but at the time I didn't know the location of the log files.

I have the following configuration settings in core-site.xml:

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>blah blah....</description>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>

mapred-site.xml

<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

</configuration>

The Namenode log file is as follows:

2013-01-29 02:12:30,078 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = dheerajvc-ThinkPad-T420/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.1.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
2013-01-29 02:12:30,184 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-01-29 02:12:30,192 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-01-29 02:12:30,193 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-01-29 02:12:30,193 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-01-29 02:12:30,326 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-01-29 02:12:30,329 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-01-29 02:12:30,333 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-01-29 02:12:30,333 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-01-29 02:12:30,377 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-01-29 02:12:30,377 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-01-29 02:12:30,393 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-01-29 02:12:30,408 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2013-01-29 02:12:30,416 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-01-29 02:12:30,422 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-01-29 02:12:30,525 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop/dfs/name/current/edits
2013-01-29 02:12:30,526 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop/dfs/name/current/edits
2013-01-29 02:12:30,878 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-01-29 02:12:30,879 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 513 msecs
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct          = 0.949999988079071
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension              = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 7 msec
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2013-01-29 02:12:30,888 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-01-29 02:12:30,888 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-01-29 02:12:30,892 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-01-29 02:12:30,896 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-01-29 02:12:30,908 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-01-29 02:12:30,909 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50070 registered.
2013-01-29 02:12:30,910 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50070 registered.
2013-01-29 02:12:30,912 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:50070
2013-01-29 02:12:30,913 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Socket address is null
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:142)
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:340)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:306)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:529)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1403)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1412)

2013-01-29 02:12:30,913 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dheerajvc-ThinkPad-T420/127.0.1.1
************************************************************/

netstat --numeric-ports | grep "5431" produces no output, so I assume ports 54310 and 54311 are unused. Which socket address is the namenode expecting?
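One way to double-check is to test each port individually instead of grepping for a substring: grep "5431" covers 54310 and 54311 but misses 50070, the HTTP port the stack trace above points at (startHttpServer). A minimal sketch, assuming the net-tools netstat is available:

```shell
# Report whether each port from the configs above is in use.
# 54310 = fs.default.name, 54311 = mapred.job.tracker, 50070 = namenode HTTP.
for port in 54310 54311 50070; do
  if netstat -tln 2>/dev/null | grep -q ":$port "; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done
```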

EDIT: only the jobtracker and tasktracker are starting. Does the namenode need to be started before the datanode and secondarynamenode?

I can format the namenode, but why does it use the /tmp directory? I thought it should use the location set in core-site.xml (hadoop.tmp.dir).

$ hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.

13/01/31 03:34:22 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = dheerajvc-ThinkPad-T420/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
13/01/31 03:34:22 INFO util.GSet: VM type       = 32-bit
13/01/31 03:34:22 INFO util.GSet: 2% max memory = 17.77875 MB
13/01/31 03:34:22 INFO util.GSet: capacity      = 2^22 = 4194304 entries
13/01/31 03:34:22 INFO util.GSet: recommended=4194304, actual=4194304
13/01/31 03:34:22 INFO namenode.FSNamesystem: fsOwner=dheerajvc
13/01/31 03:34:22 INFO namenode.FSNamesystem: supergroup=supergroup
13/01/31 03:34:22 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/01/31 03:34:22 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/01/31 03:34:22 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/01/31 03:34:22 INFO namenode.NameNode: Caching file names occuring more than 10 times 
13/01/31 03:34:22 INFO common.Storage: Image file of size 115 saved in 0 seconds.
13/01/31 03:34:22 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop/dfs/name/current/edits
13/01/31 03:34:22 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop/dfs/name/current/edits
13/01/31 03:34:23 INFO common.Storage: Storage directory /tmp/hadoop/dfs/name has been successfully formatted.
13/01/31 03:34:23 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dheerajvc-ThinkPad-T420/127.0.1.1
************************************************************/
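On the /tmp question: in Hadoop 1.x the name directory defaults to ${hadoop.tmp.dir}/dfs/name, so a format log ending up in /tmp/hadoop/dfs/name suggests the hadoop.tmp.dir value from core-site.xml is not being picked up at all (for example, HADOOP_CONF_DIR pointing at a different directory than the one that was edited). One way to take hadoop.tmp.dir out of the equation is to pin the name directory explicitly. A sketch for hdfs-site.xml, using the Hadoop 1.x property name dfs.name.dir (the path shown is just an example; use whatever location you intend):

```xml
<property>
  <name>dfs.name.dir</name>
  <value>/usr/local/hadoop/tmp/dfs/name</value>
</property>
```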

1 Answer


You should specify the directory locations in the hdfs-site.xml file, not in core-site.xml.

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/D:/analytics/hadoop/data/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/D:/analytics/hadoop/data/dfs/datanode</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>127.0.0.1:50070</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
        <description>to enable webhdfs</description>
        <final>true</final>
    </property>

</configuration>
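One pitfall worth flagging with new storage locations: the user running the daemons must be able to create and write those directories, or the NameNode/DataNode will die at startup just like the log above. A quick pre-flight sketch (the path here is purely illustrative; substitute the values from your own hdfs-site.xml, and note that after changing the name directory you must re-run `hadoop namenode -format` before starting HDFS):

```shell
# Verify a configured storage directory exists and is writable before starting HDFS.
NAME_DIR="/tmp/hadoop-example/dfs/namenode"   # hypothetical path for illustration
mkdir -p "$NAME_DIR"
if [ -d "$NAME_DIR" ] && [ -w "$NAME_DIR" ]; then
  echo "name dir OK: $NAME_DIR"
else
  echo "name dir NOT writable: $NAME_DIR"
fi
```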
Answered 2016-10-27T16:42:54.837