
My goal is to start the NameNode daemon. I need to work with the HDFS file system: copy files from the local file system, create directories in HDFS, and run the NameNode daemon on the port specified in the conf/core-site.xml configuration file.
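A minimal core-site.xml along these lines is sketched below; the host one and the RPC port 2000 are taken from the startup log further down ("Namenode up at: one/192.168.1.8:2000"), so treat the exact values as illustrative. The web UI port (50070) that appears later in the log is configured separately, via dfs.http.address in hdfs-site.xml.

<?xml version="1.0"?>
<configuration>
  <!-- fs.default.name sets the NameNode RPC endpoint in Hadoop 1.x;
       host and port here mirror the startup log, adjust as needed -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://one:2000</value>
  </property>
</configuration>

I started the NameNode with the script: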

./hadoop namenode

and as a result I received the following messages:

2013-02-17 12:29:37,493 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = one/192.168.1.8
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2013-02-17 12:29:38,325 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-17 12:29:38,400 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-17 12:29:38,427 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-17 12:29:38,427 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-17 12:29:39,509 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-17 12:29:39,542 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-02-17 12:29:39,633 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-17 12:29:39,635 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-17 12:29:39,704 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-02-17 12:29:42,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2013-02-17 12:29:42,737 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-17 12:29:42,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-17 12:29:42,937 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-17 12:29:42,940 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-17 12:29:45,820 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-17 12:29:46,229 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2013-02-17 12:29:46,836 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-02-17 12:29:47,133 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-02-17 12:29:47,134 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2013-02-17 12:29:47,134 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-02-17 12:29:47,163 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-02-17 12:29:47,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-02-17 12:29:47,479 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-02-17 12:29:47,480 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 6294 msecs
2013-02-17 12:29:47,919 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-02-17 12:29:47,919 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 430 msec
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 6 secs.
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-02-17 12:29:48,198 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-02-17 12:29:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 129 msec
2013-02-17 12:29:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 129 msec processing time, 129 msec clock time, 1 cycles
2013-02-17 12:29:48,280 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-02-17 12:29:48,280 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-02-17 12:29:48,280 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-02-17 12:29:48,711 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-02-17 12:29:48,836 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort2000 registered.
2013-02-17 12:29:48,836 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort2000 registered.
2013-02-17 12:29:48,865 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: one/192.168.1.8:2000
2013-02-17 12:30:23,264 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-02-17 12:30:25,326 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-17 12:30:25,727 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-02-17 12:30:25,997 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-02-17 12:30:26,269 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.BindException: Address already in use
2013-02-17 12:30:26,442 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
2013-02-17 12:30:26,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2013-02-17 12:30:26,446 INFO org.apache.hadoop.ipc.Server: Stopping server on 2000
2013-02-17 12:30:26,446 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-02-17 12:30:26,616 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:722)
2013-02-17 12:30:26,761 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:344)
    at sun.nio.ch.Net.bind(Net.java:336)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:581)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:445)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:353)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:305)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

2013-02-17 12:30:26,784 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at one/192.168.1.8
************************************************************/

Please help me get the NameNode daemon started, so that I can go on to run Hadoop applications.
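For context, once the daemon is up, the commands I need to run look roughly like this (the paths here are hypothetical, purely for illustration):

./hadoop fs -mkdir /user/hadoop/input                    # create a folder in HDFS
./hadoop fs -copyFromLocal data.txt /user/hadoop/input   # copy a local file into HDFS
./hadoop fs -ls /user/hadoop/input                       # check the result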


2 Answers

2013-02-17 12:30:26,761 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use

It looks like another process is already listening on the port the NameNode is trying to bind to (50070, the web UI port, judging by the log). That most likely means an instance of the NameNode process is already running.

You should be able to use the jps -v command to list the Java processes running as the current user, or ps aww | grep java to list all running Java processes.
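A quick diagnostic session could look like the sketch below. The lsof step goes beyond the two commands above and assumes lsof is available; 50070 is the port from the BindException in your log, and the PID is whatever the previous commands report:

jps -v                  # list the current user's Java processes; look for NameNode
ps aww | grep java      # list all running Java processes
lsof -i :50070          # (assumes lsof is installed) show what holds port 50070
kill <pid>              # stop the stale process, or run bin/stop-all.sh and retry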

Answered 2013-02-17T21:15:56.717

Check whether your IP address is correctly mapped in the /etc/hosts file. Look up the address with ifconfig and make sure it maps to the correct DNS name; an incorrect mapping can also raise this error.
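As a concrete example, using the host name and address from the log in the question (one/192.168.1.8), /etc/hosts would be expected to contain entries like the following; take the actual address from your own ifconfig output:

127.0.0.1     localhost
192.168.1.8   one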

Answered 2014-08-26T23:47:26.640