26

What are simple commands to check if Hadoop daemons are running?

For example, if I'm trying to figure out why HDFS is not set up correctly, I'll want a way to check whether the namenode/datanode/jobtracker/tasktracker are running on this machine.

Is there a fast way to check without digging through logs or using ps (on Linux)?

4

9 Answers

18

In the shell, type jps (you may need a JDK installed for jps to be available). It lists all running Java processes, including any Hadoop daemons that are up.
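To turn that into a scripted yes/no check, you can grep the jps output for each expected daemon. A sketch (the heredoc stands in for a captured jps listing; on a real node replace it with `$(jps)`, and adjust the daemon names for your Hadoop version):

```shell
# Sketch: check a jps listing for the expected Hadoop 1.x daemons.
# The heredoc is sample output; use jps_output=$(jps) on a real node.
jps_output=$(cat <<'EOF'
1938 DataNode
1788 NameNode
2349 Jps
EOF
)

for daemon in NameNode DataNode JobTracker TaskTracker; do
  if printf '%s\n' "$jps_output" | grep -qw "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: NOT running"
  fi
done
```

With the sample listing above, this reports NameNode and DataNode as running and the other two as not running.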

Answered 2013-03-21T19:08:00.393
11

If ps -ef | grep hadoop shows that no Hadoop processes are running, run sbin/start-dfs.sh. Then monitor the cluster with hadoop dfsadmin -report (hdfs dfsadmin -report in newer releases):

[mapr@node1 bin]$ hadoop dfsadmin -report
Configured Capacity: 105689374720 (98.43 GB)
Present Capacity: 96537456640 (89.91 GB)
DFS Remaining: 96448180224 (89.82 GB)
DFS Used: 89276416 (85.14 MB)
DFS Used%: 0.09%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Name: 192.168.1.16:50010
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 44638208 (42.57 MB)
Non DFS Used: 4986138624 (4.64 GB)
DFS Remaining: 47813910528(44.53 GB)
DFS Used%: 0.08%
DFS Remaining%: 90.48%
Last contact: Tue Aug 20 13:23:32 EDT 2013


Name: 192.168.1.17:50010
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 44638208 (42.57 MB)
Non DFS Used: 4165779456 (3.88 GB)
DFS Remaining: 48634269696(45.29 GB)
DFS Used%: 0.08%
DFS Remaining%: 92.03%
Last contact: Tue Aug 20 13:23:34 EDT 2013
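If you only need a quick yes/no health signal, the report is easy to parse. A sketch (the heredoc stands in for the real report output; in practice, pipe the command's output into the same sed expressions):

```shell
# Sketch: extract datanode counts from a dfsadmin report.
# The heredoc stands in for `hadoop dfsadmin -report` output.
report=$(cat <<'EOF'
Datanodes available: 2 (2 total, 0 dead)
EOF
)

total=$(printf '%s\n' "$report" | sed -n 's/.*(\([0-9]*\) total.*/\1/p')
dead=$(printf '%s\n' "$report" | sed -n 's/.*, \([0-9]*\) dead).*/\1/p')
echo "datanodes: $total total, $dead dead"
```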
Answered 2013-08-20T17:25:23.243
6

I did not find a great solution for this, so I used

ps -ef | grep hadoop | grep -P  'namenode|datanode|tasktracker|jobtracker'

just to see whether the daemons are running,

and

./hadoop dfsadmin -report

but the latter is only useful once the namenode is up.
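The same check can be wrapped in a small loop with pgrep, which matches against the full java command line. A sketch (the bracketed patterns are the usual trick to keep pgrep from matching this script's own command line; daemon names are Hadoop 1.x):

```shell
# Sketch: per-daemon check with pgrep -f (equivalent to the
# ps -ef | grep pipeline above). The [b]racket trick prevents
# the pattern from matching the script that contains it.
check_daemons() {
  for pattern in '[n]amenode' '[d]atanode' '[j]obtracker' '[t]asktracker'; do
    name=$(printf '%s' "$pattern" | tr -d '[]')
    if pgrep -f "$pattern" >/dev/null 2>&1; then
      echo "$name: running"
    else
      echo "$name: not found"
    fi
  done
}

check_daemons
```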

Answered 2013-03-22T21:47:19.773
6

You can use the jps command, as vipin said, like this:

/usr/lib/java/jdk1.8.0_25/bin/jps  

Of course, replace the path with the one where you installed Java. jps is a nifty tool for checking whether the expected Hadoop processes are running (it has shipped with Sun's Java since 1.5.0).
The result will look something like this:

2287 TaskTracker  
2149 JobTracker  
1938 DataNode  
2085 SecondaryNameNode  
2349 Jps  
1788 NameNode  

I got the answer from this tutorial: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

Answered 2014-12-09T19:33:12.080
5

Apart from jps, another good idea is to use the web interfaces for the NameNode and the JobTracker that Hadoop provides. They not only show you the processes but also give you a lot of other useful info, such as your cluster summary and ongoing jobs. To reach the NameNode UI, point your web browser to "YOUR_NAMENODE_HOST:50070", and for the JobTracker UI to "YOUR_JOBTRACKER_HOST:50030" (those are the Hadoop 1.x default web UI ports; 9000 and 9001 are the default RPC ports, which a browser cannot render).
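A quick scripted version of this check (a sketch; it assumes plain curl is available and the Hadoop 1.x default web UI ports, 50070 for the NameNode and 50030 for the JobTracker — substitute your own hostnames and ports):

```shell
# Sketch: probe a Hadoop daemon web UI and report UP/DOWN.
# Hostnames and the default 1.x web UI ports are assumptions;
# substitute your own cluster's values.
check_ui() {
  if curl -s --max-time 3 -o /dev/null "$1"; then
    echo "$1 is UP"
  else
    echo "$1 is DOWN"
  fi
}

check_ui "http://YOUR_NAMENODE_HOST:50070"
check_ui "http://YOUR_JOBTRACKER_HOST:50030"
```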

Answered 2013-03-21T20:22:17.483
5

Try the jps command. It lists the Java processes that are up and running.

Answered 2014-03-19T15:54:38.403
0

To check whether the Hadoop nodes are running:

sudo -u hdfs hdfs dfsadmin -report

Configured Capacity: 28799380685 (26.82 GB)
Present Capacity: 25104842752 (23.38 GB)
DFS Remaining: 25012056064 (23.29 GB)
DFS Used: 92786688 (88.49 MB)
DFS Used%: 0.37%
Under replicated blocks: 436
Blocks with corrupt replicas: 0
Missing blocks: 0


Datanodes available: 1 (1 total, 0 dead)

Live datanodes:
Name: 127.0.0.1:50010 (localhost.localdomain)
Hostname: localhost.localdomain
Rack: /default
Decommission Status : Normal
Configured Capacity: 28799380685 (26.82 GB)
DFS Used: 92786688 (88.49 MB)
Non DFS Used: 3694537933 (3.44 GB)
DFS Remaining: 25012056064 (23.29 GB)
DFS Used%: 0.32%
DFS Remaining%: 86.85%
Last contact: Thu Mar 01 22:01:38 IST 2018

Answered 2018-03-01T16:38:58.547
0

To check whether the daemons are running:

You can check with the jps command,

or use the commands below:

ps -ef | grep -w namenode

ps -ef | grep -w datanode

ps -ef | grep -w tasktracker 

-w :- will fetch only exact (whole-word) matches
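The -w flag matters because without it a search for "namenode" also matches lines containing, say, "secondarynamenode". A quick demonstration on sample input:

```shell
# Why -w: a bare grep for "namenode" matches both lines below,
# while grep -w matches only the whole-word occurrence.
printf 'java secondarynamenode\njava namenode\n' | grep -w namenode
# matches only the "java namenode" line
```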

If you have superuser privileges, you can also use the one below for the same purpose:

./hadoop dfsadmin -report

Hope this helps!

Answered 2019-05-14T16:30:28.197
-1

Try running this:

for service in /etc/init.d/hadoop-hdfs-*; do $service status; done;
Answered 2015-02-10T04:52:21.433