
My slave VM has gone down, and I suspect it is because DFS usage is at 100%. Can you suggest a systematic way to troubleshoot this? Is it a firewall problem, a capacity problem, or something else, and how do I fix it?

ubuntu@anmol-vm1-new:~$  hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/12/13 22:25:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 845446217728 (787.38 GB)
Present Capacity: 797579996211 (742.80 GB)
DFS Remaining: 794296401920 (739.75 GB)
DFS Used: 3283594291 (3.06 GB)
DFS Used%: 0.41%
Under replicated blocks: 1564
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (4 total, 2 dead)

Live datanodes:
Name: 10.0.1.190:50010 (anmol-vm1-new)
Hostname: anmol-vm1-new
Decommission Status : Normal
Configured Capacity: 422723108864 (393.69 GB)
DFS Used: 1641142625 (1.53 GB)
Non DFS Used: 25955075743 (24.17 GB)
DFS Remaining: 395126890496 (367.99 GB)
DFS Used%: 0.39%
DFS Remaining%: 93.47%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:25:51 UTC 2015


Name: 10.0.1.193:50010 (anmol-vm4-new)
Hostname: anmol-vm4-new
Decommission Status : Normal
Configured Capacity: 422723108864 (393.69 GB)
DFS Used: 1642451666 (1.53 GB)
Non DFS Used: 21911145774 (20.41 GB)
DFS Remaining: 399169511424 (371.76 GB)
DFS Used%: 0.39%
DFS Remaining%: 94.43%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:25:51 UTC 2015


Dead datanodes:
Name: 10.0.1.191:50010 (anmol-vm2-new)
Hostname: anmol-vm2-new
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 0 (0 B)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 21:20:12 UTC 2015


Name: 10.0.1.192:50010 (anmol-vm3-new)
Hostname: anmol-vm3-new
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 0 (0 B)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:09:27 UTC 2015

1 Answer


There is only one filesystem in the VM. Log in as root and work through the steps below (a consolidated sketch follows the list):

  1. df -h (one of the mount points will show ~100% usage)
  2. du -sh /* (it lists the size of each top-level directory)
  3. If any directory other than your namenode and datanode directories is taking up too much space, start cleaning up there
  4. You can also run hadoop fs -du -s -h /user/hadoop (to see the usage of that HDFS directory)
  5. Identify all unnecessary directories and start cleaning up by running hadoop fs -rm -R /user/hadoop/raw_data (-rm deletes, -R deletes recursively; be careful with -R).
  6. Run hadoop fs -expunge (to empty the trash immediately; sometimes it needs to be run several times)
  7. Run hadoop fs -du -s -h / (it gives you the HDFS usage of the entire filesystem), or run hdfs dfsadmin -report again, to confirm that the storage has been reclaimed
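
A consolidated sketch of the steps above, assuming /user/hadoop is your HDFS user directory and /user/hadoop/raw_data is just an example of an expendable directory (substitute your own paths):

# Steps 1-2: find the full mount point and the largest local directories
df -h
sudo du -sh /*

# Steps 3-4: inspect HDFS usage under your user directory
hadoop fs -du -s -h /user/hadoop

# Step 5: remove an expendable HDFS directory (example path; -R is recursive, double-check before running)
hadoop fs -rm -R /user/hadoop/raw_data

# Step 6: empty the HDFS trash so the deleted blocks are actually released
hadoop fs -expunge

# Step 7: confirm that space has been reclaimed
hadoop fs -du -s -h /
hdfs dfsadmin -report

Once enough local disk space has been freed on the affected VM, the dead datanodes usually still need their DataNode service restarted (for example with hadoop-daemon.sh start datanode) before they show up as live again in the dfsadmin report.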
Answered 2015-12-13T23:48:21.390