I am experimenting with a small Hadoop setup of just 2 machines. I am loading about 13 GB of data, a table of roughly 39 million rows, through Hive with a replication factor of 1.
My problem is that Hadoop always stores all of this data on a single DataNode. Only when I raise the dfs.replication factor to 2 using setrep does Hadoop put a copy of the data on the other node. I also tried the balancer ($HADOOP_HOME/bin/start-balancer.sh -threshold 0). The balancer recognizes that it needs to move about 5 GB to balance the cluster, but then reports "No block can be moved. Exiting..." and quits:
2010-07-05 08:27:54,974 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: Using a threshold of 0.0
2010-07-05 08:27:56,995 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.252.130.177:1036
2010-07-05 08:27:56,995 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.220.222.64:1036
2010-07-05 08:27:56,996 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 1 over utilized nodes: 10.220.222.64:1036
2010-07-05 08:27:56,996 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 1 under utilized nodes: 10.252.130.177:1036
2010-07-05 08:27:56,997 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: Need to move 5.42 GB bytes to make the cluster balanced.
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
No block can be moved. Exiting...
Balancing took 2.222 seconds
Can anyone suggest how to achieve an even distribution of the data across Hadoop without replication?
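For reference, these are roughly the commands I ran; the HDFS path below is illustrative (the default Hive warehouse location), not my exact one:

```shell
# Raise the replication factor of the loaded table from 1 to 2, recursively,
# and wait (-w) until the extra replicas have been created on the other node:
hadoop fs -setrep -R -w 2 /user/hive/warehouse/my_table

# Run the balancer with a 0% utilization threshold, i.e. try to make the
# DataNodes' disk usage as equal as possible:
$HADOOP_HOME/bin/start-balancer.sh -threshold 0
```

The setrep step does spread the data, but only by doubling it; the balancer run is the one that produces the "No block can be moved" output above.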