
We restarted the datanodes on our cluster.

We have 15 datanode machines in the Ambari cluster, and each datanode machine has 128G of RAM.

Versions: HDP 2.6.4 and Ambari 2.6.1.

But the datanodes failed to start, with the following error:

Error occurred during initialization of VM
Too small initial heap

This is strange, because dtnode_heapsize is 8G (DataNode maximum Java heap size = 8G), and we can also see this in the log:

-XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192

So we do not understand what is going on.

Is the initial heap size related to the DataNode maximum Java heap size?

Log from a datanode machine:

Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 197804180k(12923340k free), swap 16777212k(16613164k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:GCLogFileSize=1024000 -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:NumberOfGCLogFiles=5 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseGCLogFileRotation -XX:+UseParNewGC 
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker01.sys242.com.out <==
Error occurred during initialization of VM
Too small initial heap
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 772550
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Another log example:

resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start datanode'' returned 1. starting datanode, logging to 
Error occurred during initialization of VM
Too small initial heap

1 Answer


The values you supplied are interpreted as bytes. Without a unit suffix, 8192 means 8192 bytes, not 8 GB. They should be -XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m.
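You can reproduce both behaviors locally (assuming any HotSpot JDK 8 java on the PATH); the unsuffixed value yields exactly the error from the question, while the m suffix starts the JVM with an 8 GB heap:

java -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -version
Error occurred during initialization of VM
Too small initial heap

java -XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m -version
java version "1.8.0_112"
(remaining version output omitted)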

See https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html
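In practice this means the heap options that end up in hadoop-env.sh for the DataNode must carry a unit suffix. A minimal sketch of corrected flags (the exact variable name and template that Ambari renders vary by HDP version, so treat this as an illustration rather than the literal Ambari template):

export HADOOP_DATANODE_OPTS="-XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m ${HADOOP_DATANODE_OPTS}"

Equivalently, -Xms8192m -Xmx8192m, or the raw byte count 8589934592 with no suffix, would give the same 8 GB heap.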

Answered on 2018-12-26T18:27:01.463