I have the following setup:
- Hadoop 1.2.1
- Oracle Java 1.7
- SUSE Linux Enterprise Server 10, 32-bit
If I run the Pi example in standalone mode with
bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 10
then Java dies horribly, telling me:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGFPE (0x8) at pc=0xb7efa20b, pid=9494, tid=3070639008
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) Server VM (24.0-b56 mixed mode linux-x86 )
# Problematic frame:
# C [ld-linux.so.2+0x920b] do_lookup_x+0xab
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/hadoop-1.2.1-new/hs_err_pid9494.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
(full trace here)
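In case it helps with reproducing this: the report says core dumps are disabled, so a sketch of how they could be enabled before re-running (same command and paths as above):

```shell
# Enable core dumps in the current shell, as the crash report suggests,
# then re-run the failing example.
ulimit -c unlimited
bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 10

# Afterwards, inspect the JVM error report written next to the Hadoop install:
cat /opt/hadoop-1.2.1-new/hs_err_pid9494.log
```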
In a distributed setup I can start all the components with start-all and they run fine. But as soon as I submit a job, the JobTracker immediately dies with a java.io.EOFException, which I believe is caused by the same error as above.
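For reference, this is where I looked for the full EOFException stack trace (assuming the default Hadoop 1.x log layout; the exact file name depends on user and host):

```shell
# Hadoop 1.x daemons log under $HADOOP_HOME/logs; the JobTracker log
# should contain the full java.io.EOFException stack trace.
tail -n 100 logs/hadoop-*-jobtracker-*.log
```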
I have tried the same Hadoop on another machine and everything works fine there (although that one runs Arch Linux, 64-bit), and switching to other JVMs (OpenJDK, Java 1.6, Java 1.7) did not help.
Any suggestions?