I am trying to integrate my Elasticsearch 2.2.0 with Hadoop HDFS. My environment has 1 master node and 1 data node, and Elasticsearch is installed on the master node. When integrating it with HDFS, my ResourceManager application job gets stuck in the ACCEPTED state. I found a link suggesting the following changes to the yarn-site.xml settings:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2200</value>
  <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>500</value>
</property>
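From what I have read, one common cause of an application staying in ACCEPTED is that no NodeManager can satisfy the requested container size, so the ApplicationMaster is never placed. A possible sketch of aligning the maximums with the 2200 MB node capacity above — the values here are assumptions for a small single-node setup, not something I have verified on this cluster (note that `yarn.app.mapreduce.am.resource.mb` would go in mapred-site.xml, not yarn-site.xml):

```xml
<!-- Assumed values: cap per-container allocation at the NodeManager
     capacity so a container request can never exceed what one node offers. -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2200</value>
</property>
<!-- In mapred-site.xml: keep the ApplicationMaster's own container
     small enough to fit within the 2200 MB node capacity. -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
```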
I applied these settings as well, but they did not produce the expected result.
Configuration:
My core-site.xml:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
</property>
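As a side note (probably not the cause of the stuck job): `fs.default.name` is the deprecated Hadoop 1.x key; in Hadoop 2.x the preferred key is `fs.defaultFS`, pointing at the same NameNode URI. A sketch of the updated property:

```xml
<!-- Hadoop 2.x replacement for the deprecated fs.default.name key;
     the old key is still honored, but this is the current name. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:54310</value>
</property>
```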
My mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
</property>
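Another observation: `mapred.job.tracker` is an MRv1 (JobTracker) key. Since the symptom involves the ResourceManager, this cluster appears to be running YARN, where the framework is selected with `mapreduce.framework.name` instead. A sketch of what mapred-site.xml would contain in that case — an assumption based on the YARN symptom, not confirmed against this cluster:

```xml
<!-- On a YARN cluster, MRv1 JobTracker settings are ignored;
     this key tells MapReduce jobs to submit to the ResourceManager. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```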
My hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
</property>
Please help me get my RM job into the RUNNING state, so that I can use my Elasticsearch data on HDFS.