
I have a Flume sink that writes logs to HDFS.
I created an agent on a single node.
But it does not run.
Here is my configuration.


# example2.conf: single-node Flume configuration

# Name the components on this agent
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1

# Describe/configure source1
agent1.sources.source1.type = avro
agent1.sources.source1.bind = localhost
agent1.sources.source1.port = 41414

# Use a channel that buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 10000
agent1.channels.channel1.transactionCapacity = 100

# Describe sink1
agent1.sinks.sink1.type = HDFS
agent1.sinks.sink1.hdfs.path = hdfs://dbkorando.kaist.ac.kr:9000/flume

# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
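
As an aside, this sink sets only hdfs.path, so HDFS-sink defaults apply to everything else. A hedged sketch of optional settings that are commonly tuned alongside it (parameter names from the Flume 1.x HDFS sink; the values here are illustrative, not recommendations):

```
# optional HDFS sink tuning (defaults apply when omitted)
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
agent1.sinks.sink1.hdfs.rollInterval = 30
agent1.sinks.sink1.hdfs.rollSize = 1048576
agent1.sinks.sink1.hdfs.rollCount = 0
```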


My command is:

flume-ng agent -n agent1 -c conf -C /home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar -f conf/example2.conf -Dflume.root.logger=INFO,console
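
For readability, here is the same launch command with its flags annotated (flag meanings per the flume-ng launcher script):

```
# -n  agent name; must match the property prefix (agent1) in example2.conf
# -c  configuration directory (flume-env.sh, log4j settings)
# -C  extra classpath entry -- here the Hadoop core jar needed for HDFS access
# -f  the agent's properties file
# -D  JVM system property; routes Flume's root logger to the console at INFO
flume-ng agent -n agent1 -c conf \
    -C /home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar \
    -f conf/example2.conf \
    -Dflume.root.logger=INFO,console
```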

The result is:


Info: Including Hadoop libraries found via (/home/hyahn/hadoop-0.20.2/bin/hadoop) for HDFS access
+ exec /usr/java/jdk1.7.0_02/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/etc/flume-ng/conf:/usr/lib/flume-ng/lib/*:/home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar' -Djava.library.path=:/home/hyahn/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64 org.apache.flume.node.Application -n agent1 -f conf/example2.conf
2012-11-27 15:33:17,250 (main) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 1
2012-11-27 15:33:17,253 (main) [INFO - org.apache.flume.node.FlumeNode.start(FlumeNode.java:54)] Flume node starting - agent1
2012-11-27 15:33:17,257 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider.start(AbstractFileConfigurationProvider.java:67)] Configuration provider starting
2012-11-27 15:33:17,257 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.start(DefaultLogicalNodeManager.java:203)] Node manager starting
2012-11-27 15:33:17,258 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 9
2012-11-27 15:33:17,258 (conf-file-poller-0) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:195)] Reloading configuration file:conf/example2.conf
2012-11-27 15:33:17,266 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
2012-11-27 15:33:17,266 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
2012-11-27 15:33:17,267 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
2012-11-27 15:33:17,268 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:902)] Added sinks: sink1 Agent: agent1
2012-11-27 15:33:17,290 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:122)] Post-validation flume configuration contains configuration for agents: [agent1]
2012-11-27 15:33:17,290 (conf-file-poller-0) [INFO - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadChannels(PropertiesFileConfigurationProvider.java:249)] Creating channels
2012-11-27 15:33:17,354 (conf-file-poller-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.(MonitoredCounterGroup.java:68)] Monitored counter group for type: CHANNEL, name: channel1, registered successfully.
2012-11-27 15:33:17,355 (conf-file-poller-0) [INFO - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadChannels(PropertiesFileConfigurationProvider.java:273)] Created channel channel1
2012-11-27 15:33:17,368 (conf-file-poller-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.(MonitoredCounterGroup.java:68)] Monitored counter group for type: SOURCE, name: source1, registered successfully.
2012-11-27 15:33:17,378 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:70)] Creating instance of sink: sink1, type: HDFS


As shown above, flume-ng appears to stall at the point where the sink is created. What is the problem?


1 Answer


You need to open another window and send an avro command to port 41414:

bin/flume-ng avro-client --conf conf -H localhost -p 41414 -F /home/hadoop1/aaa.txt -Dflume.root.logger=DEBUG,console
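
The avro-client flags, annotated for reference (per the flume-ng launcher's avro-client mode; the client reads the file and, as far as I know, sends one Flume event per line):

```
# --conf  configuration directory, as for the agent
# -H      host the avro source is bound to (localhost in example2.conf)
# -p      port the avro source listens on (41414 in example2.conf)
# -F      file whose lines are sent as events
bin/flume-ng avro-client --conf conf -H localhost -p 41414 \
    -F /home/hadoop1/aaa.txt -Dflume.root.logger=DEBUG,console
```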

Here I have a file named aaa.txt in the directory /home/hadoop1/.

Your sink will read this file and write it to HDFS.
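
As a minimal sketch of the preparation step (the path here is illustrative; the answer uses /home/hadoop1/aaa.txt), create the sample file first, then point the avro client at it while the agent is running:

```shell
# Create a sample file to send (illustrative path; any readable file works)
mkdir -p /tmp/flume-demo
printf 'hello flume\n' > /tmp/flume-demo/aaa.txt
# Verify its contents before sending
cat /tmp/flume-demo/aaa.txt
# Then, with the agent from the question running, send it through the avro source:
#   bin/flume-ng avro-client --conf conf -H localhost -p 41414 -F /tmp/flume-demo/aaa.txt
```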

Answered 2012-12-31T08:33:29.620