
I'm just getting started with Flume and need to insert some headers into events written by the HDFS sink.

I have this working, although the format is wrong and I can't control the columns.

This is the configuration I'm using:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = syslogudp
a1.sources.r1.host = 0.0.0.0
a1.sources.r1.port = 44444

a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = org.apache.flume.interceptor.HostInterceptor$Builder
a1.sources.r1.interceptors.i1.preserveExisting = false
a1.sources.r1.interceptors.i1.hostHeader = hostname

a1.sources.r1.interceptors.i2.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
a1.sources.r1.interceptors.i2.preserveExisting = false

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:9000/user/vagrant/syslog/%y-%m-%d/
a1.sinks.k1.hdfs.rollInterval = 120
a1.sinks.k1.hdfs.rollCount = 100
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text

a1.sinks.k1.serializer = header_and_text
a1.sinks.k1.serializer.columns = timestamp hostname
a1.sinks.k1.serializer.format = CSV
a1.sinks.k1.serializer.appendNewline = true

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

The logs written to HDFS are mostly OK, apart from how they are serialized:

{timestamp=1415574695138, Severity=6, host=PolkaSpots, Facility=3, hostname=127.0.1.1} hostapd: wlan0-1: STA xx WPA: group key handshake completed (RSN)

How can I format the logs so they look like this:

1415574695138 127.0.1.1 hostapd: wlan0-1: STA xx WPA: group key handshake completed (RSN)

That is: timestamp first, then hostname, then the syslog message body.


1 Answer


The reason is that the two interceptors you configured write their values into the Flume event headers, and HeaderAndBodyTextEventSerializer serializes those headers into the body. The latter simply does this:

public void write(Event e) throws IOException {
  out.write((e.getHeaders() + " ").getBytes());
  out.write(e.getBody());
  if (appendNewline) {
    out.write('\n');
  }
}

Concatenating e.getHeaders() into the string just calls the header map's toString(), which produces the brace-delimited key=value dump you are seeing.

To fix this, I'd suggest writing your own serializer and overriding the write() method to format the output as tab-separated values. You then only need to point the configuration at your class:

a1.sinks.k1.serializer = com.mycompany.MySerializer$Builder

and put the jar on Flume's classpath.
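The per-event formatting such a write() override would perform can be sketched standalone, without the Flume dependency. This is a minimal illustration, not the real serializer: the class and method names are hypothetical, and a real implementation would implement org.apache.flume.serialization.EventSerializer and read its column list from the sink's Context. It pulls the timestamp and hostname headers out of the event's header map and emits them tab-separated in front of the raw body:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the formatting logic for a custom serializer.
// A real Flume serializer would implement EventSerializer and receive
// an Event; here the headers and body are passed in directly.
public class HeaderBodyFormatter {

    // Emits "<timestamp>\t<hostname>\t<body>\n", using "-" when a
    // header is missing so the columns stay aligned.
    static String format(Map<String, String> headers, String body) {
        return headers.getOrDefault("timestamp", "-") + '\t'
             + headers.getOrDefault("hostname", "-") + '\t'
             + body + '\n';
    }

    public static void main(String[] args) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("timestamp", "1415574695138");
        headers.put("hostname", "127.0.1.1");
        System.out.print(format(headers,
            "hostapd: wlan0-1: STA xx WPA: group key handshake completed (RSN)"));
    }
}
```

Using tabs (or a single space) instead of the map's toString() is what turns the `{timestamp=..., hostname=...}` dump into the flat line you want.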

answered 2015-02-15T20:47:33.540