
We have the following record transformer configured in our fluentd pipeline:

<filter docker.**>
  @type record_transformer
  enable_ruby true
  <record>
    servername as1
    hostname "#{Socket.gethostname}"
    project xyz
    env prod
    service ${record["docker"]["labels"]["com.docker.compose.service"]}
  </record>
  remove_keys $.docker.container_hostname, $.docker.id, $.docker.image_id, $.docker.labels.com.docker.compose.config-hash, $.docker.labels.com.docker.compose.oneoff, $.docker.labels.com.docker.compose.project, $.docker.labels.com.docker.compose.service
</filter>
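
To confirm that this filter actually injects the fields on each event, a stdout filter can be placed right after it while testing (a debugging sketch using fluentd's built-in filter_stdout; remove it once the records look correct):

<filter docker.**>
  # Passes events through unchanged while printing them to the fluentd log
  @type stdout
</filter>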

We are pushing logs to S3 using the S3 output plugin. Now we want to store the logs on S3 under a custom path, ProjectName/Env/Service, so we configured the S3 output plugin as follows:

<store>
  @type s3
  s3_bucket test
  s3_region us-east-1
  store_as gzip_command
  path logs
  s3_object_key_format %{path}/${project}/${env}/${service}/%Y/%m/%d/%{time_slice}_%{index}.%{file_extension}
  <buffer tag,time,project,env,service>
    @type file
    path /var/log/td-agent/container-buffer-s3
    timekey 300 # 5 minutes
    timekey_wait 1m
    timekey_use_utc true
    chunk_limit_size 256m
  </buffer>
  time_slice_format %Y%m%d%H
</store>

Unfortunately, this is not working for us. Below is the warning log:

{"time":"2021-08-07 17:59:49","level":"warn","message":"chunk key placeholder 'project' not replaced. template:logs/${project}/${env}/${service}/%Y/%m/%d/%{time_slice}_%{index}.gz","worker_id":0}

Looking forward to guidance or any suggestions on this.


1 Answer


This config is correct and it is working for us.

<store>
  @type s3
  s3_bucket test
  s3_region us-east-1
  store_as gzip_command
  path logs
  s3_object_key_format %{path}/${project}/${env}/${service}/%Y/%m/%d/%{time_slice}_%{index}.%{file_extension}
  <buffer tag,time,project,env,service>
    @type file
    path /var/log/td-agent/container-buffer-s3
    timekey 300 # 5 minutes
    timekey_wait 1m
    timekey_use_utc true
    chunk_limit_size 256m
  </buffer>
  time_slice_format %Y%m%d%H
</store>
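
With the chunk keys in place, objects land under keys shaped like logs/xyz/prod/<service>/2021/08/08/2021080817_0.gz, an illustrative key assuming the field values from the question, where <service> resolves from the compose service label and %{time_slice} follows time_slice_format %Y%m%d%H.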
answered 2021-08-08T06:16:32.317