
I want to forward Kubernetes logs from fluent-bit to Elasticsearch via fluentd, but fluent-bit does not parse the Kubernetes logs correctly. I install Fluent Bit and Fluentd with their Helm charts. I tried both stable/fluent-bit and fluent/fluent-bit and hit the same problem:

#0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: mapper_parsing_exception [reason]: 'Could not dynamically add mapping for field [app.kubernetes.io/component]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text].'"

I added the following lines to the fluent-bit values file:

  remapMetadataKeysFilter:
    enabled: true
    match: kube.*

    ## List of the respective patterns and replacements for metadata keys replacements
    ## Pattern must satisfy the Lua spec (see https://www.lua.org/pil/20.2.html)
    ## Replacement is a plain symbol to replace with
    replaceMap:
      - pattern: "[/.]"
        replacement: "_"

...but nothing changed; the same error is still logged.

Is there a workaround to get rid of this error?

Here is my values.yaml:

# Default values for fluent-bit.

# kind -- DaemonSet or Deployment
kind: DaemonSet

# replicaCount -- Only applicable if kind=Deployment
replicaCount: 1

image:
  repository: fluent/fluent-bit
  pullPolicy: Always
  # tag:

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  name:

rbac:
  create: true

podSecurityPolicy:
  create: false

podSecurityContext:
  {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 2020
  annotations:
    prometheus.io/path: "/api/v1/metrics/prometheus"
    prometheus.io/port: "2020"
    prometheus.io/scrape: "true"

serviceMonitor:
  enabled: true
  namespace: monitoring
  interval: 10s
  scrapeTimeout: 10s
  # selector:
  #  prometheus: my-prometheus

resources:
  {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

podAnnotations: {}

priorityClassName: ""

env: []

envFrom: []

extraPorts: []
#   - port: 5170
#     containerPort: 5170
#     protocol: TCP
#     name: tcp

extraVolumes: []

extraVolumeMounts: []

## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit
config:
  ## https://docs.fluentbit.io/manual/service
  service: |
    [SERVICE]
        Flush 1
        Daemon Off
        Log_Level info
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser docker
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

    [FILTER]
        Name    lua
        Match   kube.*
        script  /fluent-bit/etc/functions.lua
        call    dedot
        
  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name          forward
        Match         *
        Host          fluentd-in-forward.elastic-system.svc.cluster.local
        Port          24224
        tls           off
        tls.verify    off

  ## https://docs.fluentbit.io/manual/pipeline/parsers
  customParsers: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
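
For reference, the dedot function in functions.lua that the lua filter above calls is not shown in the chart values; a minimal sketch of such a script, assuming it only replaces dots and slashes in the kubernetes label and annotation keys with underscores (the function name must match the call option), could look like this:

function dedot(tag, timestamp, record)
    -- Only touch records enriched by the kubernetes filter
    if record["kubernetes"] == nil then
        return 0, timestamp, record
    end
    dedot_keys(record["kubernetes"]["labels"])
    dedot_keys(record["kubernetes"]["annotations"])
    return 1, timestamp, record
end

function dedot_keys(map)
    if map == nil then
        return
    end
    local new_keys = {}
    for k, v in pairs(map) do
        -- e.g. app.kubernetes.io/component -> app_kubernetes_io_component
        local dedotted = string.gsub(k, "[./]", "_")
        if dedotted ~= k then
            new_keys[dedotted] = v
            map[k] = nil
        end
    end
    for k, v in pairs(new_keys) do
        map[k] = v
    end
end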

2 Answers


I had the same problem. It is caused by conflicting labels once they are converted to JSON: some pods carry the plain app label while others use the namespaced app.kubernetes.io/... labels, so kubernetes.labels.app ends up mapped as both a string and an object. I renamed the conflicting keys to match the new recommended label format:

<filter **>
  @type rename_key
  rename_rule1 ^app$ app.kubernetes.io/name
  rename_rule2 ^chart$ helm.sh/chart
  rename_rule3 ^version$ app.kubernetes.io/version
  rename_rule4 ^component$ app.kubernetes.io/component
  rename_rule5 ^istio$ istio.io/name
</filter>
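
Note that @type rename_key is not part of core fluentd; it comes from the fluent-plugin-rename-key plugin, so it has to be installed in the fluentd image. A minimal sketch of a custom image, assuming you build on the official fluent/fluentd image (adjust the tag to whatever you deploy):

FROM fluent/fluentd:v1.12-1
USER root
# rename_key for the filter above; elasticsearch for the ES output
RUN gem install fluent-plugin-rename-key fluent-plugin-elasticsearch --no-document
USER fluent
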
answered 2021-01-08T11:26:23.850

I think your problem is not in Kubernetes and not in the fluent-bit/fluentd charts; it is in Elasticsearch, specifically in the mapping.

In Elasticsearch 7.x the same field cannot have different types (string, integer, and so on).

To work around this, I use "ignore_malformed": true in the index template for the Kubernetes log indices.

https://www.elastic.co/guide/en/elasticsearch/reference/current/ignore-malformed.html

Malformed fields are not indexed, but the other fields in the document are processed normally.
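
For example, the setting can be enabled index-wide via index.mapping.ignore_malformed in a legacy index template (the template name and index pattern below are placeholders; adjust them to however your Kubernetes log indices are named):

PUT _template/kubernetes-logs
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.mapping.ignore_malformed": true
  }
}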

answered 2020-07-20T12:03:45.260