I am working with the ELK stack and trying to push WSO2 application logs to Elasticsearch.
I have configured Filebeat to read only the lines that contain DCS.
In the terminal I can see Filebeat logging (DEBUG) that it drops lines because they do not match the provided pattern:
2020-06-25T01:43:10.557+0530 DEBUG [harvester] log/harvester.go:488 Drop line as it does not match any of the include patterns TID: [-1234] [] [2020-06-25 01:43:01,725] INFO {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker} - Startup : syncUdaDataToUsage_OnlyOnce was removed from the Synapse configuration successfully - [ Deployed From Artifact Container: usage-service-capp ] {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker}
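These harvester DEBUG messages only appear when Filebeat runs with debug logging enabled for the harvester selector; a minimal sketch of such an invocation (the -c path is an assumption about the setup):

filebeat -e -c filebeat.yml -d "harvester"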
However, I can still see those same log lines in Kibana.
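To rule out stale documents in Kibana, the index can also be queried directly in Elasticsearch with the standard _search API; just a sketch, assuming the index name uda from the Logstash output below:

curl "http://localhost:9200/uda/_search?q=message:DependencyTracker&pretty"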
filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\WORKSPACE\TransitionServices\wso2ei-6.1.0\repository\logs\wso2carbon.log
  include_lines: [' DCS ']
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
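One note on the input above: per the Filebeat docs, include_lines is a list of regular expressions matched against each raw line, so ' DCS ' only keeps lines where DCS has a literal space on both sides. A slightly looser variant (just a sketch, assuming DCS may also sit next to punctuation) would be:

include_lines: ['\bDCS\b']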
logstash-beat.conf
input {
  beats {
    type => "beats"
    host => "localhost"
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "TID:%{SPACE}\[%{INT:SystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:logLevel}%{SPACE}{org.apache.synapse.mediators.builtin.LogMediator}%{SPACE}-%{SPACE}%{WORD:dataCollector}%{SPACE}%{GREEDYDATA:sequence}%{SPACE}-%{SPACE}%{DATA:logMessage}=%{SPACE}%{GREEDYDATA:responseMessage}%{SPACE}{org.apache.synapse.mediators.builtin.LogMediator}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "uda"
  }
  stdout {
    codec => rubydebug
  }
}
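Before restarting the pipeline, the config file can be syntax-checked with Logstash's built-in test flag (a standard Logstash CLI option; the relative paths are assumptions):

bin/logstash -f logstash-beat.conf --config.test_and_exit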
I do not understand why Filebeat still sends a line to Logstash when that line does not match the include pattern.