I am trying to set up centralized logging with log4j and rsyslog.
What I have so far:
Solr runs inside tomcat6 on RHEL6, using the following log4j and slf4j libraries:
# lsof -u tomcat | grep log4j
java 14503 tomcat mem REG 253,0 9711 10208 /usr/share/java/tomcat6/slf4j-log4j12-1.6.6.jar
java 14503 tomcat mem REG 253,0 481535 10209 /usr/share/java/tomcat6/log4j-1.2.16.jar
java 14503 tomcat mem REG 253,0 378088 1065276 /usr/share/java/log4j-1.2.14.jar
java 14503 tomcat 20r REG 253,0 378088 1065276 /usr/share/java/log4j-1.2.14.jar
java 14503 tomcat 21r REG 253,0 481535 10209 /usr/share/java/tomcat6/log4j-1.2.16.jar
java 14503 tomcat 35r REG 253,0 9711 10208 /usr/share/java/tomcat6/slf4j-log4j12-1.6.6.jar
#
Solr uses the following log4j.properties file (via -Dlog4j.configuration=file:///opt/solr/lib/log4j.properties):
# Logging level
log4j.rootLogger=INFO, file, CONSOLE, SYSLOG
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%-4r [%t] %-5p %c %x \u2013 %m%n
#- size rotation with log cleanup.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.MaxFileSize=4MB
log4j.appender.file.MaxBackupIndex=9
#- File to log to and log format
log4j.appender.file.File=/var/log/tomcat6/solr.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%-5p - %d{yyyy-MM-dd HH:mm:ss.SSS}; %C; %m\n
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop=WARN
# set to INFO to enable infostream log messages
log4j.logger.org.apache.solr.update.LoggingInfoStream=OFF
#- Local syslog server
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.facility=LOCAL1
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=${sysloghostname} %-4r [%t] java %-5p %c %x %m%n
log4j.appender.SYSLOG.Header=true
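(For completeness: the -Dlog4j.configuration property is passed to the tomcat6 JVM through JAVA_OPTS; on RHEL6 that typically lives in /etc/sysconfig/tomcat6, roughly like the line below. The exact location may differ on other setups.)
# /etc/sysconfig/tomcat6 (approximate)
JAVA_OPTS="$JAVA_OPTS -Dlog4j.configuration=file:///opt/solr/lib/log4j.properties"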
On the same server I run rsyslog, which accepts the log messages coming from log4j.
# rpmquery -a | grep syslog
rsyslog-5.8.10-7.el6_4.x86_64
#
rsyslog configuration:
# #### MODULES ####
$MaxMessageSize 32k
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog # provides kernel logging support (previously done by rklogd)
$ModLoad imfile # provides file monitoring support
#
$ModLoad imudp.so
$UDPServerRun 514
$WorkDirectory /var/lib/rsyslog # where to place spool files
# #### GLOBAL DIRECTIVES ####
# # Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$IncludeConfig /etc/rsyslog.d/*.conf
$ActionQueueType LinkedList # run asynchronously
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionResumeRetryCount -1 # infinite retries if host is down
$ActionSendStreamDriverMode 0 # 0 = plain TCP (1 would require TLS)
$ActionSendStreamDriverAuthMode anon # anonymous, no certificate verification
#local1.*;*.* @@(o)XXXXXXXX:5544
local1.* /var/log/remote.log
# # The authpriv file has restricted access.
authpriv.* /var/log/secure
# # Log all the mail messages in one place.
mail.* -/var/log/maillog
# # Log cron stuff
cron.* /var/log/cron
# # Everybody gets emergency messages
*.emerg *
# # Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# # Save boot messages also to boot.log
local7.* /var/log/boot.log
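(As a quick sanity check of the local1 routing, a test message sent with the standard logger utility should show up in /var/log/remote.log. This is just a hypothetical test, not part of the config above.)
logger -p local1.info "log4j/rsyslog routing test"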
I am capturing the local1 messages from Solr's log4j and redirecting them to /var/log/remote.log, and everything works as expected. An example INFO message:
Oct 31 13:57:08 hostname.here 3431839 [http-8080-10] java INFO org.apache.solr.core.SolrCore [collection1] webapp=/solr path=/select params={indent=true&q=*:*&wt=json&rows=1} hits=42917 status=0 QTime=1
Stack traces arrive on the same line as their error message:
Oct 31 12:27:17 hostname.here 157666248 [http-8080-7] java ERROR org.apache.solr.core.SolrCore org.apache.solr.common.SolrException: undefined field *#012#011at org.apache.solr.schema.IndexSchema.getDynamicFieldType(IndexSchema.java:1223)#012... Cut for brevity....#011at java.lang.Thread.run(Thread.java:724)#012
Note the #012 for end-of-line and #011 for tab.
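(As far as I understand, this escaping is rsyslog's default handling of control characters embedded in a received message; it is controlled by the directive below, which is on by default and therefore does not appear in my config.)
# rsyslog default: render embedded control characters as #<octal>,
# e.g. newline -> #012, tab -> #011
$EscapeControlCharactersOnReceive on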
With this setup I can also send the logs over TCP to a remote rsyslog server and pipe them into fluentd/elasticsearch/kibana etc., and everything works as expected.
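(The forwarding rule itself is essentially the commented-out line in the config above; with a placeholder hostname it looks like this, where @@ means plain TCP and a single @ would be UDP.)
local1.* @@logs.example.com:5544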
The problem: I am now trying to get another webapp, running in the same tomcat container, to log as described above. Everything works as expected except for stack traces, where every line of the trace ends up on its own line (i.e. as a separate syslog message):
Oct 31 12:54:47 hostname.here 4909 [main] java ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate could not get database metadata
Oct 31 12:54:47 hostname.here org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure
Oct 31 12:54:47 hostname.here
Oct 31 12:54:47 hostname.here The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
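(I do not have the webapp's source in front of me, but my guess is that it passes the exception to log4j as a separate throwable argument instead of formatting it into the message string. A hypothetical example of that kind of call:)
import org.apache.log4j.Logger;

public class SchemaUpdateExample {
    private static final Logger LOG = Logger.getLogger(SchemaUpdateExample.class);

    void updateSchema() {
        try {
            // ... obtain database metadata ...
        } catch (Exception e) {
            // hypothetical: the throwable is passed as a separate argument,
            // not embedded in the message text itself
            LOG.error("could not get database metadata", e);
        }
    }
}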
The webapp ships with its own log4j library and log4j.xml configuration. The library is the same version that Solr uses.
The log4j.xml file for this application:
<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
<param name="SyslogHost" value="localhost" />
<param name="Facility" value="LOCAL1" />
<param name="Header" value="false" />
<param name="FacilityPrinting" value="false" />
<param name="Threshold" value="DEBUG" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern"
value="%-4r [%t] java %-5p %c %x %m%n"/>
</layout>
</appender>
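(For context, the SYSLOG appender is attached to the root logger elsewhere in the same log4j.xml with the usual construct, roughly like this; the actual reference may be on a specific logger rather than <root>.)
<root>
  <priority value="INFO" />
  <appender-ref ref="SYSLOG" />
</root>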
I would expect stack traces from the new application to appear on a single line, just as they do for Solr.
Does anyone know whether this is a log4j configuration issue?
Many thanks.