I am trying to ingest data from CSV files into Elasticsearch via Logstash. The CSV files contain the column names in their first row. Is there a special way to skip that row when parsing the file? And is there any conditional/filter I could use so that, if an exception occurs, the line is skipped and parsing moves on to the next row?
My config file looks like this:
input {
    file {
        path => "/home/sagnik/work/logstash-1.4.2/bin/promosms_dec15.csv"
        type => "promosms_dec15"
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}
filter {
    csv {
        columns => ["Comm_Plan","Queue_Booking","Order_Reference","Generation_Date"]
        separator => ","
    }
    ruby {
        code => "event['Generation_Date'] = Date.parse(event['Generation_Date']);"
    }
}
output {
    elasticsearch {
        action => "index"
        host => "localhost"
        index => "promosms-%{+dd.MM.YYYY}"
        workers => 1
    }
}
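Since the ruby filter above raises whenever Generation_Date is missing or malformed, one option is to wrap the parse in begin/rescue and tag the event instead of letting the pipeline fail. This is a sketch only, not verified against Logstash 1.4.2; the tag name `_dateparsefailure` is my own choice, and I am assuming the old-style `event['field']` Ruby API shown in the config above:

```
filter {
    ruby {
        code => "
            begin
                event['Generation_Date'] = Date.parse(event['Generation_Date'])
            rescue ArgumentError, TypeError
                # assumption: append a tag so a later conditional can drop the event
                (event['tags'] ||= []) << '_dateparsefailure'
            end
        "
    }
    # drop events whose date failed to parse (hypothetical tag set above)
    if '_dateparsefailure' in [tags] {
        drop { }
    }
}
```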
The first few lines of my CSV file look like this:
"Comm_Plan","Queue_Booking","Order_Reference","Generation_Date"
"","No","FMN1191MVHV","31/03/2014"
"","No","FMN1191N64G","31/03/2014"
"","No","FMN1192OPMY","31/03/2014"
Is there any way I can skip that first line? Also, if my CSV file ends with a newline with nothing after it, I get an error as well. How can I skip such empty lines, whether they appear at the end of the file or as a blank line between two rows?
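One approach I have been considering for both problems is a conditional `drop` filter. This is a sketch under two assumptions: that the csv filter maps the header line's first cell into the `Comm_Plan` field (so the header row is recognizable by its literal column-name value), and that the raw input line is available in the `message` field:

```
filter {
    # after the csv filter runs, the header line parses into a row whose
    # Comm_Plan column holds the literal string "Comm_Plan" -- drop it
    if [Comm_Plan] == "Comm_Plan" {
        drop { }
    }
    # drop empty or whitespace-only lines anywhere in the file
    if [message] =~ /^\s*$/ {
        drop { }
    }
}
```

Would something along these lines work, or is there a more idiomatic way to do it?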