Hoping someone can help me with the following problem:
I have a Logstash conf set up to ingest data with the csv plugin. The values in the incoming date field look like this...
2024-01-09 22:21.04
I then have this logic in the filter plugin that handles the date:
date {
  match => ["cart_received_timestamp", "yyyy-MM-dd HH:mm:ss", "yyyy-MM-dd T HH:mm:ss.SSS Z "]
  target => "@timestamp"
}
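(To test the date parsing on its own, a minimal pipeline like the sketch below should exercise the same logic; the generator input stands in for my real CSV file, and the field name and patterns are copied from above.)

input {
  generator {
    # one synthetic event carrying the same sample value as my CSV
    lines => ["2024-01-09 22:21.04"]
    count => 1
  }
}
filter {
  mutate {
    # copy the generated line into the field name my real conf uses
    add_field => { "cart_received_timestamp" => "%{message}" }
  }
  date {
    match => ["cart_received_timestamp", "yyyy-MM-dd HH:mm:ss", "yyyy-MM-dd T HH:mm:ss.SSS Z "]
    target => "@timestamp"
  }
}
output {
  stdout { codec => rubydebug }
}

Running that with bin/logstash -f <file> prints the resulting event via rubydebug, so the parsed @timestamp (or a _dateparsefailure tag) is easy to see.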
I am getting the following error (I'm using strict in my index to reject invalid data). The input date:
2024-01-09 22:21.04
...and the index's expected format:
y HH:mm:s T HH:mm:ss.SSS ZZ
It seems that Logstash is converting my date to the following format:
2024-01-09T22:21:04.567414642Z
...which causes the error, because it does not match the index mapping's required format for this field.
{"update"=>{"status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:539] failed to parse field [@timestamp] of type [date] in document with id punchout_cart_item_182439 . Preview of field s value: 2024-01-09T22:21:04.567414642Z", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [2024-01-09T22:21:04.567414642Z] with format [ y HH:mm:s T HH:mm:ss.SSS ZZ]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}
I have tried various changes to the format in the conf file (such as ISO8601), as well as the convert option to change the field to date_time; ChatGPT suggested some ruby filter options to change the date format... no change.
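One of the ruby suggestions was roughly along these lines (sketched from memory, so it may not match exactly what I ran; the idea was to rebuild @timestamp from whole epoch seconds so the fractional part never reaches the index):

ruby {
  code => '
    ts = event.get("@timestamp")
    # rebuild the timestamp from whole epoch seconds, dropping the fractional part
    event.set("@timestamp", LogStash::Timestamp.at(ts.to_i)) unless ts.nil?
  '
}

That did not change the error either.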
My current Logstash conf (with *** used to hide sensitive information):
input {
  file {
    id => "bulk_***_carts_items_input"
    path => "/etc/logstash/data/****/*.csv"
    max_open_files => 1
    mode => "read"
    start_position => "beginning"
    exit_after_read => true
    tags => ["bulk-load"]
    type => "csv"
    file_completed_action => "log"
    file_completed_log_path => "/etc/logstash/data/processed-log/processed.log"
  }
}
filter {
  csv {
    skip_empty_rows => "true"
    separator => ","
    columns => [ elastic_index_id , elastic_index_created_date , cart_received_timestamp , cart_item_id , cart_item_quantity , cart_item_description , cart_item_unit_price , cart_item_curren$
    convert => {
      "cart_item_received_timestamp" => "date_time"
      "cart_item_updated_timestamp" => "date_time"
      "cart_received_timestamp" => "date_time"
      "elastic_index_created_date" => "date_time"
      "session_timestamp" => "date_time"
    }
  }
  date {
    match => ["cart_received_timestamp", "ISO8601"]
    target => "@timestamp"
  }
  mutate {
    remove_field => ["[event]", "[type]", "[host]", "[message]", "[log]"]
  }
}
output {
  elasticsearch {
    cloud_id => "${ES_CLOUD_ID}"
    cloud_auth => "${ES_CLOUD_USERNAME}:${ES_CLOUD_PASSWORD}"
    index => "******-bulk-1"
    action => "update"
    doc_as_upsert => true
    document_id => "%{elastic_index_id}"
  }
  stdout { codec => rubydebug }
}