I have logs with this format:

```
hostname field1=value field2=01 field3="value" ...
```
For events from the same source, the order and the number of fields vary, with several dozen different layouts. It is therefore not possible to handle this with extractors: chained grok patterns (grok | grok) do not return all the fields in all cases, and it becomes unmanageable.
Since all events start the same way (a hostname with no key/value) followed only by key/value pairs, the single variation being that values may or may not be quoted, I figured it should be possible to parse these events in a pipeline.
However, so far I have only used pipelines to replace prefixes; I don't see how to perform this normalization with a pipeline.
What I would like:
- For the first field: hostname: value
- For the others: extract the key and value, as an extractor would, but automatically for all fields, since they share the same syntax apart from the quotes.
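For reference, this is the kind of rule I have in mind (a sketch only; capturing the leading hostname with `regex()` and feeding the rest to `key_value()` is my assumption of how it could work):

```
rule "hostname + KV parsing"
when
  has_field("message")
then
  // capture the leading hostname (first token) and the rest of the line
  let m = regex("^(\\S+)\\s+(.*)$", to_string($message.message));
  set_field("hostname", m["0"]);
  // parse the remaining key=value pairs, stripping surrounding quotes from values
  set_fields(
    fields: key_value(
      value: to_string(m["1"]),
      trim_value_chars: "\""
    )
  );
end
```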
Can you help me?
I created a pipeline "A" with one rule:
```
rule "KV parsing"
when
  has_field("kv")
then
  set_fields(
    fields: key_value(
      value: to_string($message.message),
      trim_value_chars: "\""
    )
  );
end
```
I then connected the pipeline to the stream that matches logs on gl2_source_input.
Logs arrive in the stream, but nothing enters the pipeline.
I updated the rule to:
```
rule "KV parsing"
when
  has_field("message")
then
  set_fields(
    fields: key_value(
      value: to_string($message.message),
      trim_value_chars: "\""
    )
  );
end
```
But I still see "Throughput = 0 msg/s". The stream receiving the events is connected to the pipeline, the pipeline contains this rule, and the rule works in the simulator.
I don't see what I'm forgetting.
In the simulator, pasting the raw string into the message field of an event works, but I only get:

```
timestamp  2020-05-15T21:51:30.132Z  1589550878
```
Is this normal?
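In case it helps to see where things go wrong, here is a variant of the rule with a `debug()` call (as I understand it, `debug()` writes to the Graylog server log, so this is only for tracing whether live messages ever reach the rule):

```
rule "KV parsing (debug)"
when
  has_field("message")
then
  // write the raw message to the Graylog server log to confirm the rule fires
  debug(concat("KV parsing saw: ", to_string($message.message)));
  set_fields(
    fields: key_value(
      value: to_string($message.message),
      trim_value_chars: "\""
    )
  );
end
```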