Hi Everyone,
While evaluating the Graylog2 server, I found that there are different ways of parsing data:
- Input -> Extractor -> Stream -> Pipeline
- Input -> Pipeline -> Stream
In the first scenario, I use the JSON extractor to extract data and stream rules to get messages into a stream.
Afterwards I use pipeline steps to set the timestamp to the real log timestamp and to do some other manipulations.
Configuration order is:
Message Filter Chain
Pipelines
GeoIP Resolver
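To illustrate the pipeline step in this first scenario, my rule looks roughly like this (the field name `log_timestamp` and the date pattern are just placeholders for my actual data):

```
rule "set real log timestamp"
when
  has_field("log_timestamp")
then
  // parse the field the extractor created and overwrite the message timestamp
  // (field name and date pattern are placeholders)
  set_field("timestamp", parse_date(to_string($message.log_timestamp), "yyyy-MM-dd'T'HH:mm:ss.SSSZ"));
end
```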
In the second scenario, the pipeline is used to route the messages into a stream, extract the data, and manipulate it.
Configuration order is:
Pipelines
Message Filter Chain
GeoIP Resolver
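In this second scenario, a single rule would replace both the extractor and the stream rules; a rough sketch (the stream name is a placeholder):

```
rule "route and parse json"
when
  has_field("message")
then
  // extract the JSON payload into message fields (replaces the JSON extractor)
  let json = parse_json(to_string($message.message));
  set_fields(to_map(json));
  // route into the target stream (replaces the stream rules)
  route_to_stream(name: "my-app-stream");
end
```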
I read a topic from the beginning of 2016 saying that extractors should go end of life; is that true?
I thought that in the above examples the first solution should be faster, because not all messages have to go all the way through the pipeline.
Are there any recommendations on which is the preferred solution?
Kind regards,
Christian