Hello,
When I enable an extractor to split mod_security logs, after an hour or two my journal grows too large and Graylog can't consume all the logs.
I already have a lot of extractors from years ago without any problem, but this one crashes Graylog.
I added RAM to the graylog2 process (2 GB of Xmx right now), but the problem persists.
Nothing in the logs. I have to restart Graylog and disable the extractor to recover.
The extractor is a grok pattern:
I've tried sending the logs directly as JSON, so the fields arrive already split, but I use Filebeat (with the Logstash output) to ship the logs, and I don't know how to configure that.
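For what it's worth, a Logstash pipeline that parses the message as JSON before forwarding to Graylog could look roughly like this. This is only a sketch: the `beats` input, `json` filter, and `gelf` output are standard Logstash plugins, but the port numbers, the `modsec` target field, and the hostname are assumptions you would need to adapt (and it assumes the log line itself is valid JSON).

```
input {
  beats {
    port => 5044            # Filebeat's Logstash output points here
  }
}

filter {
  json {
    source => "message"     # parse the raw line into separate fields
    target => "modsec"      # nest parsed fields to avoid name collisions
  }
}

output {
  gelf {
    host => "graylog.example.com"   # hypothetical Graylog server
    port => 12201                   # GELF input port
  }
}
```

With the fields already split by Logstash, the Graylog-side extractor (and its grok cost) could be dropped entirely.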
Here is an earlier post about GROK performance with a solution that seems related:
There are a couple of things you can do to increase performance, such as using NOTSPACE instead of DATA (where possible) and strictly defining the beginning (^) and end ($) of your GROK pattern.
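To illustrate the advice above, here is a hypothetical pattern for a space-delimited line (not the poster's actual extractor). DATA compiles to a lazy `.*?`, which backtracks heavily on lines that don't match; NOTSPACE compiles to `\S+` and, combined with the `^`/`$` anchors, lets the engine reject non-matching lines quickly:

```
# Slower: unanchored, DATA can backtrack across the whole line
%{DATA:client} %{DATA:rule_id} %{GREEDYDATA:msg}

# Faster: anchored at both ends, NOTSPACE cannot cross a space
^%{NOTSPACE:client} %{NOTSPACE:rule_id} %{GREEDYDATA:msg}$
```

The field names `client`, `rule_id`, and `msg` are placeholders; the structural change (anchors plus NOTSPACE) is the point.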
I double-checked the journal usage:
An hour after I applied your advice (start and end anchors, and NOTSPACE), the process buffer started to fill (CPU at 100% on the Graylog process).
Once the process buffer reached 65536 (100%), the journal started to grow.
In the last 30 minutes, I have received only about 4700 messages on this input.
I don't understand why there is a bottleneck.
I have also had GROK hang a processor buffer. On the Node page, choose More actions, then Get processor-buffer dump. Normally those threads show as idle, but when I was getting system hangs, the dump would show the message it was hung up on…