Grok optimization

These are the timings for the Grok patterns that I have. Are these bad, or good?

and this one:

My 4 nodes together process about 5000 msg/s. If they receive more than 6000 or 7000, the disk journal starts filling up.
Shouldn't 4 gl-nodes with 8 vCPUs each be able to process more messages than that?

Hi @cantipop

please do not hijack threads. It looks like you are asking for help determining whether your Grok patterns can be optimized. That is a separate topic.

That is why I have moved it out of that thread.



To me, the first screen capture looks grim, but in the second capture the figures don't look so bad.

I don't know how to optimize GROK inputs, which is why I switched most of my GROK patterns to plain regexes. That alone gave me a 10-fold performance improvement, and I got a further 2-3x improvement after optimizing the regexes themselves.
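To illustrate the kind of regex tuning meant here, this is a small sketch (not the poster's actual patterns, and the log line and patterns are invented for the example): a loose pattern full of greedy `.*` wildcards, the style a generic Grok expansion often produces, versus a hand-tuned version that is anchored and uses specific character classes.

```python
import re
import timeit

# Hypothetical Apache-style access log line.
line = '192.168.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'

# Loose pattern: greedy .* groups force the engine to backtrack a lot.
loose = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>.*)\] "(?P<req>.*)" (?P<status>\d+) (?P<size>\d+)'
)

# Hand-tuned pattern: anchored at both ends, negated character classes
# ([^\]]+, [^"]+) instead of greedy dot-star, exact digit counts.
tight = re.compile(
    r'^(?P<ip>[\d.]+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d{3}) (?P<size>\d+)$'
)

# Both extract the same fields; the tight one does less backtracking.
for name, pat in (("loose", loose), ("tight", tight)):
    t = timeit.timeit(lambda: pat.match(line), number=100_000)
    print(f"{name}: status={pat.match(line).group('status')}  {t:.3f}s for 100k matches")
```

The exact speedup depends on the input, but the general principle (anchors plus negated character classes instead of greedy wildcards) carries over directly to Graylog extractor regexes.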

My only guess for optimizing GROK would be to make sure that ALL log lines actually contain all the expected fields, since failed matches consume a lot of resources for nothing.
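The "failed matches are expensive" point can be demonstrated in miniature (again an invented example, not Graylog internals): on a line that will never match, an unanchored pattern with greedy wildcards retries from many positions before giving up, while an anchored pattern fails almost immediately.

```python
import re
import timeit

# A log line that does NOT have the expected fields (no bracketed
# timestamp, no quoted request), repeated to make the cost visible.
bad_line = "kernel: eth0 link is up 1000Mbps full duplex " * 20

# Unanchored, greedy: on failure the engine backtracks and then retries
# the whole pattern from every later starting position in the string.
slow = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>.*)\] "(?P<req>.*)"')

# Anchored: the attempt at position 0 fails fast and we are done.
fast = re.compile(r'^(?P<ip>[\d.]+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)"')

t_slow = timeit.timeit(lambda: slow.search(bad_line), number=10_000)
t_fast = timeit.timeit(lambda: fast.search(bad_line), number=10_000)
print(f"unanchored failed match: {t_slow:.3f}s  anchored failed match: {t_fast:.3f}s")
```

If a stream mixes several log formats, routing each format to its own extractor (or pre-filtering by a cheap condition) avoids paying this failed-match cost on every message.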

That seems a bit low for that many nodes, but I have found that Grok patterns can have a huge impact if not done correctly. We do use some, but have learned it is MUCH faster to send the logs preformatted as GELF from the source. That is not possible in all instances, but it is a huge benefit where the option is available; NXLOG does a great job of this when configured for it.

In general we process around 10-15k messages per second on just two nodes with around 4 CPUs and 16 GB of memory each, and even with those specs the nodes generally sit at around 50% utilization most of the time.
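For anyone unfamiliar with what "preformatted GELF" means: the sender builds the fields itself, so Graylog never has to run an extractor. A minimal GELF 1.1 payload is just JSON with a few required keys (`version`, `host`, `short_message`) and custom fields prefixed with `_`. The sketch below builds such a payload; the host/port are placeholders for your own GELF UDP input, and the actual send is commented out so the snippet runs offline.

```python
import json
import time

# Placeholders -- point these at your own Graylog GELF UDP input.
GRAYLOG_HOST, GRAYLOG_PORT = "graylog.example.com", 12201

def gelf_message(short_message, host, **extra):
    """Build a GELF 1.1 dict; keyword args become _-prefixed additional fields."""
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
        "level": 6,  # syslog severity: informational
    }
    msg.update({f"_{k}": v for k, v in extra.items()})
    return msg

payload = json.dumps(
    gelf_message("user login", "web01", user="alice", src_ip="10.0.0.5")
).encode()

# A small message fits in one UDP datagram (larger ones need GELF chunking):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(payload, (GRAYLOG_HOST, GRAYLOG_PORT))
print(payload.decode())
```

Because the fields arrive already named (`_user`, `_src_ip`), no Grok or regex extraction runs on the Graylog side at all, which is where the throughput gain comes from.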