Process Buffer Full - How do I fault-find?

Hi All,
Help, I have an issue where something doesn't seem quite right!
My process buffer is full, but my server doesn't look like it's processing any inputs.
My understanding is that input and output buffer issues point at Elasticsearch. So what should I be looking for when trying to solve process buffer issues?
I've tried different buffer settings, with no luck.
These are Docker containers. When running top I can see a user 1100 (which I believe is the Graylog default) and a Java process stuck at 100% CPU, but it must be single-threaded, as it's only using one core of my 4-core server.
Any ideas all?
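One way to see what that 100%-CPU Java process is actually doing is to look at per-thread CPU with `top -H` and match the hot thread against a thread dump. A rough sketch, assuming the container is named `graylog` and Java runs as PID 1 inside it (both are assumptions, adjust for your setup):

```shell
# 1. List per-thread CPU inside the container to find the busy thread ID:
#      docker exec graylog top -H -b -n 1 | head -n 20
#
# 2. jstack reports native thread IDs in hex ("nid=0x..."), so convert
#    the decimal TID from top to hex before searching the thread dump:
tid=12345                      # example TID taken from the top -H output
hex_tid=$(printf '%x' "$tid")
echo "nid=0x$hex_tid"          # prints: nid=0x3039
#
# 3. Pull the matching stack trace from a thread dump:
#      docker exec graylog jstack 1 | grep -A 20 "nid=0x$hex_tid"
```

If the hot thread turns out to be a processbuffer processor stuck in a regex/Grok match, that points at an extractor or pipeline rule rather than Elasticsearch.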




What kind of processing are you running? Regex, Grok, or any other lookups? What plugins do you have installed? How many streams do you have?

Hi, so I don't believe I have any plugins, and I have 3 extractors.
I only have one pfSense box pointing syslogs at it, with Snort also configured.
All of the extractors are Grok extractors. The Snort one I wrote myself; the other two were lifted from a GitHub guide on setting this up.

I thought it was one of the lookup extractors I picked up from a GitHub guide on pfSense, but I removed them and I still have the same issue.

If you can point me to where to look next, that'll be awesome!

Thanks - Pete

So after doing more hunting, I've tried replacing the over-complicated Grok pattern I found online (it was cascaded, one pattern linked to another, etc., but all in one extractor) with a different method: many extractor rules using regular expressions.
It seems to have seriously dropped my CPU and memory usage!

What is best practice here: many extractors or many Groks?
Do regular expressions perform better than Grok patterns?

I’ll keep testing, thanks for the input.


hey @psfletchthetek

You can't give a general rule here, but most people underestimate the complexity of the regex that a simple Grok pattern expands into.

If you are able to write regex rules, use them. It is very likely that you will write them more efficiently (in terms of CPU usage) than the wide-matching regexes those Grok patterns compile down to.

That would make sense; the Grok rule looked very complex.
I've changed this to many extractor rules with regex patterns, and the server hasn't gone above 500MHz, where before it was locked at 4GHz with an ever-increasing disk journal!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.