Processors buffer configuration, process buffer 100%

(Cantemir Pop) #1

Hello, where can I find documentation on the output/process/input buffer processor settings?
I am running version 1.3.4 on 4 Graylog nodes with 8 vCPUs each.
The problem is that the process buffer is at 100% and the Graylog Java process is using all the CPU. Messages are piling up in the disk journal, and during the day we have over 10 million messages pending in the cache (disk journal).
Is there some tuning we can do?
Or is there documentation on the settings for the output/process/input buffer processors?
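For reference, the buffer processor counts live in Graylog's `server.conf`. The values below are only a sketch for an 8 vCPU node, not a tested recommendation; a common rule of thumb is to keep the sum of the three processor counts at or below the core count:

```
# server.conf -- buffer processor sizing (counts here are assumptions for 8 vCPUs)
processbuffer_processors = 5
outputbuffer_processors = 2
inputbuffer_processors = 1

# Size of the ring buffers; must be a power of two.
ring_size = 65536

# "blocking" uses far less idle CPU than "busy_spinning" at a small latency cost.
processor_wait_strategy = blocking
```

After changing these, restart graylog-server and watch the buffer utilization on the Nodes page.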


(jtkarvo) #2

You could first try the hints in this thread:

(Cantemir Pop) #3

My Elasticsearch cluster resides on different machines, and there is no significant load on it. If Elasticsearch were having trouble ingesting data, I would expect to see that in the load or in the Graylog output buffer usage, but that is not the case.
I think I need more CPU power or more Graylog nodes.


(jtkarvo) #4

Did you already check your extractors, as suggested in the thread I linked?

(Rafaelcarsetimo) #5

@jtkarvo and @cantipop.

Thanks for all the help. My real problem was an extractor. I built a cluster with 5 Graylog servers and 3 Elasticsearch nodes to balance the devices on my network (almost 200 devices, 55K messages per minute), and by elimination I discovered one extractor that was stalling the process buffer. After rewriting its regex, everything has worked fine, at least for the last 12 hours.
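A "stuck" extractor is very often a regex that backtracks heavily. A minimal Python sketch (the log line and patterns are hypothetical, not from this cluster) of the difference between a greedy catch-all pattern and an anchored one:

```python
import re

# Hypothetical log line an extractor might parse.
line = "app=payments level=ERROR msg=timeout"

# Backtracking-prone style: the greedy .* first swallows the whole line,
# then the engine must step backwards to find "level=". On lines that
# almost-but-not-quite match, this retrying is what burns CPU.
slow = re.compile(r".*level=(\w+).*")

# Safer style: match only the literal prefix and the characters that can
# actually appear in the value, so the engine never rescans the line.
fast = re.compile(r"\blevel=(\w+)")

print(slow.match(line).group(1))   # the captured level from the greedy pattern
print(fast.search(line).group(1))  # the same value, without the backtracking
```

Both extract the same field; the second form simply gives the regex engine no opportunity to retry across the whole message.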


(Jan Doberstein) #6

3 posts were split to a new topic: Grok optimization

(Cleyton) #7

I had this same problem too. Following a suggestion I found in an online group, I installed the ntp package to synchronize the clocks of my servers (my Graylog server and two Elasticsearch servers). After that, the process buffer never filled up again.
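For anyone following along, clock sync usually just means installing an NTP client and pointing it at a pool. A minimal `/etc/ntp.conf` sketch (the pool hostnames are the public defaults; swap in your local time source if you have one):

```
# /etc/ntp.conf -- minimal sketch; replace the pools with your own time source
driftfile /var/lib/ntp/ntp.drift
pool 0.pool.ntp.org iburst
pool 1.pool.ntp.org iburst
```

Running `ntpq -p` afterwards shows the peers and current offsets, which is a quick way to confirm all nodes agree on the time.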

(Jan Doberstein) #8

That is the reason why our multi-node guide recommends exactly that:

We highly recommend that the system time on all systems is kept in sync via NTP or a similar mechanism. Needless to say that DNS resolution must be working, too. Because everything is a freaking DNS problem