65536 messages in process buffer, 100% utilized

@masterdou

You may want to look into this.
https://docs.graylog.org/v1/docs/multinode-setup

I understand now why you have this issue: your server cannot keep up. If you are trying to do this with one node, you need to increase these settings. Processing the logs basically creates threads based on the number of CPUs installed on that node. It is advisable that the buffer processor counts add up to the number of CPU cores on the Graylog server:

processbuffer_processors = 7
outputbuffer_processors = 3
inputbuffer_processors = 2

Those settings would indicate I have at least 12 CPU cores on my Graylog server, as stated in the documentation. If you have the resources, you could try to increase those values, preferably processbuffer_processors.
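Before sizing those buffer settings, it helps to confirm how many cores the node actually has. A quick check on a typical Linux host (assuming GNU coreutils is installed, which provides `nproc`):

```shell
# Count the CPU cores available to the Graylog process (Linux).
nproc

# Cross-check against the kernel's view of the processors.
grep -c ^processor /proc/cpuinfo
```

If the two numbers differ (e.g. in a container with CPU limits), size the buffer processors to what `nproc` reports, since that is what the JVM will see.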

OK, thanks for your advice @gsmith.
If I decide to build a cluster, there could be more than 350 GB of data (maybe 600 GB or more, because I have not yet set up receiving from all network devices).
Could you please give some advice about the cluster scale? 3 Graylog servers/MongoDB/ES, 5 Graylog servers/MongoDB/ES, or more?

Yes. Please look here…

https://docs.graylog.org/v1/docs/multinode-setup

Hi @masterdou,

my Graylog system (4.3.7) handles a similar amount of data as a single node.

Outgoing traffic, last 30 days: 3.4 TiB

There were problems with that too.
I have set the Java heap size to 8 GB for both Elasticsearch AND Graylog.
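For reference, on a package-based install those heap sizes are usually set in two separate places. The paths below are the common Debian/Ubuntu defaults and may differ on your system, so treat this as a sketch rather than exact instructions:

```
# Graylog: JVM heap via /etc/default/graylog-server
GRAYLOG_SERVER_JAVA_OPTS="-Xms8g -Xmx8g"

# Elasticsearch: a file under /etc/elasticsearch/jvm.options.d/
-Xms8g
-Xmx8g
```

Both services need a restart after changing heap sizes, and -Xms/-Xmx should match to avoid heap resizing pauses.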

I optimized the logs before they arrive in Graylog (only store what is really needed).
But there may be no way around a distributed system. :wink:
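As one example of trimming logs before they reach Graylog: if the senders use rsyslog, debug-level messages can be discarded at the source. This is a sketch only; the severity threshold and file name are assumptions to adapt to your environment:

```
# /etc/rsyslog.d/10-drop-debug.conf
# Discard debug-severity messages (numeric severity 7) before forwarding.
if $syslogseverity >= 7 then stop
```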
