Graylog Processing is slow

Hi All,
I have a centralized log server to which all my logs are rsynced. I have a Graylog setup that collects and forwards logs from this server.

My architecture is as below:

1 Graylog server running on the same centralized server, with multiple Filebeat instances.
24 CPU, 64 GB RAM
Graylog node - 15 GB heap

3 ES data nodes, each with 32 CPU, 64 GB RAM, and a 31 GB heap.

I find that my process buffer is always full, and my journal fills up as well. I get the point that my Graylog is unable to handle the huge volume of incoming logs.
I am planning to set up a separate Graylog cluster of 3 servers.

Kindly help me with any other improvements to be done.

Sharing the configs:


processbuffer_processors = 5
outputbuffer_processors = 10
ring_size = 524288

inputbuffer_ring_size = 524288
inputbuffer_processors = 4
inputbuffer_wait_strategy = blocking

output_batch_size = 50000
elasticsearch_shards = 3
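For reference, those settings already dedicate 19 ring-buffer threads (5 + 10 + 4) on a 24-core box that is also running Filebeat, and a permanently full process buffer usually means processing, not output, is the bottleneck. A possible rebalance, as a sketch only (the exact numbers are assumptions to validate against your own CPU usage, not tested recommendations):

```
# Shift threads from output to processing; keep the total below the core count.
processbuffer_processors = 12
outputbuffer_processors = 4
inputbuffer_processors = 2

# 50000 messages per bulk request is very large; smaller batches flushed
# more often tend to put less pressure on Elasticsearch.
output_batch_size = 5000
```

The idea is that output threads mostly wait on Elasticsearch bulk requests, so a handful is usually enough, while processing (extractors, pipelines, stream routing) is CPU-bound and benefits from the extra threads.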

Elasticsearch:
3 nodes, all acting as data nodes, with 1 of them also acting as the master node.
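At this ingest rate, keeping the master role off the busiest data nodes helps cluster stability. A sketch of the node-role settings in elasticsearch.yml, assuming a pre-7.x Elasticsearch (the syntax changed to node.roles in 7.x+):

```
# Node 1: master-eligible and data (the pinned master in a small cluster)
node.master: true
node.data: true

# Nodes 2 and 3: data-only, so a long GC pause on a data node
# cannot also take down the elected master
node.master: false
node.data: true
```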

There is a very large difference between the in and out message counts. Kindly let me know if I can make some config changes to improve the performance.

Log volume - 900 GB/day
Filebeat instances - 4
Using 4 Filebeat instances to read the logs of 13 folders.
P.S. - Each folder contains the logs of 1 production server.
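To put 900 GB/day in perspective, a back-of-the-envelope calculation (the 500-byte average message size is an assumption for illustration, not a measured value):

```python
# Rough sustained throughput implied by 900 GB/day of logs.
BYTES_PER_DAY = 900 * 10**9
SECONDS_PER_DAY = 86400
AVG_MSG_BYTES = 500  # hypothetical average message size

mb_per_second = BYTES_PER_DAY / SECONDS_PER_DAY / 10**6
msgs_per_second = BYTES_PER_DAY / SECONDS_PER_DAY / AVG_MSG_BYTES

print(f"{mb_per_second:.1f} MB/s")     # roughly 10.4 MB/s sustained
print(f"{msgs_per_second:.0f} msg/s")  # roughly 20833 msg/s sustained
```

So a single Graylog node has to process on the order of 20k messages per second around the clock, before accounting for traffic peaks, which is why the journal fills whenever processing falls behind.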

Are there any config changes that would raise my out message count?

@jan Can you please help. Thanks in advance.


Sorry, for that log volume I'll not provide free help - but I'll point you to the contact form:

I guess that someone from the community can help here too.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.