Answer to: Graylog journal is fully utilized and there are millions of unprocessed messages

Hi All,

If you see on a node that disk journal utilization is above 100%, millions of messages are flooding in faster than they can be processed, and the running Graylog server cannot drain the backlog. In that case, increase the Graylog journal size from 5 GB to 10 GB and restart the server. Even though the incoming message rate is still high, all messages will gradually be processed, because the larger journal can hold more messages until the whole backlog has been processed and written to Elasticsearch.
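For reference, this is a minimal sketch of the change in Graylog's server.conf, assuming a fairly default installation; the file path, the exact 10gb value, and the service name are assumptions and may differ in your setup:

```
# /etc/graylog/server/server.conf  (path may differ in your installation)

# Allow the on-disk journal to buffer more messages while Elasticsearch catches up.
message_journal_max_size = 10gb

# Optional: confirm the journal directory sits on a disk with enough free space.
message_journal_dir = /var/lib/graylog-server/journal
```

After changing the value, restart the server (for example `sudo systemctl restart graylog-server` on a package-based install, or recreate the container if Graylog runs in Docker). Make sure the volume holding the journal directory actually has the extra space available, otherwise you just trade one problem for a disk-full one.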

Please let me know if you have any other suggestions.

Please rephrase your question.

I guess that your Elasticsearch is not able to process the messages at the volume you ingest.

I think it wasn’t a question as much as an attempt at a tip…

@benvanstaveren
Yes, you are right. A previous post on the same topic had been closed, so I could not reply there and started a new thread instead. This is the answer to that issue, but any further suggestions you can add would be welcome.

Hi @jan, yes you are right, because I am using only one node for Graylog, where the graylog, elasticsearch, and mongo services run as containers, and the node has 4 CPUs and 30 GB RAM. Increasing the disk journal size lets the node store more messages until all incoming messages are processed, giving it more time to write them to Elasticsearch. But this time even the 10 GB journal filled up, because of a huge flood of messages that the process buffer cannot keep up with, since its maximum (ring) size is 65536. I am thinking of upgrading the machine type from r3.large to c5 or m4 in AWS, or adding a second node with graylog, ES, and mongo and putting it behind the load balancer so that incoming messages are routed to two nodes. Is that the right approach?
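For completeness, these are the buffer- and output-related settings in server.conf that I would review before (or alongside) scaling the hardware. The values below are only a sketch mirroring common Graylog defaults, not tuned recommendations; verify them against your version's server.conf:

```
# server.conf processing/buffer settings (values shown mirror common Graylog
# defaults and are a starting point only; tune to your CPU count and workload)

# Size of the in-memory ring buffers; must be a power of two.
ring_size = 65536

# Threads that parse/process incoming messages (bounded by available CPUs).
processbuffer_processors = 5

# Threads that write processed messages to Elasticsearch.
outputbuffer_processors = 3

# Messages sent to Elasticsearch per bulk request.
output_batch_size = 500
```

On a 4-CPU node there is not much headroom to raise the processor counts, which is why moving to a larger instance type or adding a second node behind the load balancer tends to help more than config tuning alone.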
