Graylog cluster: process buffer at 100%, stops processing messages

Hello @Totally_Not_A_Robot, thanks for your answer and help. I was off for some days; that's why I'm replying late.

Maybe it's better to open another thread to try to get help from the community. I'll do that if you say so.

I need to tell you guys that I'm very new to Graylog. I was looking for a logging solution and ended up with Graylog. I installed it and started sending some logs, and the first issue was processing stopping: I had to delete the journal every time, and since then I've never had a working day without needing to delete it.
Now I still have the journal issue, plus another error on search, with the message below:
Error Message:

Unable to perform search query {"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]}

In terms of resources, I'm using a machine with:

  • 16 GB RAM
  • 50 GB HDD (OS, Graylog, Elasticsearch)
  • 1TB HDD (MongoDB)
  • 4 CPU(s)

For the Java heap size, I haven't changed anything.

Below is the top information:

top - 14:09:03 up 11 days, 23:58, 1 user, load average: 2.42, 2.83, 2.96
Tasks: 396 total, 1 running, 395 sleeping, 0 stopped, 0 zombie
%Cpu(s): 16.2 us, 20.7 sy, 0.0 ni, 62.5 id, 0.7 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16267292 total, 417584 free, 6004436 used, 9845272 buff/cache
KiB Swap: 4063228 total, 3982844 free, 80384 used. 8970252 avail Mem

This is my first installation, and in the future I'm planning to use a well-balanced Graylog installation to forward logs to our corporate SIEM.

Regards,

Yes, please, you really should start a new thread for this. Or an admin like @jan could split your post off into a separate thread…

Your journal is filling up because the messages aren't making it into Elasticsearch. Your new log messages support this theory.
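One common cause worth ruling out (a sketch, assuming a single-node Elasticsearch on its default localhost:9200 and a reasonably recent version): if that shared 50 GB disk got full enough to trip Elasticsearch's flood-stage disk watermark, Elasticsearch puts a write block on the indices, nothing gets indexed, and Graylog can only queue into the journal. You can check for the block, and clear it once you've freed disk space:

# Look for indices that Elasticsearch has marked read-only
curl -s 'http://localhost:9200/_settings?pretty' | grep read_only_allow_delete

# After freeing disk space, clear the write block on all indices
curl -s -XPUT 'http://localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'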

Have you verified that even one message has made it into Elasticsearch, and that you can query it through Graylog?
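A quick way to check from the shell (again assuming the default localhost:9200):

# Cluster health: red means shards are down, which matches "all shards failed"
curl -s 'http://localhost:9200/_cluster/health?pretty'

# List indices: with Graylog's default index set you should see graylog_0, graylog_1, ...
# A docs.count above zero means messages are actually reaching Elasticsearch.
curl -s 'http://localhost:9200/_cat/indices?v'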

  • 50 GB HDD (OS, Graylog, Elasticsearch)
  • 1TB HDD (MongoDB)

This is exactly the wrong way around. MongoDB is TINY; it only contains the configuration. Elasticsearch will contain all the logs and will grow huge.
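The usual fix on a package install is to point Elasticsearch's data directory at the big disk. A sketch, where /data/elasticsearch is a hypothetical mount point on your 1 TB disk and the config path assumes the standard deb/rpm layout:

# /etc/elasticsearch/elasticsearch.yml
# Store index data on the 1 TB disk instead of the 50 GB system disk
path.data: /data/elasticsearch

Stop Elasticsearch first, move the existing contents of /var/lib/elasticsearch to the new location, make sure the directory is owned by the elasticsearch user, then restart.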


Check that you have enough disk space, and that your journal size setting is such that the whole journal fits on the disk. If you have these configured correctly, you should never need to delete the journal. When the journal fills up, the Graylog node simply stops accepting more messages/log lines, and once Elasticsearch has been able to ingest the journal, the node starts accepting new messages/log lines again.
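For reference, those settings live in Graylog's server.conf (typically /etc/graylog/server/server.conf on package installs). A sketch with the stock values, which you'd size to the free space on the journal's disk:

# /etc/graylog/server/server.conf
# Where the journal is stored; this filesystem needs headroom
message_journal_dir = /var/lib/graylog-server/journal

# Hard cap on journal size; keep it comfortably below the disk's free space
message_journal_max_size = 5gb

# Messages older than this are dropped from the journal
message_journal_max_age = 12h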