Hello all, I have found a fun problem with a process buffer that fills up.
From watching over some time, it looks like the difference between the input and output of the process buffer should be added to the buffer usage, but over time the numbers do not add up.
The screenshots are from the system while it had its processing paused:
`input - output != usage`
The system is running the official Docker images, version 2.4.6; ES is an AWS ES domain running 5.5.
I’m not using any pipelines, so I know I did not create an event loop.
If anyone has an idea, I’d love to know what it might be.
org.graylog2.buffers.process.usage is the current number of messages in the process buffer, not how many messages have been processed in a given time. If the process buffer fills up but the number of outgoing messages is low, you have a problem with the indexing performance of your Elasticsearch cluster.
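To illustrate the semantics described above, here is a minimal sketch (my own model, not Graylog's actual implementation) of a process buffer whose usage gauge is the running difference between cumulative in and out counters. The point is that the invariant `usage == input - output` only holds exactly when the counters are updated atomically with the buffer; dashboards that sample per-second rates instead of the cumulative counters can show the small drift described in this thread.

```python
# Hypothetical model of a process buffer: usage is a gauge, while
# input/output are cumulative counters. Names are illustrative only.
class ProcessBuffer:
    def __init__(self):
        self.cumulative_in = 0   # total messages ever enqueued
        self.cumulative_out = 0  # total messages ever dequeued
        self.usage = 0           # current number of messages in the buffer

    def enqueue(self, n=1):
        self.cumulative_in += n
        self.usage += n

    def dequeue(self, n=1):
        self.cumulative_out += n
        self.usage -= n

buf = ProcessBuffer()
buf.enqueue(10)
buf.dequeue(7)

# With atomic counter updates the invariant holds exactly:
assert buf.usage == buf.cumulative_in - buf.cumulative_out  # 3 == 3
```

If the UI instead subtracts rounded input and output *rates* sampled over a time window, the result can disagree with the usage gauge by a few events per second without anything actually being wrong, which may account for part of the discrepancy in the screenshots.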
I get that, and that is part of the problem I see. If I take the input minus the output, I should get a number that is close to the usage. But when you look at the two screen grabs, what I see is that the usage grows by 1 to 4 eps while the output trails the input by only 2 events. One other funny behavior: this is a Graylog cluster with three nodes, and when I shut down the node having the issue, the problem seems to migrate to one of the other nodes that was working just fine until the shutdown of the problem node. The inputs are CloudTrail and CloudWatch.
The problem seems to be connected to the first instance that is started; it only affects one of the three servers in the cluster at a time.