Buffer utilization is 100% on all nodes with a backlog

Hi All,

We have a heavy backlog on more than one node: some nodes have a backlog of more than 2M messages, some more than 10M. When I click on a node I see that the process and output buffers are 100% utilized. Is there anything we can change to solve this? I already read the existing thread on this issue but could not follow it.

Graylog version - 2.4.6
Elasticsearch - 5.6

[screenshot of the node overview showing 100% process and output buffer utilization]

the simple answer:

check your outputs (if you have any) and give Elasticsearch more resources.
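
For reference, a quick way to check whether Elasticsearch itself is the bottleneck is to look at cluster health and at bulk thread pool rejections. This is only a sketch and assumes Elasticsearch is reachable on localhost:9200; adjust the host for your setup.

    # Cluster health: status, unassigned shards, pending tasks
    curl -s 'http://localhost:9200/_cluster/health?pretty'

    # Bulk thread pool: a growing "rejected" count means Elasticsearch
    # cannot keep up with the indexing load Graylog sends it
    curl -s 'http://localhost:9200/_cat/thread_pool/bulk?v&h=node_name,active,queue,rejected'

A permanently full output buffer on the Graylog side very often just means Elasticsearch is pushing back on bulk indexing requests.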

@jan

We already have plenty of resources.

@jan which outputs…?

@jan, how can we configure the buffers…?

The question is: do you have any configured?

Go to System/Outputs and check. If not, you might want to look at the batch_size

And at the buffer workers

The total number of processors should not exceed the available CPUs, and the batch size should be roughly the median number of messages you receive within the flush interval.
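
For reference, those knobs live in Graylog's server.conf and need a graylog-server restart to take effect. A minimal sketch; the values are only illustrative (they assume a node with around 12 CPU cores), so tune them to your hardware and message rate:

    # Buffer workers: together they should not exceed the available CPU cores
    processbuffer_processors = 6
    outputbuffer_processors = 3
    inputbuffer_processors = 2

    # Elasticsearch output batching: output_batch_size should be roughly the
    # number of messages that arrive within output_flush_interval (seconds)
    output_batch_size = 2000
    output_flush_interval = 1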

@jan Thanks for the reply.

In System/Outputs I found some entries, but what do I have to do?

Graylog processes all the outputs in sequence, and the last one is the output to Elasticsearch.

If you have outputs configured and they are not very responsive (i.e. slow), this will slow down your processing.
If possible, disable all outputs and see if this helps to speed your system up.

@jan
sorry, but there is no disable button.

The outputs are always bound to a stream, so unfortunately you need to check your streams to see whether one of the outputs is configured on them.
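
If clicking through every stream is tedious, the outputs attached to each stream can also be listed over the REST API. A sketch, assuming Graylog 2.x with the API at the default http://127.0.0.1:9000/api path; replace admin:password and the stream ID placeholder with your own values:

    # List all streams (the "id" field of each stream is what you need)
    curl -s -u admin:password 'http://127.0.0.1:9000/api/streams'

    # List the outputs attached to a single stream
    curl -s -u admin:password 'http://127.0.0.1:9000/api/streams/<stream-id>/outputs'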

@jan I disabled all the outputs but still no luck. The backlog is still very high, and process buffer and output buffer utilization is still at 100%.

You might need to restart Graylog after you have disabled the outputs (just try it on a single node first).
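
For completeness, on a package-based install the restart usually looks like this (assuming the default graylog-server service name):

    # systemd
    sudo systemctl restart graylog-server

    # older init systems
    sudo service graylog-server restart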

still no luck @jan

But after restarting the node, I found something in the log.

2018-10-02T20:36:21.580Z WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Beats - DmsBatchJobs ( Extra Syncer ), type=org.graylog.plugins.beats.BeatsInput, nodeId=hj3c02f5-9878-45b0-a788-8bfe9c9223d2} should be 2097152 but is 212992.
2018-10-02T20:36:21.580Z WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Beats (Tc's & docker01-04 ), type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 2097152 but is 212992.
2018-10-02T20:36:21.580Z WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input GELFTCPInput{title=Gelf Tcp (Dms01, Dms02), type=org.graylog2.inputs.gelf.tcp.GELFTCPInput, nodeId=hj3c02f5-9878-45b0-a788-8bfe9c9223d2} should be 1048576 but is 212992.

What do the numbers mean…?

That shows that the receive buffers on those inputs are smaller than they should be for the messages coming in, so you need to raise the receive buffer size on the named inputs to handle the bigger messages.

How can we raise the receive buffer size, @jan?
Sorry, I am a little bit confused, that's why I am asking. Also, how can we select the exact inputs by name?

Just read the message: two Beats inputs and one GELF TCP input. Open those inputs and you can edit the receive buffer size.

@jan

It's already set to 2097152. So why do the logs say it should be 2097152 but is 212992?

Taking a guess here, but considering that all three of those buffers are capped at 212992, it’s likely a system-level setting capping TCP buffer sizes to that limit.

If that is the case, the way to change that limit is going to vary based on your OS, the service manager you’re using to run Graylog, etc.
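
On a Linux host the cap is usually net.core.rmem_max, which defaults to 212992 bytes on many distributions and matches the value in the warning exactly. A sketch of how to raise it (run as root; restart graylog-server afterwards so the inputs re-bind their sockets with the larger buffer):

    # Check the current system-wide maximum for socket receive buffers (bytes)
    sysctl net.core.rmem_max

    # Raise it for the running system
    sysctl -w net.core.rmem_max=2097152

    # Persist the change across reboots
    echo 'net.core.rmem_max = 2097152' > /etc/sysctl.d/99-graylog.conf
    sysctl --system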
