Problem with Graylog cluster

Hi. Hopefully there is someone here who knows the deeper workings of Graylog and might be able to point me in the right direction.
I inherited a quite large Graylog cluster. It consists of 10 hardware data nodes, each running 1 Graylog instance and 4 Elasticsearch instances. In addition, there are 3 master VMs which also run Elasticsearch and Graylog.
The problem appeared when we started replacing some of the old hardware nodes. We created an 11th node, fully via Ansible, identical to the other 10. When we add one additional Elasticsearch instance to the cluster, it works just fine for around 30-60 minutes, and then Graylog simply stops processing new logs: the output buffer fills up and the journal starts growing, on all 10 nodes simultaneously. When I stop or restart the additional ES instance, processing resumes immediately on all 10 nodes. This happens with or without any roles assigned to the ES instance. Otherwise the cluster seems to recognize the new node just fine; on the ES side everything looks all green and butterflies.
I have dug deep into the logs, even enabled full debug, and can't find anything of interest in either ES or GL, at either the "stoppage" time or the "restart" time.
The only difference right now between the 10 old nodes and the new one is that the new one is in a different network segment, but both the old and new segments have any<>any rules in both directions.
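For context, this is roughly how I've been verifying from the Graylog side that the new node has actually joined (the `localhost:9200` address is just a placeholder for whatever ES HTTP endpoint you have):

```shell
# Assumes an ES HTTP endpoint on localhost:9200 -- adjust to your setup.
ES="http://localhost:9200"

# Cluster-wide status: should be green, with the expected node count.
curl -s -m 5 "$ES/_cat/health?v"

# Per-node view: the 11th node should show up with its IP and roles.
curl -s -m 5 "$ES/_cat/nodes?v&h=ip,name,node.role,master"
```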
Graylog: 3.3.0
Elasticsearch: 6.8.7

Thanks in advance, if someone cares to think along.

:wave: Hmmm, this seems like a tough one. Given the symptoms, this has me leaning toward it being an ES issue. Are you getting anything in the logs from the ES nodes at that time?

Hi, thanks for the answer!

No, I don’t see anything of relevance in ES logs at the time when the stoppage occurs.

I actually do see some errors on the Graylog master instance which I somehow previously missed. They seem to match the times when the new ES node is running. Sadly, this error doesn't give me much to go on for further debugging.
This time, processing stopped around the same time:

2020-11-30T19:19:55.720+02:00 ERROR [IndexerClusterCheckerThread] Uncaught exception in periodical
org.graylog2.indexer.ElasticsearchException: Unable to read Elasticsearch node information
    at org.graylog2.indexer.cluster.jest.JestUtils.execute(…) ~[graylog.jar:?]
    at org.graylog2.indexer.cluster.jest.JestUtils.execute(…) ~[graylog.jar:?]
    at org.graylog2.indexer.cluster.Cluster.catNodes(…) ~[graylog.jar:?]
    at org.graylog2.indexer.cluster.Cluster.getFileDescriptorStats(…) ~[graylog.jar:?]
    at org.graylog2.periodical.IndexerClusterCheckerThread.checkOpenFiles(…) ~[graylog.jar:?]
    at org.graylog2.periodical.IndexerClusterCheckerThread.doRun(…) ~[graylog.jar:?]
    at org.graylog2.plugin.periodical.Periodical.run(…) [graylog.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(…) [?:1.8.0_121]
    at java.util.concurrent.FutureTask.runAndReset(…) [?:1.8.0_121]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(…) [?:1.8.0_121]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(…) [?:1.8.0_121]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(…) [?:1.8.0_121]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(…) [?:1.8.0_121]
    at java.lang.Thread.run(…) [?:1.8.0_121]
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method) ~[?:1.8.0_121]
    at java.net.SocketInputStream.socketRead(…) ~[?:1.8.0_121]
    at java.net.SocketInputStream.read(…) ~[?:1.8.0_121]
    at java.net.SocketInputStream.read(…) ~[?:1.8.0_121]
    at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(…) ~[graylog.jar:?]
    at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(…) ~[graylog.jar:?]
    at org.apache.http.impl.io.SessionInputBufferImpl.readLine(…) ~[graylog.jar:?]
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(…) ~[graylog.jar:?]
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(…) ~[graylog.jar:?]
    at org.apache.http.impl.io.AbstractMessageParser.parse(…) ~[graylog.jar:?]
    at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(…) ~[graylog.jar:?]
    at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(…) ~[graylog.jar:?]
    at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(…) ~[graylog.jar:?]
    at org.apache.http.protocol.HttpRequestExecutor.execute(…) ~[graylog.jar:?]
    at org.apache.http.impl.execchain.MainClientExec.execute(…) ~[graylog.jar:?]
    at org.apache.http.impl.execchain.ProtocolExec.execute(…) ~[graylog.jar:?]
    at org.apache.http.impl.execchain.RedirectExec.execute(…) ~[graylog.jar:?]
    at org.apache.http.impl.client.InternalHttpClient.doExecute(…) ~[graylog.jar:?]
    at org.apache.http.impl.client.CloseableHttpClient.execute(…) ~[graylog.jar:?]
    at org.apache.http.impl.client.CloseableHttpClient.execute(…) ~[graylog.jar:?]
    at io.searchbox.client.http.JestHttpClient.executeRequest(…) ~[graylog.jar:?]
    at io.searchbox.client.http.JestHttpClient.execute(…) ~[graylog.jar:?]
    at org.graylog2.indexer.cluster.jest.JestUtils.execute(…) ~[graylog.jar:?]
    … 13 more
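The failing call in that trace is Graylog polling ES node info over HTTP (`Cluster.catNodes`), so one thing worth trying is timing those endpoints by hand while the 11th node is in the cluster, to see whether ES itself starts answering slowly before Graylog gives up (the ES address is a placeholder, point it at any of your instances):

```shell
# Time the node-info endpoints that Graylog's checker thread depends on.
# ES address is an assumption -- adjust to a real ES HTTP endpoint.
ES="http://localhost:9200"
for path in /_nodes /_cat/nodes; do
  # -w '%{time_total}' prints the total request time in seconds.
  t=$(curl -s -m 10 -o /dev/null -w '%{time_total}' "$ES$path")
  echo "GET $path took ${t:-n/a}s"
done
```

Running this in a loop on the master while the 11th node is active would show whether response times creep up toward a timeout threshold.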

I also have this error spamming the master every 30 seconds or so; not sure if it's related, as it has been going on for a long time already:

ERROR [IndexerClusterCheckerThread] Error while trying to check Elasticsearch disk usage.Details: null

A little update.
I ended up moving the 11th node into the same network segment as the rest of the cluster. The problem seems to have disappeared: the 11th ES node is running and GL is still happily processing, 2+ hours in already.
It might be some timeout after all, although the two networks are very closely connected. I'm still not sure whether it's on the GL or the ES side in this case. Is it expected that there are no error messages when timeouts occur?
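In case it helps someone later: if this turns out to be a plain HTTP timeout between Graylog and ES, the knobs on the Graylog side live in server.conf. These are the 3.x defaults as far as I know, so verify against your own config; a read timeout firing would also line up with the 60s socket timeout:

```
# Graylog -> Elasticsearch HTTP client timeouts (3.x defaults, from memory)
elasticsearch_connect_timeout = 10s
elasticsearch_socket_timeout = 60s
elasticsearch_idle_timeout = -1s
```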

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.