Elasticsearch connection issues in server.log

Hi
I’m getting Graylog errors like these:

2020-10-20T14:24:58.066+03:00 ERROR [IndexerClusterCheckerThread] Error while trying to check Elasticsearch disk usage.Details: Unable to read Elasticsearch node information
2020-10-20T14:25:38.471+03:00 ERROR [IndexFieldTypePoller] Couldn't get mapping for index <graylog_21>: Read timed out.
2020-10-20T14:25:57.947+03:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
org.graylog2.indexer.ElasticsearchException: Couldn't collect indices for alias gl-system-events_deflector
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:54) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:65) ~[graylog.jar:?]
at org.graylog2.indexer.indices.Indices.aliasTarget(Indices.java:336) ~[graylog.jar:?]
at org.graylog2.indexer.MongoIndexSet.getActiveWriteIndex(MongoIndexSet.java:204) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:144) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_262]
at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_262]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_262]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_262]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_262]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_262]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_262]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_262]
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[?:1.8.0_262]
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) ~[?:1.8.0_262]
at java.net.SocketInputStream.read(SocketInputStream.java:171) ~[?:1.8.0_262]
at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_262]
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) ~[graylog.jar:?]
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) ~[graylog.jar:?]
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282) ~[graylog.jar:?]
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) ~[graylog.jar:?]
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) ~[graylog.jar:?]
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) ~[graylog.jar:?]
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) ~[graylog.jar:?]
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165) ~[graylog.jar:?]
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) ~[graylog.jar:?]
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) ~[graylog.jar:?]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) ~[graylog.jar:?]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) ~[graylog.jar:?]
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111) ~[graylog.jar:?]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[graylog.jar:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[graylog.jar:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) ~[graylog.jar:?]
at io.searchbox.client.http.JestHttpClient.executeRequest(JestHttpClient.java:151) ~[graylog.jar:?]
at io.searchbox.client.http.JestHttpClient.execute(JestHttpClient.java:77) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:49) ~[graylog.jar:?]
... 15 more
2020-10-20T14:25:58.107+03:00 ERROR [IndexerClusterCheckerThread] Error while trying to check Elasticsearch disk usage.Details: Unable to read Elasticsearch node information

I guess it’s some kind of performance issue (the write counter dropped to 0), but the Elasticsearch cluster looks fine: all indices are green, there are no errors in the Elasticsearch logs, and I can perform any request with curl.
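For example, checks like these all come back clean (a rough sketch of what I run; http://localhost:9200 stands in for my actual Elasticsearch endpoint):

curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s 'http://localhost:9200/_cat/indices/graylog_*?v'
curl -s 'http://localhost:9200/_nodes/stats/fs?pretty'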
So I’m confused about how anyone is supposed to debug this; the diagnostics above don’t seem to point anywhere useful.
Any ideas on how to find the root cause, please?

2020-10-20T14:25:38.471+03:00 ERROR [IndexFieldTypePoller] Couldn't get mapping for index <graylog_21>
Do you have this index “graylog_21”? Reindexing may solve the problem… you should also have a graylog_22; is graylog_21 empty or nonexistent?

Yes, I have this index, and it’s green; that’s why I’m confused.
I should mention it’s not a persistent issue; it happens from time to time.
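Next time it happens I’ll try to capture what Elasticsearch is doing at that exact moment, something along these lines (again assuming localhost:9200), to see whether thread pools are queueing or the JVM is stuck in GC:

curl -s 'http://localhost:9200/_cat/thread_pool?v&h=node_name,name,active,queue,rejected'
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'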

The index is green, but look inside: go to the Indices menu and open “Default index set”.
There you may see the “wrong” active index; in your case there should be a graylog_22 (empty or not active).
Normally, rotating the active write index solves the issue.
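You can also verify from the Elasticsearch side which index the deflector alias currently points at (assuming the default graylog index prefix; adjust the host as needed):

curl -s 'http://localhost:9200/_cat/aliases/graylog_deflector?v'
curl -s 'http://localhost:9200/_cat/indices/graylog_*?v'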


OK, I can imagine it was a single-index issue, and sure, I can fix it manually today, tomorrow, and day after day… but that’s what happens when I don’t find the root cause and only deal with the consequences.
But doesn’t this ‘Error while trying to check Elasticsearch disk usage.Details: Unable to read Elasticsearch node information’ mean that the whole Elasticsearch cluster was unavailable?
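If it really is a read timeout on the Graylog side rather than Elasticsearch being down, maybe raising the HTTP client timeouts in graylog.conf would at least confirm that (illustrative values; if I’m not mistaken these are the relevant settings):

elasticsearch_connect_timeout = 10s
elasticsearch_socket_timeout = 120s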

