Graylog, Elasticsearch and Bad file descriptor

Hi there, I’m running Graylog 3.0.2 (1 master + 2 slave nodes) and 7 Elasticsearch 6.5.4 nodes (3 master + 4 data nodes). I’ve noticed strange warnings and errors in /var/log/elasticsearch/graylog.log on one of my Elasticsearch data nodes:
[WARN ][o.e.i.c.IndicesClusterStateService] [elasticsearch-data-3] [[graylog_1400][0]] marking and sending shard failed due to [shard failure, reason [refresh failed source[schedule]]] Bad file descriptor
	at … ~[?:?]
	at … ~[?:?]
	at … ~[?:?]
	Suppressed: Bad file descriptor
		at … ~[?:?]
		at … ~[?:?]
		at …$1.close(…) ~[?:?]
		at …$… ~[?:?]
		at jdk.internal.ref.CleanerImpl$PhantomCleanableRef.performCleanup(…) ~[?:?]
		at jdk.internal.ref.PhantomCleanable.clean(…) ~[?:?]
		at … ~[?:?]
(most class and file names in the frames above were lost when pasting the trace)

[WARN ][o.e.i.IndexService ] [elasticsearch-data-3] [graylog_1400] failed to run task refresh - suppressing re-occurring exceptions unless the exception changes
org.elasticsearch.index.engine.RefreshFailedEngineException: Refresh failed

Only one data node shows these errors; I haven’t noticed them on the other data nodes.
Should I be worried about these errors, and what could cause them?
P.S. I already checked the data disk on that VM with fsck - no errors.
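For reference, the message itself is just the OS errno EBADF. It can be reproduced in isolation (a minimal sketch, nothing Elasticsearch-specific) to show what the kernel is actually reporting: a process tried to use a file descriptor that is no longer valid, which is what Lucene runs into when a file handle goes away underneath Elasticsearch:

```shell
# Minimal local reproduction of EBADF ("Bad file descriptor"):
# fd 3 is opened, then closed, so the next attempt to write through it fails.
exec 3>/dev/null                   # open fd 3
exec 3>&-                          # close fd 3 again
msg=$( { echo test >&3; } 2>&1 )   # capture the shell's error message
echo "$msg"
```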

My first advice would be to upgrade to the latest stable 6.x release.

You should also check the file descriptor limit (max_file_descriptors / max open files) actually in effect on that node.
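One way to see the limit each node is really running with is the nodes stats API (host and port are assumptions here; adjust for your cluster). The query is shown in a comment, and a trimmed sample response is used below so the parsing step can be demonstrated:

```shell
# Query each node's effective fd limit (assumes Elasticsearch on localhost:9200):
#   curl -s 'http://localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors&pretty'
# Trimmed sample response for one node, so the extraction can be shown locally:
response='{"nodes":{"node1":{"process":{"max_file_descriptors":65536}}}}'
# Pull out the numeric limit; values well below 65536 are a common cause of trouble.
limit=$(printf '%s' "$response" | grep -o '"max_file_descriptors":[0-9]*' | grep -o '[0-9]*$')
echo "$limit"
```

Comparing this value across all data nodes quickly shows whether the failing node is configured differently from the healthy ones.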

Checked /usr/lib/systemd/system/elasticsearch.service, as I’m using systemd and it overrides /etc/sysconfig/elasticsearch.
LimitNOFILE is set to 65536 # Specifies the maximum file descriptor number that can be opened by this process
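Side note: if that value ever needs raising, the usual systemd approach is a drop-in override rather than editing the packaged unit file, which package upgrades can overwrite. A minimal sketch (standard drop-in path for a unit named elasticsearch.service; run `systemctl daemon-reload` and restart the service afterwards):

```
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitNOFILE=65536
```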

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.