Graylog is not processing messages to Elasticsearch

Hello,
Graylog 2.4.6-1
ES 5.6.10
The Out counter shows 0 msg/s.
Elasticsearch has about 400 pending tasks while the cluster status is “green”:
insertOrder  timeInQueue  priority  source
46444693     4.8s         HIGH      put-mapping
46444694     4.8s         HIGH      put-mapping
46444695     4.8s         HIGH      put-mapping
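For reference, the queue above can be dumped with the cluster pending-tasks API. A minimal Python sketch, assuming an unauthenticated node on localhost:9200 (adjust the URL and credentials for your own cluster):

```python
# Minimal sketch: list Elasticsearch pending cluster tasks.
# Assumes an unauthenticated node on localhost:9200.
import requests

ES_URL = "http://localhost:9200"  # assumed endpoint

resp = requests.get(f"{ES_URL}/_cluster/pending_tasks", timeout=10)
resp.raise_for_status()

# Each entry mirrors one row of the queue shown above.
for task in resp.json()["tasks"]:
    print(task["insert_order"], task["time_in_queue_millis"],
          task["priority"], task["source"])
```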

Any help will be appreciated

What do the Graylog and Elasticsearch logs say?

After a few hours the pending tasks disappear and Graylog works fine again.
The ES log is flooded with warnings like the one below, even when ES is not “locked”:
[2019-12-09T22:05:03,191][WARN ][o.e.d.i.m.TypeParsers ] Expected a boolean [true/false] for property [index] but got [not_analyzed]
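That warning usually points at an index template or custom mapping that still uses the Elasticsearch 2.x string syntax ("index": "not_analyzed"), which 5.x no longer accepts. A sketch of the 5.x equivalent; the index name graylog_example, the field source_ip, and the message document type are placeholders, not taken from this thread:

```python
# Sketch only: replace the ES 2.x string syntax with the ES 5.x keyword type.
# "graylog_example", "source_ip" and the "message" type are placeholder names.
import requests

ES_URL = "http://localhost:9200"  # assumed endpoint

# Old 2.x style that triggers the warning:
#   {"type": "string", "index": "not_analyzed"}
# 5.x equivalent: a keyword field with a boolean "index" flag.
mapping = {"properties": {"source_ip": {"type": "keyword", "index": True}}}

resp = requests.put(f"{ES_URL}/graylog_example/_mapping/message",
                    json=mapping, timeout=10)
resp.raise_for_status()
print(resp.json())
```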
In the Graylog log there are several kinds of errors:
ERROR: org.graylog2.plugin.lookup.LookupDataAdapter - Couldn’t start data adapter spamhaus-drop/5bfa9311d7d9eb0001f6b5d0/@75e498c5
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Spamhaus service is disabled, not starting (E)DROP adapter. To enable it please go to System / Configurations.
at org.graylog.plugins.threatintel.adapters.spamhaus.SpamhausEDROPDataAdapter.doStart(SpamhausEDROPDataAdapter.java:68) ~[?:?]

ERROR: org.graylog2.shared.buffers.processors.DecodingProcessor - Unable to decode raw message RawMessage

ERROR: org.graylog2.indexer.messages.Messages - Caught exception during bulk indexing: java.net.SocketTimeoutException: Read timed out, retrying

ERROR: org.graylog2.indexer.messages.Messages - Failed to index [1] messages. Please check the index error log in your web interface for the reason. Error: One or more of the items in the Bulk request failed, check BulkResult.getItems() for more information

You might want to check your Elasticsearch logs, @ludaca.

I’d bet a penny that no disk space is left in the Elasticsearch cluster.
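A quick way to check that, as a sketch under the same localhost assumption:

```python
# Sketch: show disk usage per data node via the _cat/allocation API.
# Assumes an unauthenticated node on localhost:9200.
import requests

ES_URL = "http://localhost:9200"  # assumed endpoint

resp = requests.get(f"{ES_URL}/_cat/allocation",
                    params={"v": "true",
                            "h": "node,disk.used,disk.avail,disk.percent"},
                    timeout=10)
resp.raise_for_status()
print(resp.text)
```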

The problem is resolved: the ES index.mapping.total_fields.limit parameter had been increased, which produced a lot of pending tasks.
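For anyone who lands here later, a minimal sketch for checking that setting and how many fields an index actually maps; the index name graylog_0 and the Graylog message type are assumptions, adjust them to your own deflector index:

```python
# Sketch: read index.mapping.total_fields.limit and count mapped fields.
# Assumes an unauthenticated node on localhost:9200.
import requests

ES_URL = "http://localhost:9200"  # assumed endpoint
INDEX = "graylog_0"               # assumed: adjust to your deflector index

# Current index.mapping.total_fields.limit (Elasticsearch defaults to 1000).
settings = requests.get(f"{ES_URL}/{INDEX}/_settings", timeout=10).json()
limit = (settings[INDEX]["settings"]["index"]
         .get("mapping", {}).get("total_fields", {}).get("limit"))
print("configured limit:", limit or "default (1000)")

# Every new message field triggers a put-mapping cluster task like the ones
# listed above, so a very high limit can let the pending-task queue grow.
mapping = requests.get(f"{ES_URL}/{INDEX}/_mapping", timeout=10).json()
fields = mapping[INDEX]["mappings"]["message"]["properties"]
print("mapped top-level fields:", len(fields))
```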
