Recurring indexer failures

We just set up a new Graylog cluster at UD and we're seeing a recurring problem of roughly 71,300 indexer failures almost every day. The errors don't match the similar ones I've seen in the forums, mainly because they are all parse errors on [application_name] of type [date], similar to this:

ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse field [application_name] of type [date] in document with id '05beac84-97bc-11eb-8f17-0010e079a20a'. Preview of field's value: 'xinetd']]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=failed to parse date field [xinetd] with format [strict_date_optional_time||epoch_millis]]]; nested: ElasticsearchException[Elasticsearch exception [type=date_time_parse_exception, reason=date_time_parse_exception: Failed to parse with all enclosed parsers]];

The id isn't much help, since the messages never make it into ES (so I can't go look at the message it's complaining about).
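
Something like the sketch below (assuming Elasticsearch on localhost:9200 and the stock graylog_* index prefix, so adjust for your setup) should at least show whether application_name really got dynamically mapped as a date in the live indices:

# Rough sketch: ask Elasticsearch what mapping it assigned to application_name
# in each Graylog index. Assumes Elasticsearch on localhost:9200 and the
# default "graylog_*" index prefix -- adjust both for your cluster.
import requests

resp = requests.get("http://localhost:9200/graylog_*/_mapping/field/application_name")
resp.raise_for_status()

for index, data in sorted(resp.json().items()):
    field = data.get("mappings", {}).get("application_name")
    if field:
        # A "date" here would explain the parse failures on string values like "xinetd".
        print(f"{index}: {field['mapping']['application_name']['type']}")
    else:
        print(f"{index}: application_name not mapped")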

Any ideas how to track these down?

Thanks,
Ed


All of our messages are fed in by rsyslogd over TCP (using the RSYSLOG_SyslogProtocol23Format template) to our main Graylog input. Messages work fine for quite a while, and then we get a flurry of (oddly) around 71,300-ish indexer failures… then things go back to normal for a day or so.
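
In case it helps, the failure records can also be pulled straight from the Graylog REST API, which at least gives the error text and timestamps to correlate with the flurries. A rough, untested sketch, assuming the API at http://localhost:9000/api and credentials allowed to read indexer failures (field names may vary by version):

# Rough sketch: pull recent indexer failures from the Graylog REST API so the
# error text and timestamps can be correlated with the flurries. Assumes the
# API at http://localhost:9000/api and credentials that can read indexer
# failures; field names may differ slightly between Graylog versions.
import requests

GRAYLOG_API = "http://localhost:9000/api"
AUTH = ("admin", "password")  # or (access_token, "token")

resp = requests.get(
    f"{GRAYLOG_API}/system/indexer/failures",
    params={"limit": 50, "offset": 0},
    auth=AUTH,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

for failure in resp.json().get("failures", []):
    print(failure.get("timestamp"), failure.get("index"), failure.get("message"))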

You might be able to use ignore_malformed on your index/indices, which, as I read the docs, will add an _ignored field that should let you track down the source of the malformed messages. FWIW, I've not used that attribute and can't speak to whether it will actually do what you need, so buyer beware.
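
Untested, but roughly what I have in mind if you go that route: a custom index template that turns on index.mapping.ignore_malformed for the Graylog indices, plus a search for documents that picked up an _ignored field. The template name below is just a placeholder, and the setting only takes effect on indices created after the next rotation:

# Untested sketch: turn on ignore_malformed for the Graylog indices via a
# custom template, then look for documents whose bad fields were dropped
# (they get an _ignored metadata field). Template name is a placeholder.
import requests

ES = "http://localhost:9200"

template = {
    "index_patterns": ["graylog_*"],
    "order": 10,  # higher than Graylog's own template so this setting wins on merge
    "settings": {"index.mapping.ignore_malformed": True},
}
requests.put(f"{ES}/_template/graylog-ignore-malformed", json=template).raise_for_status()
# Only applies to indices created after the next rotation of the index set.

# Once malformed documents start getting indexed, find them via _ignored:
search = {"query": {"exists": {"field": "_ignored"}}, "size": 5}
hits = requests.post(f"{ES}/graylog_*/_search", json=search).json()["hits"]["hits"]
for hit in hits:
    print(hit["_id"], hit.get("_ignored"), hit["_source"].get("source"))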

