Indexer Errors Because of mapper_parsing_exception


1. Describe your incident:
I just migrated from Elasticsearch to OpenSearch, and while reviewing the Graylog configuration I noticed 30,000+ indexer failures in the last 24 hours. Looking at the “Show errors” log, I noticed they predate the migration to OpenSearch. They all look similar to this:

ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse field [journald_custom_syslog_timestamp] of type [date] in document with id '85a25377-9387-11ee-b84f-005056b447fb'. Preview of field's value: 'Dec 5 17:01:08']]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=failed to parse date field [Dec 5 17:01:08] with format [strict_date_optional_time||epoch_millis]]]; nested: ElasticsearchException[Elasticsearch exception [type=date_time_parse_exception, reason=Failed to parse with all enclosed parsers]];
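
The root cause is a format mismatch: “Dec 5 17:01:08” is a classic syslog timestamp (no year, no zone), while strict_date_optional_time only accepts ISO 8601 forms. A quick sketch with GNU date illustrates the two shapes (the year in the first case is whatever date assumes for a yearless timestamp):

```shell
# The syslog-style value carries no year or timezone; GNU date will parse it
# by assuming the current year, but Elasticsearch's strict_date_optional_time
# only accepts ISO 8601 forms like the second example.
date -d "Dec 5 17:01:08" "+%m-%dT%H:%M:%S"       # prints 12-05T17:01:08, year guessed
date -d "2023-12-05T17:01:08" "+%m-%dT%H:%M:%S"  # ISO 8601, what the mapping expects
```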

I traced it down to the index mapping, which includes:

curl "http://127.0.0.1:9200/graylog_150/_mapping"
"journald_custom_syslog_timestamp": {
  "type": "date"
}

Even after a quick look at the indexer source code, I couldn’t find a way to change the type to something that doesn’t cause parser issues, e.g. keyword. I also have a different, more recent installation of Graylog with OpenSearch that indeed has a mapping of type keyword for exactly the same field. The graylog-internal template (curl "http://127.0.0.1:9200/_template/graylog-internal") doesn’t mention the field. The documentation mentions custom mappings, but I can’t find a way to specify one.
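
For what it’s worth, the usual way to override a field type in Graylog is an extra index template with a higher order, which gets merged over graylog-internal when an index is created. A hedged sketch using the same legacy _template API as graylog-internal (the template name and order below are illustrative, and the exact syntax depends on your ES/OS version):

```shell
# Illustrative only: an additional template whose higher "order" merges it
# over graylog-internal, forcing the field to keyword on newly created indices.
curl -X PUT "http://127.0.0.1:9200/_template/graylog-custom-mapping" \
  -H 'Content-Type: application/json' -d '{
  "template": "graylog_*",
  "order": 1,
  "mappings": {
    "properties": {
      "journald_custom_syslog_timestamp": { "type": "keyword" }
    }
  }
}'
```

Mappings only apply when an index is created, so the affected index set would still have to be rotated afterwards.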

2. Describe your environment:

  • OS Information:
    Debian 12.4

  • Package Version:
    Graylog 5.2.2, though the problem could be older.
    OpenSearch 2.11.1, though it happened with the latest supported Elasticsearch, too.
    Filebeat 8 with the journald input, though it also happens with the latest 7.

  • Service logs, configurations, and environment variables:

OpenSearchException[OpenSearch exception [type=mapper_parsing_exception, reason=failed to parse field [journald_custom_syslog_timestamp] of type [date] in document with id '82ed74d0-9501-11ee-a444-005056b447fb'. Preview of field's value: 'Dec 7 14:06:58']]; nested: OpenSearchException[OpenSearch exception [type=illegal_argument_exception, reason=failed to parse date field [Dec 7 14:06:58] with format [strict_date_optional_time||epoch_millis]]]; nested: OpenSearchException[OpenSearch exception [type=date_time_parse_exception, reason=Failed to parse with all enclosed parsers]];

3. What steps have you already taken to try and solve the problem?

I now use a processor in Filebeat to discard the field before submission, but this is more a workaround than a fix, I guess. While it might not be the most important field, since its value can be inferred from other fields, I’d still rather not manipulate the logs.

processors:
  - drop_fields:
      fields: ["journald.custom.syslog_timestamp"]
      ignore_missing: true
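
If dropping the field feels too destructive, Filebeat’s rename processor could instead move the value into a differently named field that doesn’t collide with the existing date mapping (the target field name here is just an illustration):

```yaml
processors:
  - rename:
      fields:
        - from: "journald.custom.syslog_timestamp"
          to: "journald.custom.syslog_timestamp_raw"
      ignore_missing: true
      fail_on_error: false
```

The renamed field would then be mapped as a plain string type on the next index rotation, assuming no template claims it as a date.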

4. How can the community help?

Could someone please tell me how this wrong mapping might have been created?
And could someone please help me fix this issue, so I can remove the workaround?

Hey @maxried

After moving from Elasticsearch to OpenSearch, did you manually rotate your index sets? There is also a System / Indices / Maintenance button for Cleanup / Recalculating.

If you click on an index set, in the upper right corner you should see something like this.

Hi @gsmith,

Thank you for your reply! While debugging, I rotated the affected index. Additionally, the problem predates the migration to OpenSearch for me, so this is not what caused it.

At the moment it looks as if new indices now have the field as “keyword” and not as “date” anymore. I don’t know which mechanism did this, and it would be great if someone could point out why it was “date” in the first place…

It currently seems to work correctly, but I will conduct more tests.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.