Incompatible index mappings


Our Graylog installation has just started to ignore a bunch of previously working syslog sources, with lots of messages like these in /path/to/graylog-server/server.log:

WARN [Messages] Failed to index message: index=<> id=<> error=<{"type":"mapper_parsing_exception","reason":"failed to parse [application_name]","caused_by":{"type":"illegal_argument_exception","reason":"Invalid format: \"prism_gateway\""}}>
ERROR [Messages] Failed to index [499] messages. Please check the index error log in your web interface for the reason. Error: One or more of the items in the Bulk request failed, check BulkResult.getItems() for more information.

and it now seems that all messages from the affected source(s) are lost (not showing up in Graylog), even the well-formatted ones that had always shown up before.

I suspect this started after a restart of Graylog (automatic upgrade from version 2.4.3 → 2.4.4), but many things were happening at the same time: the disk journal also filled up because it couldn't keep up with the logs from the new and old sources, and we had to add more RAM to Graylog for the increased amount of incoming messages (and restart it again).

After enough hair-pulling: simply rotating the index that Graylog complained about made the errors go away, and processing of the previously affected sources started to work again.

Any ideas on what could have happened?

(Sorry about the long topic, I forgot I copy-pasted it to the headline)

FYI: the Elasticsearch logs didn’t give a hint that anything was wrong with the indices:

[2018-05-21T20:44:04,049][INFO ][o.e.c.m.MetaDataCreateIndexService] [SEnf-Dg] [graylog_1450] creating index, cause [api], templates [graylog-internal], shards [1]/[0], mappings [message]
[2018-05-21T20:44:04,067][INFO ][o.e.c.r.a.AllocationService] [SEnf-Dg] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[graylog_1450][0]] …]).
[2018-05-21T20:44:04,304][INFO ][o.e.c.m.MetaDataMappingService] [SEnf-Dg] [graylog_1450/A9OoeHsxSL6szaWjiq5cwQ] update_mapping [message]
[2018-05-21T20:44:04,322][INFO ][o.e.c.m.MetaDataMappingService] [SEnf-Dg] [graylog_1450/A9OoeHsxSL6szaWjiq5cwQ] update_mapping [message]
[2018-05-21T20:44:04,339][INFO ][o.e.c.m.MetaDataMappingService] [SEnf-Dg] [graylog_1450/A9OoeHsxSL6szaWjiq5cwQ] update_mapping [message]
[2018-05-21T20:44:04,368][INFO ][o.e.c.m.MetaDataMappingService] [SEnf-Dg] [graylog_1450/A9OoeHsxSL6szaWjiq5cwQ] update_mapping [message]
[2018-05-21T20:44:04,393][INFO ][o.e.c.m.MetaDataMappingService] [SEnf-Dg] [graylog_1450/A9OoeHsxSL6szaWjiq5cwQ] update_mapping [message]
[2018-05-21T20:44:04,410][INFO ][o.e.c.m.MetaDataMappingService] [SEnf-Dg] [graylog_1450/A9OoeHsxSL6szaWjiq5cwQ] update_mapping [message]
[2018-05-21T20:44:52,946][INFO ][o.e.c.m.MetaDataDeleteIndexService] [SEnf-Dg] [graylog_1357/7Trf7GCxR9uPy_WY3YQ4Mg] deleting index

Check the index mapping for the “application_name” field in your current write-active index (you can use the graylog_deflector index alias for that).
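For example, something like this (assuming Elasticsearch listens on localhost:9200; adjust host and port for your setup):

```shell
# Show the mapping of the current write-active index through the
# graylog_deflector alias and pull out the application_name entry.
curl -s 'http://localhost:9200/graylog_deflector/_mapping?pretty' \
  | grep -A 3 '"application_name"'
```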

Yes, that clearly explains why the errors emerged in the server.log:

                "application_name": {
                    "format": "strict_date_optional_time||epoch_millis",
                    "type": "date"
                }

vs. the correct mapping (from the current graylog_deflector alias):

                "application_name": {
                    "type": "keyword"
                }

But how could this happen just out of the blue, after 1,448 indices that worked correctly?

Unless you’ve created a custom index template with the correct mapping for that field, Elasticsearch will try to “guess” the type of each field from the first message it receives in the index.

In that case, it seems that the “application_name” field contained something resembling a date in the first message written into that Elasticsearch index.
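To illustrate the idea (this is a toy sketch of dynamic date detection, not Elasticsearch's actual parser, which uses the strict_date_optional_time||epoch_millis formats):

```python
from datetime import datetime

def guess_type(value: str) -> str:
    """Roughly mimic Elasticsearch dynamic mapping: if the FIRST
    value seen for a field parses as a date, the field gets mapped
    as `date`; otherwise it becomes a string/keyword. The mapping
    is then fixed for the lifetime of the index."""
    for fmt in ("%Y-%m-%d", "%Y-%m-%dT%H:%M:%S"):
        try:
            datetime.strptime(value, fmt)
            return "date"
        except ValueError:
            pass
    return "keyword"

print(guess_type("2018-05-21"))    # a date-like first value locks the field to `date`
print(guess_type("prism_gateway")) # later values like this then fail to index
```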

Please refer to the Graylog documentation for more information about creating custom index templates.
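As a sketch, a minimal custom template pinning application_name to keyword could look like this (the template name graylog-custom-mapping and the graylog_* pattern are conventional choices, not requirements; ES 5.x template syntax as used by Graylog 2.4):

```shell
# Hypothetical custom template: forces application_name to "keyword"
# in every newly created graylog_* index. Existing indices keep their
# old mapping until the next index rotation.
curl -X PUT 'http://localhost:9200/_template/graylog-custom-mapping' \
  -H 'Content-Type: application/json' \
  -d '{
    "template": "graylog_*",
    "mappings": {
      "message": {
        "properties": {
          "application_name": { "type": "keyword" }
        }
      }
    }
  }'
```

After creating the template, rotate the active write index so that a new index is created with the corrected mapping.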

Hmm… that makes sense, as someone recently added new (spammy) syslog sources.

Given that Graylog by default has some predefined fields (like application_name), wouldn’t it make sense for Graylog itself to enforce at least those by shipping a custom index mapping for them? I can log a GitHub issue for that.

btw. “graylog_deflector” is missing from the latest docs too.

The only “default” fields in Graylog are “message”, “full_message”, “timestamp”, and “source”, all of which have an explicit index mapping.

The Graylog index model is explained in the documentation.

OK, my bad. But how is application_name generated, then? This is a pretty vanilla Graylog installation, and we never installed an extractor for that field, AFAICT.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.