Log Retention and Unassigned Shards

After upgrading to version 2.3, I’ve had an issue where, when Graylog applies the log retention plan (set to rotate every 1 day, keep 30 days), the shard for the newly created index never gets assigned. I’ve had to manually reassign the shard to the correct index to get my ES cluster back to green, and then run “Recalculate index ranges”.
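In case it helps anyone hitting the same thing, this is roughly what the manual fix looks like for me; the first call asks ES why it left the shard unassigned, and the second force-assigns the new, still-empty primary. The index and node names are just examples from my setup, so adjust them to yours, and note that allocate_empty_primary discards any existing shard data, which is only acceptable here because the index was just created and is still empty:

# Ask ES why the shard is unassigned
curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'

# Force-assign the empty primary (index and node names are examples; adjust to your cluster)
curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -H 'Content-Type: application/json' -d'
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "graylog_124",
        "shard": 0,
        "node": "_qngEZg",
        "accept_data_loss": true
      }
    }
  ]
}
'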

My understanding is that all of Graylog’s runtime configuration is stored in MongoDB. I’ve gone through every collection in my MongoDB, but nothing there seems to define where the “graylog_deflector” is currently supposed to be pointing, other than what I can see in the browser. The error thrown in the Graylog logs is that it can’t point the deflector to the newest index because ES is red: whenever my log retention policy kicks in and creates a new index, the new index’s shard is not automatically assigned, as you can see below. The AggregatesMaintenance entry is from a plugin that runs every minute looking for events from the same IP that have happened more than a set number of times in x minutes; could this be interrupting the shard allocation?
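If I understand it correctly, the deflector isn’t actually a MongoDB document at all but an Elasticsearch alias, so something like this should show which index it currently points at (assuming the default graylog_deflector alias name and ES listening on localhost):

# Show which index the deflector alias currently points at
curl -XGET 'localhost:9200/_alias/graylog_deflector?pretty'

Below are the log entries from when the errors begin.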

2017-09-20T19:00:06.084-05:00 INFO  [AbstractRotationStrategy] Deflector index <Graylog> (index set <graylog_123>) should be rotated, Pointing deflector to new index now!
2017-09-20T19:00:06.085-05:00 INFO  [MongoIndexSet] Cycling from <graylog_123> to <graylog_124>.
2017-09-20T19:00:06.085-05:00 INFO  [MongoIndexSet] Creating target index <graylog_124>.
2017-09-20T19:00:06.138-05:00 INFO  [Indices] Successfully created index template graylog-internal
[MongoIndexSet] Waiting for allocation of index <graylog_124>.
2017-09-20T19:00:36.407-05:00 INFO  [AggregatesMaintenance] removed 0 history items
2017-09-20T19:00:36.408-05:00 WARN  [Aggregates] Indexer is not running, not checking any rules this run.
2017-09-20T19:01:06.240-05:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index

Here are my logs from ES at the same time:

[2017-09-20T18:03:04,077][INFO ][o.e.c.m.MetaDataMappingService] [_qngEZg] [graylog_123/v19ViI9XSwKPWzCo_qInbg] update_mapping [message]
[2017-09-20T19:00:06,153][INFO ][o.e.c.m.MetaDataCreateIndexService] [_qngEZg] [graylog_124] creating index, cause [api], templates [graylog-internal], shards [1]/[0], mappings [message]
[2017-09-20T19:00:06,177][INFO ][o.e.c.r.a.AllocationService] [_qngEZg] Cluster health status changed from [YELLOW] to [RED] (reason: [index [graylog_124] created]).
[2017-09-20T19:36:33,131][INFO ][o.e.c.r.a.AllocationService] [_qngEZg] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[graylog_124][0]] ...]).
[2017-09-20T19:36:35,020][INFO ][o.e.c.m.MetaDataMappingService] [_qngEZg] [graylog_124/1-o_i8ksTZudU_Bmud5fKA] update_mapping [message]
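While the cluster sits red, the unassigned shard (and the reason ES gives for it) also shows up in the _cat API; the column list here is just the set I find useful:

# List shards with their state and, if unassigned, the reason why
curl -XGET 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason'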

Edit: To add some more information, I also added Kibana to my Graylog stack after updating Graylog to 2.3. I’d like to think that didn’t create this issue, but it did create an additional index in Graylog called “Kibana”, and I just want to make sure I provide as much information as possible.

Just in case someone else makes the same mistake I did: when you are upgrading from ES 2.x to 5.x, make sure to re-enable shard allocation afterwards… otherwise you will have this issue with no information as to why your new indices are not allocating shards:

curl -XPUT 'localhost:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}
'
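To confirm the setting actually took effect, you can read the cluster settings back; cluster.routing.allocation.enable should now show up as "all" under persistent:

# Read back the persistent cluster settings
curl -XGET 'localhost:9200/_cluster/settings?pretty'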
