I have set a Graylog instance to hold only 60 indices and delete anything beyond that. I had previously closed some older indices and then noticed that Graylog was not following the retention strategy and now has 69 indices. I then reopened the old indices, thinking that closing them may have caused this, but Graylog still hasn't deleted them.
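In case it helps, this is roughly how I confirmed the index count, just a minimal sketch assuming Elasticsearch is reachable on localhost:9200 and the index set uses the default graylog_ prefix (adjust both for your own setup):

```python
# Count graylog_* indices via the Elasticsearch _cat API.
# Assumes Elasticsearch on localhost:9200 and the default "graylog_" prefix.
import json
import urllib.request

with urllib.request.urlopen(
    "http://localhost:9200/_cat/indices/graylog_*?format=json&h=index,status"
) as resp:
    indices = json.loads(resp.read().decode("utf-8"))

print(f"Total graylog indices: {len(indices)}")
for idx in sorted(indices, key=lambda i: i["index"]):
    # status is "open" or "close", so closed indices show up here too
    print(f'{idx["index"]:<20} {idx["status"]}')
```

This reports 69 indices on the broken system, all of them open.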
I have also tried cycling the active write index and recalculating index ranges, but this hasn't helped.
I encountered this issue once before; that time Elasticsearch was in a red status, and resolving that resolved the retention strategy problem. In this case Elasticsearch is healthy.
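This is how I verified the cluster health this time (again assuming Elasticsearch on localhost:9200):

```python
# Quick cluster health check via the Elasticsearch _cluster/health API.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:9200/_cluster/health") as resp:
    health = json.loads(resp.read().decode("utf-8"))

print(health["status"])             # "green" here, not "red" like last time
print(health["unassigned_shards"])  # 0 unassigned shards
```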
I looked in the logs and can’t find anything regarding retention strategies.
I have also looked at another system that is working correctly and found log entries such as these:
2018-08-21T01:03:58.379+01:00 INFO [AbstractIndexCountBasedRetentionStrategy] Number of indices (4) higher than limit (3). Running retention for 1 indices.
2018-08-21T01:03:59.296+01:00 INFO [AbstractIndexCountBasedRetentionStrategy] Running retention strategy [org.graylog2.indexer.retention.strategies.DeletionRetentionStrategy] for index <graylog_381>
Both systems are configured the same, except that the one that is not working is set to keep 60 indices instead of 4.