I have set a Graylog instance to hold only 60 indices and delete the rest. I had previously closed some of the older indices, and then noticed that Graylog was not following the retention strategy: it now has 69 indices. I reopened the old indices, thinking that closing them might have caused the problem, but Graylog still hasn't deleted them.
I have also tried cycling the active write index and recalculating the index ranges, but neither helped.
I encountered this issue once before; that time Elasticsearch was in a red status, and resolving that also fixed the retention problem. This time Elasticsearch is healthy.
OS: Ubuntu 16.04 LTS
Here you can see my config and also the number of indices in the system.
Did you see any entries in the log files about the retention check?
Graylog should give you some ideas about why it can't run the retention checks.
I looked in the logs and can’t find anything regarding retention strategies.
I have looked into another working system and found logs such as these:
2018-08-21T01:03:58.379+01:00 INFO [AbstractIndexCountBasedRetentionStrategy] Number of indices (4) higher than limit (3). Running retention for 1 indices.
2018-08-21T01:03:59.296+01:00 INFO [AbstractIndexCountBasedRetentionStrategy] Running retention strategy [org.graylog2.indexer.retention.strategies.DeletionRetentionStrategy] for index <graylog_381>
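Those log lines describe a simple count-based rule: when the number of indices exceeds the configured limit, retention runs for the oldest ones. A minimal Python sketch of that logic, assuming index names carry a numeric counter where lower numbers are older (the names and limit below are illustrative, not taken from either system):

```python
# Sketch of count-based index retention, modeled on the behaviour
# visible in the log lines above. Not Graylog's actual code.

def indices_to_delete(indices, max_indices):
    """Return the oldest indices beyond the configured limit.

    Index names are assumed to end in a numeric counter
    (e.g. graylog_381), with lower numbers being older.
    """
    excess = len(indices) - max_indices
    if excess <= 0:
        return []
    # Sort by the numeric suffix so the oldest indices come first.
    ordered = sorted(indices, key=lambda name: int(name.rsplit("_", 1)[1]))
    return ordered[:excess]

indices = [f"graylog_{n}" for n in range(378, 382)]  # 4 indices, limit 3
print(indices_to_delete(indices, 3))  # ['graylog_378']
```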
Both systems are configured the same, however the one that is not working is set to 60 indices instead of 4.
You need to find the difference to solve the issue.
The one that works is on 2.4.5 and the one that isn’t working is on 2.4.4, is this a known issue of 2.4.4?
We will aim to update as soon as we can, but since the system is in production the upgrade will have to be scheduled.
Not that I know of, so it should be something else.
I did close and reopen about ten of the indices on the system that is not working properly. That's all I can think of; it shouldn't have caused this.
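One hypothesis worth ruling out: if the retention check were counting only open indices against the limit, closing roughly ten indices would leave the open count at or below 60 and retention would never trigger, even though the total is 69. This is purely an illustrative sketch of that possible failure mode, not Graylog's real implementation:

```python
# Illustrative hypothesis: closed indices inflate the total index
# count but are not counted when deciding whether retention runs.
# This is NOT Graylog's actual code, just a model of the symptom.

def retention_runs(index_status, max_indices):
    """index_status maps index name -> 'open' or 'close'.

    Returns True if retention would trigger when only open
    indices are counted against the configured limit.
    """
    open_count = sum(1 for s in index_status.values() if s == "open")
    return open_count > max_indices

# 69 indices total, 9 of them closed, limit 60:
status = {f"graylog_{n}": ("close" if n < 9 else "open") for n in range(69)}
print(retention_runs(status, 60))  # False: 60 open <= 60, nothing deleted
```

If this were the cause, the closed indices would have stalled retention while they were closed; after reopening them something else must now be keeping it stuck, so the server log around the retention check time is still the best place to look.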
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.