I have a question about “data tiering” in the Graylog Open version.
We run Graylog 6.1.2 with OpenSearch 2.16.x (we are not yet using the Data Node feature).
We have a mix of “data tiering” and “legacy” retention for our indices. I mostly switched to the new option because “legacy” is marked as deprecated.
My assumption with data tiering was that Graylog would automatically trim indices to keep disk usage below the low watermark. However, we constantly see messages like “Elasticsearch nodes disk usage above low watermark”, and I then have to go and manually remove indices to bring usage down so that shards can be allocated again.
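For reference, this is roughly how I check the situation when the warning fires; a minimal sketch, assuming OpenSearch is reachable at http://localhost:9200 without auth (adjust the URL/credentials for your cluster):

```python
import requests

# Per-node disk usage, to see how close each node is to the watermarks.
alloc = requests.get(
    "http://localhost:9200/_cat/allocation",
    params={"format": "json"},
)
alloc.raise_for_status()
for node in alloc.json():
    print(node["node"], node["disk.percent"], "% used,", node["disk.avail"], "free")

# The effective watermark thresholds (defaults unless overridden).
settings = requests.get(
    "http://localhost:9200/_cluster/settings",
    params={"include_defaults": "true", "filter_path": "**.disk.watermark*"},
)
settings.raise_for_status()
print(settings.json())
```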
- How does this mechanism decide when to remove older indices, especially when we have multiple indices with different “min” retention times?
- Is it possible to manually set thresholds for data tiering? For example, try to keep at least 15% of the disk space available (see the sketch after this list).
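I know the allocation watermarks can be tuned on the OpenSearch side (separate from anything Graylog does) via the cluster settings API. A hedged sketch of what I mean by “keep at least 15% free”, i.e. a low watermark at 85% used; the values here are illustrative, not a recommendation:

```python
import requests

# Illustrative values only: stop allocating new shards to a node once it is
# 85% full (low), start relocating shards away at 90% (high). "persistent"
# settings survive a full cluster restart, unlike "transient" ones.
body = {
    "persistent": {
        "cluster.routing.allocation.disk.watermark.low": "85%",
        "cluster.routing.allocation.disk.watermark.high": "90%",
    }
}
resp = requests.put("http://localhost:9200/_cluster/settings", json=body)
resp.raise_for_status()
print(resp.json())
```

But my actual question is whether data tiering itself can be told to respect a threshold like this, rather than me tuning the cluster underneath it.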
As it stands, I am considering switching back to “legacy” retention, as I do not see the benefit (with regard to retention) of the “data tiering” option.
Any thoughts or comments?