I just upgraded from Graylog 5.2 to 6.0.6 and noticed that index retention is being deprecated. Now I just have Data Tiering with max days and min days, but I still want to rely on daily rotation.
You can still use the old-style configuration by selecting “Legacy” in the “Rotation/Retention” section of the index configuration page.
It is deprecated because the old retention settings don’t map well to what users are actually trying to achieve when the daily message rate varies; resource usage can also be sub-optimal, with too many or poorly sized shards.
The way data tiering performs rotation is not as straightforward, but if you are after daily rotation you can leave a one-day gap between min and max. For example, if you want to retain data for 30 days, set min to 30 and max to 31. The reason is that Graylog uses this “leeway” to rotate an index that has not grown enough in size to be rotated otherwise.
This is a basic diagram of the decision tree that is evaluated. You can see the branch where “Index Create Date > (Max Age - Min Age)”.
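Reading that branch together with the leeway explanation above, the behaviour is roughly: rotate when the active index reaches its target size, otherwise rotate once its age has used up the max/min leeway. Below is a minimal Python sketch of that reading; the function name, thresholds, and single-index view are hypothetical, and this is not Graylog’s actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the rotation decision described in this thread;
# Graylog's real decision tree has more branches than this.
TARGET_SHARD_SIZE_GB = 20  # the "ideal shard size" default discussed later in the thread

def should_rotate(index_size_gb: float,
                  shard_count: int,
                  index_created: datetime,
                  min_days: int,
                  max_days: int,
                  now: datetime) -> bool:
    """Rotate when the active index has grown to its target size, or when the
    leeway (max_days - min_days) since index creation has been used up."""
    leeway = timedelta(days=max_days - min_days)

    # Size-based rotation: the index has reached the ideal size for its shards.
    if index_size_gb >= TARGET_SHARD_SIZE_GB * shard_count:
        return True

    # Age-based fallback: the index never grew large enough, but waiting any
    # longer would eat into the guaranteed minimum retention.
    return now - index_created > leeway

# Example: min=30, max=31 leaves a one-day leeway, so a quiet index still
# rotates after a day even though it is nowhere near 20 GB.
now = datetime.utcnow()
print(should_rotate(2.5, 1, now - timedelta(days=1, hours=1), 30, 31, now))  # True
```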
We were thinking about just using the legacy option for index retention.
But we were also wondering how long the legacy option will remain available before it is removed.
Does anyone know how much longer it will be supported?
It might meet our needs, but the new rotation system is a bit confusing for us right now. That’s why we’re considering using the legacy method.
We tried setting up a monthly rotation with an index retention period of one year, but unfortunately, we can’t seem to get it to work. I could be mistaken, but it feels like we had a lot more control and options with the legacy system.
I think the idea was to simplify things: a system that reduces the complexity of balancing shard size against shard count and total retention.
You can still alter the default shard size and count, but these are now options in server.conf. Changing them is generally only needed when ingesting larger amounts of data; for smaller clusters the defaults should be fine.
@LCE Yes, you had more control in the old system. But it only works well if you have a fairly constant ingest. When that varies - which it invariably does - a fixed size or time interval is sub-optimal. Allowing Graylog to dynamically determine when to rotate leads to much better resource usage and performance.
@patrickmann Thanks for the insights. The data I ingest is roughly the same size each month, so I was aiming for monthly rotations with a year of retention for each index. I now understand the system works differently, thanks to both @patrickmann and @Wine_Merchant for the explanations. I’m considering the data tiering solution—could you suggest what values for the minimum and maximum days in storage would best approximate monthly rotation and a year-long retention?
Bit of an essay for you @LCE… The system is based around reaching an ideal size for a single shard within an index set. This defaults to 20 GB per shard and can be altered via options in server.conf. A 20 GB shard is considered ideal for search performance, but you may wish to change this based on how much data you ingest and the resources available within the cluster. The memory assigned to heap should be a consideration when calculating how many shards an individual OpenSearch node can hold. Say 16 GB is assigned as heap; the equation would look like this: 16 (total heap assigned in GB) x 20 (1 GB of heap supports roughly 20 shards at a 20 GB shard size) = 320, i.e. about 320 shards per OpenSearch node.
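To make the arithmetic easy to rerun with your own numbers, here is the same rule of thumb as a tiny Python helper; the 20-shards-per-GB-of-heap ratio is just the guidance quoted above, not a hard limit enforced by OpenSearch.

```python
# Rule of thumb quoted above: roughly 20 shards per GB of JVM heap,
# assuming shards are kept around the 20 GB target size.
SHARDS_PER_GB_HEAP = 20

def max_shards_per_node(heap_gb: float) -> int:
    """Approximate shard capacity of a single OpenSearch node."""
    return int(heap_gb * SHARDS_PER_GB_HEAP)

print(max_shards_per_node(16))  # 16 GB heap -> ~320 shards per node
```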
To apply this logic to your cluster and make suggestions, we would also need to know how many OpenSearch nodes are available, their current heap allocation, and your daily ingest in GB.
To give the simplest answer to your question: set the min to 365 days and the max to 375. That gives a 10-day leeway for the system to better optimise shard size, while you will always retain at least 365 days of data.
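If you want a rough sanity check that a year of retention fits your cluster, a back-of-the-envelope calculation like the one below can help; the daily ingest, node count, heap, and replica figures are made-up placeholders, so substitute your own.

```python
# Back-of-the-envelope check: does a year of retention fit the cluster?
# Every input below is a made-up placeholder; plug in your own figures.
daily_ingest_gb  = 10    # assumed average daily ingest
retention_days   = 365   # the "min days in storage" guarantee
target_shard_gb  = 20    # default ideal shard size
replicas         = 1     # replica copies per primary shard
nodes            = 3     # assumed OpenSearch node count
heap_gb_per_node = 16    # assumed heap per node

total_data_gb    = daily_ingest_gb * retention_days
primary_shards   = total_data_gb / target_shard_gb
total_shards     = primary_shards * (1 + replicas)
cluster_capacity = nodes * heap_gb_per_node * 20   # ~20 shards per GB of heap

print(f"~{primary_shards:.0f} primary shards, ~{total_shards:.0f} including replicas")
print(f"rough cluster capacity: ~{cluster_capacity} shards")
# With these placeholder numbers: roughly 180 primaries, ~365 in total,
# comfortably under a capacity of ~960 shards.
```

If the total comes out anywhere near the capacity figure, that is the point at which adjusting the shard size and count options mentioned earlier becomes worthwhile.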
Thanks so much for the detailed explanation! I had already read up on shards before this thread, and your input really helped me understand the new data tiering. I’ll definitely give this method a try once I’ve run some numbers for my environment.