Graylog Index retention changes not reflected on indices

1. Describe your incident:
I’m trying to tune my current index retention strategy so that disk usage stays roughly steady. Previously, I had the default index rotation strategy configured:

  • 20 million messages
  • 4 shards
  • 20 indices
  • Delete retention strategy

Now I’ve adjusted this to size-based rotation:

  • 1073741824 bytes (1.0 GiB)
  • 4 shards
  • 15 indices
  • Delete

I thought that once rotation occurred, this might rework my existing indices into at most 1 GiB each across all 15 indices and delete all of the older data. Instead, it doesn’t look like it has made any difference. I still have the same indices I had before, with varying sizes:

  • graylog_0: 7 GiB (~20 million documents)
  • graylog_1: 12.3 GiB (~20 million)
  • graylog_2: 10.7 GiB
  • graylog_3: 9.1 GiB
  • graylog_4: 7.4 GiB
  • graylog_5: 540 MiB (manually rotated via the GUI)
  • graylog_6: 19 MiB (active write index)
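
In case it’s useful, this is roughly how I double-check those sizes outside the Graylog UI, straight from the Elasticsearch _cat API (a minimal sketch; localhost:9200 without authentication is an assumption for a default single-node install):

```python
import requests

# Assumption: Elasticsearch reachable on localhost:9200 without authentication.
ES = "http://localhost:9200"

# _cat/indices lists every matching index; "store.size" is the on-disk size
# in bytes (because of bytes=b) and "docs.count" the document count.
resp = requests.get(
    f"{ES}/_cat/indices/graylog_*",
    params={"format": "json", "bytes": "b", "h": "index,docs.count,store.size"},
)
resp.raise_for_status()

for idx in sorted(resp.json(), key=lambda i: i["index"]):
    size_gib = int(idx["store.size"]) / 1024**3
    print(f"{idx['index']}: {size_gib:.1f} GiB, {idx['docs.count']} docs")
```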

What I’ve tried:

  • Updated the index strategy as described above
  • Restarted the graylog-server service
  • Maintenance > Recalculate index ranges
  • Maintenance > Rotate active write index

2. Describe your environment:

  • OS Information: CentOS 7

  • Kernel: Linux 3.10.0-1160.90.1.el7.x86_64

  • Package Version: Graylog 4.2.13+9c90b93

3. How can the community help?

I’m just looking to keep my total Graylog disk usage under ~20 GiB via index rotation/retention. I’m confident I’m missing an important step here, but most of the documentation I’m finding online is written as if you’re setting up index rotation for the first time, not changing an active index set’s retention strategy.

Can anyone point me towards what I’m missing here? How can I ensure my existing index set stays under 20GiB? Thanks!
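
For context, here’s the back-of-the-envelope math behind the new settings (a quick sketch; it assumes every index actually rotates near the 1 GiB target and ignores replica shards):

```python
# Expected ceiling under the new size-based rotation + delete retention settings.
rotation_target_bytes = 1_073_741_824   # 1.0 GiB per index, as configured
max_indices = 15                        # delete retention keeps at most 15 indices

expected_ceiling_gib = rotation_target_bytes * max_indices / 1024**3
print(f"~{expected_ceiling_gib:.0f} GiB")   # ~15 GiB, comfortably under the 20 GiB budget
```

On paper that caps out around 15 GiB, which is why I expected to land well under 20 GiB; it’s the existing 7–12 GiB indices that are blowing the budget.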

Bump. It’s been a week now and the old indices are definitely not being rotated out. At this point I’m thinking I’ll have to go in and manually delete documents older than a certain date to clean up the old indices.

Is there really no better way?

Hey @boogity

Does this look like your index set?

If so, do you see anything in the Elasticsearch/OpenSearch log file that might give us a clue? Or even the Graylog log file?

“I thought that once rotation occurred, this might rework my existing indices into at most 1 GiB each across all 15 indices and delete all of the older data”

Index settings only apply to indices created after the change was made (either by rotating manually or by waiting for the rotation strategy to trigger); none of the existing indices will change in size. And because the delete retention strategy only removes indices once the index set exceeds its configured maximum (15 in your case), the handful of oversized indices you already have won’t be cleaned up automatically.

You will need to delete the older indices manually if you want to free up that disk space.
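
If you want to script the cleanup, here is a minimal sketch against the Elasticsearch delete-index API (localhost:9200 without auth and the index names are assumptions for your setup; afterwards run Maintenance > Recalculate index ranges so Graylog’s index ranges match what is left):

```python
import requests

# Assumption: Elasticsearch/OpenSearch reachable on localhost:9200 without auth.
ES = "http://localhost:9200"

# Oldest, oversized indices to drop -- adjust to whatever you want to keep.
old_indices = ["graylog_0", "graylog_1", "graylog_2"]

for index in old_indices:
    # DELETE /{index} permanently removes the index and all of its documents.
    resp = requests.delete(f"{ES}/{index}")
    resp.raise_for_status()
    print(f"deleted {index}: {resp.json()}")
```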

Hope this helps.
