Disk space not cleared despite index retention configuration

Hello GrayLog Experts,
I am running Graylog in Docker as follows:

I have an index retention configuration of 3 GB per index, up to a total of 20 indices. My retention strategy is Delete Index. But for some reason the indices do not seem to be deleted: even though the Graylog UI shows only the last 20 indices, disk usage rapidly reaches 100% on a 500 GB disk. However, when I stop the Graylog and Elasticsearch Docker containers, the space is released and disk usage drops from 100% back to 10%. It is almost as if the indices are not actually deleted until I stop the containers.
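That symptom (space freed only when the containers stop) usually means a process is still holding deleted files open: Elasticsearch can keep file handles on segments of indices that have already been deleted, and the kernel does not release the blocks until the handles close. A quick way to check on the host (paths are examples, and `lsof` must be installed):

```shell
# If df reports far more usage than du can account for, deleted
# files are probably still held open by a running process.
DATA_DIR=${DATA_DIR:-.}   # e.g. /home/centos/elasticsearch/data
df -h "$DATA_DIR"
du -sh "$DATA_DIR"

# +L1 lists open files whose link count is 0, i.e. files that were
# deleted but are still held open; their sizes show the pinned space.
lsof +L1 2>/dev/null || true
```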

Any ideas on how to handle this? I use the following docker run commands to mount the data directories on the host system:

docker run --name mongo \
  -v /home/centos/mongo/data:/data \
  -d mongo:3

docker run --name elasticsearch \
  -e "" -e "" \
  -v /home/centos/elasticsearch/scripts:/usr/share/elasticsearch/config/scripts \
  -v /home/centos/elasticsearch/data:/usr/share/elasticsearch/data

sleep 60

docker run --link mongo --link elasticsearch --name graylog \
  -p 9000:9000 -p xxxx:xxxx -p xxxx:xxx/udp -p xxx:xx \
  -e GRAYLOG_WEB_ENDPOINT_URI="xxx://xxxxxxxxxx" \
  -v /home/centos/graylog/data/journal:/usr/share/graylog/data/journal \
  -d graylog/graylog:2.4.0-1
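To verify which paths each container actually persists on the host (and to spot directories that are only backed by anonymous volumes), the mounts can be listed with `docker inspect`; the container names below are taken from the commands above:

```shell
# Print every mount of each container: host source, container
# destination, and whether it is a bind mount or a named/anonymous
# volume. Anything not listed as "bind" is lost when the volume goes.
for c in mongo elasticsearch graylog; do
  echo "== $c =="
  docker inspect -f \
    '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }} ({{ .Type }}){{ "\n" }}{{ end }}' \
    "$c"
done
```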

Is the data persistent on your disk? Do you still have all ingested data after you restart Graylog?

Did you check what had filled up your disk space?

Hi Jan,
Here are my responses:

  • I have mounted the data directory of each container as shown above. However, I am not sure whether there are other directories that also hold data. As a test, I removed and restarted each container, and I lost all the users and streams I had created. So it looks like I am not mounting all the necessary directories.
  • Only the last 20 indices were present when I restarted, and in fact only the last 20 indices were visible before the restart as well.
  • I did not check what had consumed the disk space. All I know is that when I restarted the containers, disk utilization went back to normal (15%) from 100%.
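On the lost users and streams: Graylog keeps users, streams, and other configuration in MongoDB, and the official mongo image declares `VOLUME /data/db`, so a bind mount at the parent `/data` gets shadowed there by an anonymous volume that is discarded with the container. A sketch of mounting the dbpath directly (host path from the post, image tag assumed):

```shell
# Bind-mount MongoDB's actual dbpath (/data/db) rather than /data;
# the image's own VOLUME /data/db otherwise overrides the parent
# mount with an anonymous volume, and that data is lost on removal.
docker run --name mongo \
  -v /home/centos/mongo/data:/data/db \
  -d mongo:3
```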

Any thoughts on what might have caused high disk usage?


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.