Running out of Disk space despite Index retention set to delete


(AT@Austin) #1

Hello Graylog experts,
I am running Graylog via Docker with the following images:

mongo:3
docker.elastic.co/elasticsearch/elasticsearch:5.6.2
graylog/graylog:2.4.0-1
I have an index rotation config of 3 GB per index, up to a total of 20 indices, and my retention strategy is Delete index. But for some reason disk space is not freed, even though the Graylog UI shows only the last 20 indices. Disk usage rapidly reaches 100% on a 500 GB disk. However, when I stop the Graylog and Elasticsearch containers, the space is released and usage drops from 100% back to 10%. It is almost as if the indices are not deleted until I stop the containers.

Any ideas on how to handle this? I use the following docker run commands to mount the data directories on the host system:

docker run --name mongo \
  -v /home/centos/mongo/data:/data \
  -d mongo:3

docker run --name elasticsearch \
  -e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
  -v /home/centos/elasticsearch/scripts:/usr/share/elasticsearch/config/scripts \
  -v /home/centos/elasticsearch/data:/usr/share/elasticsearch/data \
  -d docker.elastic.co/elasticsearch/elasticsearch:5.6.2

sleep 60
docker run --link mongo --link elasticsearch --name graylog \
  -p 9000:9000 -p xxxx:xxxx -p xxxx:xxx/udp -p xxx:xx \
  -e GRAYLOG_PASSWORD_SECRET=xxxxxxxx \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=xxxxxxxx \
  -e GRAYLOG_WEB_ENDPOINT_URI="xxx://xxxxxxxxxx" \
  -v /home/centos/graylog/data/journal:/usr/share/graylog/data/journal \
  -d graylog/graylog:2.4.0-1

The file that is consuming all disk space is /var/lib/containers//.json

Why is this file not getting purged with the retention strategy?

Thanks!


(Jochen) #2

Are the indices deleted in Elasticsearch by Graylog or do they still exist?
If the indices are deleted correctly, maybe there are some dangling file descriptors which prevent freeing the disk space until the Elasticsearch process/container has been restarted.
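One way to verify the dangling-descriptor theory on a Linux host: a file that has been unlinked but is still held open by a process keeps consuming disk space until the descriptor is closed. A minimal sketch of the mechanism (standard Linux tools; the fd number is arbitrary):

```shell
# Reproduce a "deleted but still open" file that pins disk space:
tmpfile=$(mktemp)
exec 3> "$tmpfile"                  # open fd 3 on the file and keep it open
echo "some data" >&3
rm "$tmpfile"                       # unlink the file; the inode survives
ls -l /proc/$$/fd | grep deleted    # the dangling descriptor shows up here
exec 3>&-                           # closing the fd finally frees the space

# On a real system, `lsof +L1` lists all deleted-but-open files
# and the processes (e.g. the Elasticsearch JVM) holding them.
```

If `lsof +L1` on the host shows large deleted files owned by the Elasticsearch process, restarting that process (not the whole host) is enough to release the space.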


(AT@Austin) #3

Thank you so much, Jochen!

The file that is consuming all disk space is /var/lib/containers//.json

The indices are deleted in ES. The above file is what takes up the disk space. Do you have any idea why this file may be holding onto deleted indices? I have to stop all Docker containers, delete this file, then delete everything under data/journal, and restart the containers.

Thoughts?

Thanks again!


(Jochen) #4

Is this a file on your container host or inside the Elasticsearch container?


(AT@Austin) #5

This is the file on the host, under the Graylog container ID. My apologies, it is actually the *-json.log file:

/var/lib/docker/containers/00666dab8f3205994d7fe1302fa081926ab5f089f7bd3d22d06e014b1c3ede4e/00666dab8f3205994d7fe1302fa081926ab5f089f7bd3d22d06e014b1c3ede4e-json.log
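For anyone hitting the same symptom, the per-container log sizes can be checked from the host before stopping anything (this assumes the default Docker root of /var/lib/docker and the default json-file logging driver):

```shell
# Show the five largest container logs under the default Docker root
du -h /var/lib/docker/containers/*/*-json.log | sort -h | tail -n 5
```

The container ID in the path can then be matched to a name with `docker ps --no-trunc`.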


(Jochen) #6

You might want to restrict the size of your container logs:
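With the json-file logging driver, Docker does not rotate container logs by default. Rotation can be enabled per container via `--log-opt`; for example, added to the graylog run command above (the 50m/5 limits are illustrative values, not a recommendation from this thread):

```shell
docker run --name graylog \
  --log-opt max-size=50m \
  --log-opt max-file=5 \
  ... \
  -d graylog/graylog:2.4.0-1
```

The same limits can be set globally for all containers in /etc/docker/daemon.json ("log-opts" under the "json-file" driver), followed by a restart of the Docker daemon; existing containers must be recreated to pick up the new options.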


(AT@Austin) #7

Thank you again, Jochen!


(system) #8

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.