Graylog Won't Start - Disk Space Ran Out

1. Describe your incident:

I had my index settings set wrong so I wasn’t limiting disk space usage. Unfortunately, this has caused our Ubuntu VPS with Graylog to run out of disk space and no longer start.

What steps have you already taken to try and solve the problem?

I have found some things online about manually clearing indices/data, but nothing has been working, the paths don't match my install, and a lot of it seems outdated.

**How can the community help:**

Are there any easy ways to delete the log data but keep the configuration? We don't care much about the log data at this point; it's the configuration we have that's important.

Thanks!

Hello && Welcome @HHN

Yep, I've been there. You have a couple of choices:
1. Expand the volume on the Graylog server (see the sketch after this list).
2. Delete old indices in the Web UI.
3. Since Graylog is unable to start, remove old indices through curl, just enough to start Graylog, then remove/edit the indices in the Web UI (which is preferred).
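If you go with option 1 and your VPS uses LVM with free space left in the volume group, something along these lines would grow the root filesystem. The volume group and logical volume names here are just assumptions (check yours with vgs/lvs first), and it assumes ext4; on XFS you would use xfs_growfs instead of resize2fs:

root# lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
root# resize2fs /dev/ubuntu-vg/ubuntu-lv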

Check the size of the directories. I used this first because I found a large log file, and all I had to do was delete it and start the Graylog service.

root# du --max-depth=5 /* | sort -rn | more 
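It also helps to confirm which filesystem is actually full before hunting through directories:

root# df -h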

What I have done was…

Find old indices

curl -X GET "192.168.1.100:9200/_cat/indices?pretty"
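If you have a lot of indices, sorting them by size makes it easier to see what is eating the disk. The s parameter should work on any reasonably recent Elasticsearch version:

curl -X GET "192.168.1.100:9200/_cat/indices?v&s=store.size:desc"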

Delete just enough indices to be able to start the Graylog service.

curl -XDELETE "http://192.168.1.100:9200/graylog_457/"
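Make sure you only delete the oldest indices, not the current write index that the graylog_deflector alias points to; the index name above is just from my setup. If you need to free more space, you can delete a few old ones in one request, for example:

curl -XDELETE "http://192.168.1.100:9200/graylog_455,graylog_456"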

Restart the Graylog service.

root# systemctl restart graylog-server
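Then check whether it actually came back up. The log path below is the default for a DEB/RPM package install, so adjust it if your setup differs:

root# systemctl status graylog-server
root# tail -f /var/log/graylog-server/server.log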

Chances are Elasticsearch also put the indices into read-only mode when the disk filled up, so I would check that setting too.

I think it is something like this to take it out of read-only mode, but I'm not 100% sure.

PUT my_index/_settings
{
  "index.blocks.read_only": false
}
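Keep in mind that when the disk goes past the flood-stage watermark, the block Elasticsearch actually applies is read_only_allow_delete, so clearing that one is usually what gets things moving again. Something like this against all indices should do it (adjust the host to yours); on newer Elasticsearch versions the block is also released automatically once disk usage drops back below the high watermark:

curl -X PUT "192.168.1.100:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": null }'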
