Hello everyone, I hope this won’t sound like the usual question answered by “read the docs”, but I’m at a loss right now.
As per subject line, my Graylog instance is currently reporting “Journal utilization is too high”, and indeed the journal contains almost 400K messages as of right now.
After googling and reading the docs, I have checked Elasticsearch (the process is running and has been restarted; the Node page in the web UI says “Elasticsearch cluster is green. Shards: 16 active, 0 initializing, 0 relocating, 0 unassigned”), and I have verified resource usage (disk space and speed, CPU, RAM, …). Everything appears to be fine and the machine isn’t doing much.
I’ve checked the Elasticsearch and Graylog log files, and all I managed to find is 160k+ instances of “failed to execute bulk item (index) BulkShardRequest [[graylog_3]] containing  requests org.elasticsearch.index.mapper.MapperParsingException: failed to parse [level]”, which, however, appear to be parsing errors.
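In case it matters, this is roughly how I counted those errors. The real log on my box is under the default package-install path (something like `/var/log/elasticsearch/`, which may differ on your setup); the sample file below is just to show the pattern I grepped for:

```shell
# Sample log line standing in for the real Elasticsearch log
# (on the real file this is where the 160k+ count came from)
printf '%s\n' \
  'MapperParsingException: failed to parse [level]' \
  'an unrelated log line' > /tmp/es_sample.log

# Count matching lines
grep -c 'failed to parse \[level\]' /tmp/es_sample.log
# → 1
```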
I’ve now stopped all inputs to make sure the instance doesn’t explode, but the message count in the journal isn’t going down at all.
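For what it’s worth, I’ve been watching the journal count through the REST API as well as the web UI. On my version the endpoint is something like `curl -s -u admin:password http://localhost:9000/api/system/journal` (host, port and exact path vary by Graylog version). A minimal sketch of pulling the count out of the response, using a saved sample with a made-up number:

```shell
# Sample of the JSON the journal endpoint returns; the field I watch is
# uncommitted_journal_entries (the number below is made up for the demo)
cat > /tmp/journal.json <<'EOF'
{"enabled": true, "uncommitted_journal_entries": 398000}
EOF

# Extract the count (python3 used here since jq isn't installed everywhere)
python3 -c "import json; print(json.load(open('/tmp/journal.json'))['uncommitted_journal_entries'])"
# → 398000
```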
Processing is running and the node is marked as ALIVE.
What I believe caused the issue is that the disk filled up, meaning we had to stop everything, expand the disk (it’s a VM), reboot, restart Graylog, MongoDB and Elasticsearch, and delete an older index.
What am I missing?