I am using Graylog 3.3.x and recently had the storage fill up.
I was receiving this error in Graylog:
‘Journal utilization is too high and may go over the limit soon. Please verify that your Elasticsearch cluster is healthy and fast enough. You may also want to review your Graylog journal settings and set a higher limit.’
I grew the storage and it shows as available if I run a df -h.
However, nothing is showing up on my WinLogBeat input and nothing is showing as output to Elasticsearch.
I have stopped the services and deleted the journal files as recommended in several posts, but it did not correct the issue.
Does anyone have any advice on how to further troubleshoot? It would be GREATLY appreciated.
PS - this is what was showing in ElasticSearch logs earlier: https://i.imgur.com/0QGXGtm.png
This is the latest entries: https://i.imgur.com/wsy0h9E.png
This problem is related to disk utilization in your Elasticsearch cluster.
The journal is a mechanism Graylog uses to temporarily store your data when Elasticsearch cannot accept it for some reason.
In your case in particular, your ES cluster isn't receiving data to be indexed because of disk utilization.
If you have spare space in your filesystem, you can change the disk watermark settings in Elasticsearch.
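As a sketch of what changing the watermarks can look like (the percentages and the localhost:9200 endpoint are assumptions; adjust them to your cluster and disk sizes):

```shell
# Transiently raise the disk watermarks so allocation/indexing can resume.
# These values are examples only -- transient settings reset on cluster restart.
curl -XPUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_cluster/settings' -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'

# If the flood-stage watermark was ever hit, Elasticsearch marks indices
# read-only, and on 6.x that block must be cleared manually even after
# space is freed:
curl -XPUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_all/_settings' -d '
{ "index.blocks.read_only_allow_delete": null }'
```

That read-only block is a common reason a cluster still refuses writes after the disk has been grown.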
Otherwise you should consider moving that data to another node with more space.
Based on your Graylog version, I think you can work with Elasticsearch ILM to migrate data between nodes in order to avoid this problem.
P.S.: If you're considering migrating Graylog to v4, ILM is not an option, and you should take a look at this post to learn how to proceed.
Thank you for your reply. I realize that was my initial issue but I currently have over 200GB of free space after the addition I made today. I am still not outputting any logs though.
@poisedforflight my pleasure!
Can we run through a little checklist?
Was the additional free space properly recognized by the operating system?
If so, is this the same filesystem that Elasticsearch is writing data to?
I once saw Elasticsearch fail to write new data to the index. After you confirm the two steps above are OK, can you try restarting your Elasticsearch and see if new data starts being indexed?
What's the status of your Elasticsearch cluster, by chance?
curl -XGET http://localhost:9200/_cluster/health?pretty=true
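Alongside the health check, it's worth looking at per-node disk usage as Elasticsearch sees it, and whether any indices got stuck read-only (localhost:9200 is assumed here; adjust to your node):

```shell
# Per-node shard count and disk usage -- watch the disk.percent column:
curl -XGET 'http://localhost:9200/_cat/allocation?v'

# Check for lingering read-only blocks set during the disk-full event:
curl -XGET 'http://localhost:9200/_all/_settings/index.blocks*?pretty'
```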
How is your Input buffer, Process buffer and Output buffer?
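If you have API access, the journal and buffer state can also be queried from Graylog's REST API; the host, port, and credentials below are placeholders for a default-style setup:

```shell
# Journal state (uncommitted entries, utilization):
curl -u admin:yourpassword 'http://localhost:9000/api/system/journal?pretty=true'

# Current utilization of the input, process, and output buffers:
curl -u admin:yourpassword 'http://localhost:9000/api/system/buffers?pretty=true'
```

If the journal keeps growing while the output buffer sits full, the bottleneck is on the Elasticsearch side rather than in Graylog's inputs.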