We’ve been running Graylog in production at my company for a number of months now, but since updating to the latest version, graylog-server 4.1.0+4eb2147, we keep getting the warning ‘Elasticsearch nodes disk usage above high watermark’.
For context, we’re using an AWS Elasticsearch cluster running v7.10 with 9 nodes in total: 3 master nodes (one in each AZ) and 6 data nodes, each with 200 GB of EBS storage. The Elasticsearch cluster is otherwise in good health.
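For anyone who wants to check this on their own cluster, here’s a rough sketch of how per-node free space can be inspected via the nodes stats API (Python with requests; the endpoint is a placeholder and authentication is glossed over):

```python
import requests

# Hypothetical endpoint for illustration only; substitute your own AWS ES domain.
ES_ENDPOINT = "https://my-domain.eu-west-1.es.amazonaws.com"

# /_nodes/stats/fs reports filesystem stats for every node, dedicated masters
# included, so you can see which nodes have dropped below 22 GB of free space.
resp = requests.get(f"{ES_ENDPOINT}/_nodes/stats/fs", timeout=30)
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    free_gb = node["fs"]["total"]["available_in_bytes"] / 1024 ** 3
    roles = ",".join(node.get("roles", []))
    print(f"{node['name']:<30} roles={roles} free={free_gb:.1f} GB")
```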
Hey @reimlima, thanks for your reply, but unfortunately it doesn’t look like this is the fix for me. The AWS ElasticSearch Service only exposes a limited set of settings, and it doesn’t appear that I can make the change you’ve recommended.
Just to add some additional info here: AWS ElasticSearch Service uses absolute disk watermark settings (low = 25 GB free, high = 22 GB free, flood stage = 1 GB free) and does not support changing these values via the /_cluster/settings REST endpoint. When running dedicated master nodes on certain instance types (such as r5.large.elasticsearch), the free disk space on those nodes is always below the 22 GB high watermark, so the warning never clears.
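To make that concrete, this is roughly the kind of request that works against self-managed Elasticsearch but is rejected by the AWS service (Python with requests; the endpoint and percentage values are just placeholders, not a recommendation):

```python
import requests

# Hypothetical endpoint and example values; on self-managed Elasticsearch this
# is how the watermarks could be raised, but the AWS service rejects it.
ES_ENDPOINT = "https://my-domain.eu-west-1.es.amazonaws.com"

payload = {
    "transient": {
        "cluster.routing.allocation.disk.watermark.low": "90%",
        "cluster.routing.allocation.disk.watermark.high": "95%",
        "cluster.routing.allocation.disk.watermark.flood_stage": "97%",
    }
}

resp = requests.put(f"{ES_ENDPOINT}/_cluster/settings", json=payload, timeout=30)
# On AWS you get an error response instead of {"acknowledged": true}.
print(resp.status_code, resp.text)
```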
While it would be nice for Graylog to include a setting such as “ignore master node disk watermark notifications”, I think this is more of an AWS problem than a Graylog problem. AWS should either allow ES users to update the cluster’s disk watermark settings or provision enough disk space on their Elasticsearch instance types that their own hard-coded thresholds don’t trigger warnings.