Elasticsearch nodes disk usage above high watermark

Hi All,

I wonder if anyone can help.

We’ve been using Graylog in production at my company for a number of months now, but since updating to the latest version of graylog-server (4.1.0+4eb2147) we keep getting the warning ‘Elasticsearch nodes disk usage above high watermark’.

For context, we’re using an AWS Elasticsearch cluster (v7.10) with 9 nodes in total: 3 master nodes, one in each AZ, and 6 data nodes, each with 200 GB of EBS storage. The cluster is otherwise in good health:

{
  "cluster_name" : "[REMOVED]",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 9,
  "number_of_data_nodes" : 6,
  "discovered_master" : true,
  "active_primary_shards" : 145,
  "active_shards" : 146,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
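For anyone tracing where the warning comes from: Elasticsearch watermarks can be expressed either as a percentage of used disk or as an absolute amount of free space, and the check behaves differently for each. A minimal sketch of that logic (this is a hypothetical helper for illustration, not Graylog’s or Elasticsearch’s actual code):

```python
def above_high_watermark(used_bytes: int, total_bytes: int, high: str = "90%") -> bool:
    """Rough sketch of the high-watermark check.

    A percentage watermark (e.g. "90%") trips when used/total exceeds it;
    an absolute watermark (e.g. "22gb") trips when free space falls below it.
    Hypothetical helper for illustration only.
    """
    free_bytes = total_bytes - used_bytes
    if high.endswith("%"):
        return used_bytes / total_bytes > float(high[:-1]) / 100
    # Crude absolute-value parsing: this sketch only handles a "gb" suffix.
    return free_bytes < float(high[:-2]) * 1024 ** 3
```

The distinction matters below: with an absolute watermark, a small disk can sit permanently past the threshold even when barely used.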

Is this a Graylog bug or am I missing something obvious here?

Thanks in advance

Screenshot of the warning in the Graylog web interface; it wouldn’t let me add this to the initial post.

Hi @ripcentos and Welcome!

I faced this problem a while ago; this is what I did to solve it:


Hey @reimlima, thanks for your reply, but unfortunately it doesn’t look like this is the fix for me. The AWS Elasticsearch Service only exposes a limited set of settings, and it doesn’t appear that I can make the change you recommended.

Just to add some additional info here: the AWS Elasticsearch Service uses absolute disk watermark settings expressed as minimum free space (low=25GB, high=22GB, flood=1GB) and does not support changing these values via the /_cluster/settings REST endpoint. When running dedicated master nodes on certain instance types (such as r5.large.elasticsearch), the available disk space will always sit below the high watermark.
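To make the free-space semantics above concrete, here is a small sketch classifying a node against those fixed AWS thresholds (the threshold values come from the post above; the classification function itself is hypothetical):

```python
# AWS ES absolute watermarks from the post above. These are free-space
# minimums, so a node trips a watermark when free space drops BELOW it.
AWS_WATERMARKS_GB = {"low": 25, "high": 22, "flood": 1}

def disk_status(free_gb: float) -> str:
    """Classify a node by free disk space against AWS's fixed watermarks."""
    if free_gb < AWS_WATERMARKS_GB["flood"]:
        return "flood"   # flood stage: index blocks get applied
    if free_gb < AWS_WATERMARKS_GB["high"]:
        return "high"    # high watermark exceeded; this is what Graylog warns about
    if free_gb < AWS_WATERMARKS_GB["low"]:
        return "low"     # low watermark: no new shards allocated to this node
    return "ok"
```

A master node whose volume can never have 22 GB free will therefore report “high” indefinitely, regardless of how healthy the cluster is otherwise.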

While it would be nice for Graylog to include a setting such as “ignore master node disk watermark notifications”, I think this is more of an AWS problem than a Graylog problem. AWS should either allow ES users to update the disk watermark cluster settings or provision enough disk space on their Elasticsearch instance types that their own hard-coded settings don’t produce warnings.
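For comparison, on a self-managed Elasticsearch cluster the watermarks can be changed through the /_cluster/settings endpoint (the very call AWS rejects). A sketch of the request body, with arbitrary example values rather than recommendations:

```python
import json

# Transient settings body for PUT /_cluster/settings on a self-managed
# Elasticsearch cluster. The setting keys are standard Elasticsearch
# disk-allocation settings; the values are arbitrary examples.
settings = {
    "transient": {
        "cluster.routing.allocation.disk.watermark.low": "85%",
        "cluster.routing.allocation.disk.watermark.high": "90%",
        "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    }
}

body = json.dumps(settings)
```

On AWS ES this request is simply not accepted, which is the limitation being discussed in this thread.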


Hi @amclaughlin
If you think this feature could be helpful for other people, create a feature request on GitHub:


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.