### Elasticsearch nodes disk usage above high watermark (triggered a day ago)
There are Elasticsearch nodes in the cluster with almost no free disk; their disk usage is above the high watermark. For this reason Elasticsearch will attempt to relocate shards away from the affected nodes. The affected nodes are: [127.0.0.1]. See the Disk-based shard allocation page in the Elasticsearch Reference for more details.
You’re currently at 93% used. Presumably at some point you were over 95% as well.
cluster.routing.allocation.disk.watermark.low
Controls the low watermark for disk usage. It defaults to 85%, meaning that Elasticsearch will not allocate shards to nodes that have more than 85% disk used. It can also be set to an absolute byte value (like 500mb) to prevent Elasticsearch from allocating shards if less than the specified amount of space is available. This setting has no effect on the primary shards of newly created indices or, specifically, on any shards that have never previously been allocated.
cluster.routing.allocation.disk.watermark.high
Controls the high watermark. It defaults to 90%, meaning that Elasticsearch will attempt to relocate shards away from a node whose disk usage is above 90%. It can also be set to an absolute byte value (similarly to the low watermark) to relocate shards away from a node if it has less than the specified amount of free space. This setting affects the allocation of all shards, whether previously allocated or not.
cluster.routing.allocation.disk.watermark.flood_stage
Controls the flood stage watermark, which defaults to 95%. Elasticsearch enforces a read-only index block (index.blocks.read_only_allow_delete) on every index that has one or more shards allocated on the node, and that has at least one disk exceeding the flood stage. This setting is a last resort to prevent nodes from running out of disk space. The index block must be released manually once disk utilization falls below the high watermark.
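Taken together, the three settings above split disk usage into bands. A minimal sketch (not Graylog or Elasticsearch code, just the documented default thresholds applied to a usage figure; clusters may override them):

```python
# Classify a node's disk usage against the default Elasticsearch
# watermarks: 85% low, 90% high, 95% flood stage.

def watermark_state(used_fraction: float) -> str:
    """Return which watermark band a disk-used fraction falls into."""
    if used_fraction >= 0.95:
        return "flood_stage"   # indices get a read_only_allow_delete block
    if used_fraction >= 0.90:
        return "high"          # ES relocates shards away from the node
    if used_fraction >= 0.85:
        return "low"           # ES stops allocating new shards to the node
    return "ok"

print(watermark_state(0.93))  # "high" -- matches the 93% figure in this thread
print(watermark_state(0.86))  # "low"
```

So at 93% the node sits in the "high" band (shards being relocated away), and if it ever touched 95% the flood-stage read-only block kicked in as well.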
You need to investigate your disk utilization. Do the math on your indices. For each index set, what's the average size of an index? How many are being retained? When you add everything up, does it approach or exceed the total disk space available to Elasticsearch?
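As a back-of-the-envelope version of that math, a hypothetical sketch (the index-set names, sizes, and retention counts are made-up placeholders, not figures from this thread):

```python
# Retention math: for each index set, multiply the average index size by
# the number of indices retained, then compare the projected total against
# the disk available to Elasticsearch. All figures are placeholders.

GIB = 1024 ** 3

index_sets = {
    # name: (average index size in bytes, indices retained)
    "graylog_default": (40 * GIB, 30),
    "firewall_logs":   (25 * GIB, 20),
}

total_disk = 2000 * GIB  # disk available to Elasticsearch

projected = sum(size * count for size, count in index_sets.values())
print(f"projected usage: {projected / GIB:.0f} GiB "
      f"({projected / total_disk:.0%} of {total_disk / GIB:.0f} GiB)")
# -> projected usage: 1700 GiB (85% of 2000 GiB)
```

If the projection lands at or above the 85% low watermark, either retention has to shrink or the disk has to grow; otherwise the cluster will keep tripping these alerts as indices rotate.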
Hi @ttsandrew, I hope you're doing well.
I added some disk space for Graylog, but I'm sure I still have a problem with Elasticsearch:
Elasticsearch nodes disk usage above low watermark (triggered 12 days ago)
There are Elasticsearch nodes in the cluster running out of disk space; their disk usage is above the low watermark. For this reason Elasticsearch will not allocate new shards to the affected nodes. The affected nodes are: [127.0.0.1]. See the Disk-based shard allocation page in the Elasticsearch Reference for more details.
```
[root@AYOUB ~]# df -h
Filesystem               Size  Used  Avail Use% Mounted on
devtmpfs                  14G     0    14G   0% /dev
tmpfs                     14G     0    14G   0% /dev/shm
tmpfs                     14G  8.6M    14G   1% /run
tmpfs                     14G     0    14G   0% /sys/fs/cgroup
/dev/mapper/centos-root  2.0T  1.7T   286G  86% /
/dev/xvda1              1014M  243M   772M  24% /boot
tmpfs                    2.8G     0   2.8G   0% /run/user/0
```
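The df output itself explains the remaining alert. A quick check of the root filesystem figures shown above against the default watermarks:

```python
# The root filesystem (/dev/mapper/centos-root) shows 1.7T used of 2.0T.
# Compare that against the default Elasticsearch watermarks
# (85% low / 90% high / 95% flood stage).

used_tib, size_tib = 1.7, 2.0
used_fraction = used_tib / size_tib
print(f"{used_fraction:.0%} used")  # 85% (df rounds up to 86%)

print("above low watermark (85%):", used_fraction >= 0.85)   # True
print("above high watermark (90%):", used_fraction >= 0.90)  # False
```

So even after adding disk, the node still sits right at the 85% low watermark, which is consistent with the low-watermark alert: Elasticsearch will refuse to allocate new shards there until usage drops further.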
```
2021-02-08T08:01:06.479+01:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
```
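That deflector error is typically a consequence of the flood-stage read-only block described earlier: once disk usage is back below the high watermark, the block has to be removed by hand by setting `index.blocks.read_only_allow_delete` back to null. A hedged sketch of that request (it assumes a single node reachable at http://localhost:9200 with no authentication; only send it after freeing disk space):

```python
# Build a PUT request that clears the flood-stage read-only block from
# all indices by resetting index.blocks.read_only_allow_delete to null.
# Sketch only: assumes Elasticsearch at http://localhost:9200.
import json
import urllib.request

body = json.dumps({"index.blocks.read_only_allow_delete": None}).encode()
req = urllib.request.Request(
    "http://localhost:9200/_all/_settings",
    data=body,
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req)  # uncomment once disk usage is under control
print(body.decode())  # {"index.blocks.read_only_allow_delete": null}
```

With the block cleared, the IndexRotationThread should be able to point the deflector at a freshly created index again.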