Elasticsearch nodes disk usage

hello everyone,

Elasticsearch nodes disk usage above high watermark

Journal utilization is too high

Uncommitted messages deleted from journal

I need your help. I am receiving a large volume of logs, which is causing problems with my Elasticsearch…

I have set up index rotation for a specific period. I need your advice on keeping my server available.

Your clue here is the very first error message you posted. Check your disk utilization and free up some space.


### Elasticsearch nodes disk usage above high watermark (triggered a day ago)

There are Elasticsearch nodes in the cluster with almost no free disk, their disk usage is above the high watermark. For this reason Elasticsearch will attempt to relocate shards away from the affected nodes. The affected nodes are: [127.0.0.1] Check Disk-based shard allocation | Elasticsearch Reference [master] | Elastic for more details.

[root@AYOUB ~]# curl -XGET 'localhost:9200/_cat/allocation?v'
shards disk.indices disk.used disk.avail disk.total disk.percent host      ip        node
   132      541.7gb     1.6tb    107.4gb      1.7tb           93 127.0.0.1 127.0.0.1 2wAN6HV
   224                                                                               UNASSIGNED
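To find out why those 224 shards are unassigned, the cluster allocation explain API is useful. A sketch, assuming Elasticsearch is listening on localhost:9200 as in the output above:

```shell
# Ask Elasticsearch why a shard cannot be allocated.
# With no request body, it explains the first unassigned shard it finds;
# with disk-watermark problems the explanation will mention the watermark.
curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'
```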

[root@ayoub ~]# curl -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "graylog",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 161,
  "active_shards" : 161,
  "relocating_shards" : 0,
  "initializing_shards" : 4,
  "unassigned_shards" : 191,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 2,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 71197,
  "active_shards_percent_as_number" : 45.2247191011236
}
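Those numbers are internally consistent: 161 active shards out of 161 active + 4 initializing + 191 unassigned = 356 total shards gives the reported 45.22%. A quick check:

```shell
# 161 active / (161 active + 4 initializing + 191 unassigned) shards
awk 'BEGIN { printf "%.2f%%\n", 161 / (161 + 4 + 191) * 100 }'
# prints 45.22%
```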

You’re currently at 93% used. Presumably at some point you were over 95% as well.

cluster.routing.allocation.disk.watermark.low

Controls the low watermark for disk usage. It defaults to 85% , meaning that Elasticsearch will not allocate shards to nodes that have more than 85% disk used. It can also be set to an absolute byte value (like 500mb ) to prevent Elasticsearch from allocating shards if less than the specified amount of space is available. This setting has no effect on the primary shards of newly-created indices or, specifically, any shards that have never previously been allocated.

cluster.routing.allocation.disk.watermark.high

Controls the high watermark. It defaults to 90% , meaning that Elasticsearch will attempt to relocate shards away from a node whose disk usage is above 90%. It can also be set to an absolute byte value (similarly to the low watermark) to relocate shards away from a node if it has less than the specified amount of free space. This setting affects the allocation of all shards, whether previously allocated or not.

cluster.routing.allocation.disk.watermark.flood_stage

Controls the flood stage watermark, which defaults to 95%. Elasticsearch enforces a read-only index block ( index.blocks.read_only_allow_delete ) on every index that has one or more shards allocated on the node, and that has at least one disk exceeding the flood stage. This setting is a last resort to prevent nodes from running out of disk space. The index block must be released manually when the disk utilization falls below the high watermark.
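Once you have freed some space (or temporarily, while you clean up), the watermarks can be adjusted and the flood-stage read-only block released through the cluster and index settings APIs. A sketch, assuming a single node on localhost:9200; the percentage values are illustrative, not recommendations:

```shell
# Temporarily raise the watermarks (transient settings revert on a full cluster restart)
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'

# Release the read-only block that the flood stage placed on the indices;
# as the docs quoted above say, this is not lifted automatically in this version
curl -XPUT 'localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{
  "index.blocks.read_only_allow_delete": null
}'
```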

You need to investigate your disk utilization. Do the math on your indices: for each index set, what is the average size of an index, and how many indices are being retained? When you add everything up, does it approach or exceed the total disk space available to Elasticsearch?
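To do that math, `_cat/indices` lists per-index store sizes, and summing them shows how much disk the indices account for. A sketch that sums sample output offline; the index names and byte counts in the here-doc are made up, and in practice you would pipe in a live `curl -s 'localhost:9200/_cat/indices?h=index,store.size&bytes=b'` call instead:

```shell
# Sum per-index store sizes (bytes) and report the total in GB.
# The sample lines stand in for real _cat/indices output.
cat <<'EOF' | awk '{ total += $2 } END { printf "total: %.1f GB\n", total / 1024^3 }'
graylog_0 5368709120
graylog_1 4294967296
graylog_2 6442450944
EOF
# prints total: 15.0 GB
```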


Hi @ttsandrew, I hope you're doing well.
I added some disk space for Graylog, but I'm sure I still have a problem with Elasticsearch :cold_sweat:

Elasticsearch nodes disk usage above low watermark (triggered 12 days ago)

There are Elasticsearch nodes in the cluster running out of disk space, their disk usage is above the low watermark. For this reason Elasticsearch will not allocate new shards to the affected nodes. The affected nodes are: [127.0.0.1] Check Disk-based shard allocation | Elasticsearch Reference [master] | Elastic for more details.
[root@AYOUB ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 14G 0 14G 0% /dev
tmpfs 14G 0 14G 0% /dev/shm
tmpfs 14G 8.6M 14G 1% /run
tmpfs 14G 0 14G 0% /sys/fs/cgroup
/dev/mapper/centos-root 2.0T 1.7T 286G 86% /
/dev/xvda1 1014M 243M 772M 24% /boot
tmpfs 2.8G 0 2.8G 0% /run/user/0

2021-02-08T08:01:06.479+01:00 ERROR [IndexRotationThread] Couldn’t point deflector to a new index

What does your ES cluster health say? Your error indicates that the cluster is not available.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.