Graylog stopped with timeout

I have Graylog 4.x on Ubuntu 20.04, a standard installation.
I can log in to my Graylog, but all data is one week old.
The service says it is running, but: graylog-server.service: Failed with result 'timeout'.
In the Graylog log I have this warning:
WARN [MessagesAdapterES6] Failed to index message: index=<graylog_0> id= error=<{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}>

In syslog I have this error:
2021-11-18T23:25:02.093084+00:00 rsyslog systemd[1]: motd-news.service: Failed with result 'exit-code'.
2021-11-18T23:25:02.094389+00:00 rsyslog systemd[1]: Failed to start Message of the Day.

This usually means you have hit the high watermark on the disk where your Elasticsearch keeps its data.

You haven't posted which version of Elasticsearch you have, but this Elasticsearch article should give you the information you need about Elasticsearch disk thresholds and how to set/clear them. You will need to either roll off data from indices or increase disk space before you clear the read-only flag on the graylog_0 index.
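If Elasticsearch is listening on the default localhost:9200 (as it would be in a standard Graylog install), something like this will show per-node disk usage as Elasticsearch sees it; the disk.percent column tells you how close you are to the thresholds:

~# curl -s 'localhost:9200/_cat/allocation?v'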

If you can attach your Graylog and Elasticsearch logs we can offer more specific advice, but as above it’s likely you’re running out of ES storage.

My Elasticsearch is:
~# curl localhost:9200
{
  "name" : "peoR6Gc",
  "cluster_name" : "graylog",
  "cluster_uuid" : "o-clJDFLSz2BcV-fK5ld_Q",
  "version" : {
    "number" : "6.8.20",
    "build_flavor" : "oss",
    "build_type" : "deb",
    "build_hash" : "c859302",
    "build_date" : "2021-10-07T22:00:24.085009Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.3",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

and Graylog version 4.2.

The log file /var/log/elas…/graylog says:
[2021-11-19T22:23:35,163][WARN ][o.e.c.r.a.DiskThresholdMonitor] [peoR6Gc] high disk watermark [90%] exceeded on [peoR6GcRQpqhJZlebPSo5g][peoR6Gc][/var/lib/elasticsearch/nodes/0] free: 1.4gb[9.3%], shards will be relocated away from this node


[2021-11-20T03:45:38,447][INFO ][o.e.c.r.a.DiskThresholdMonitor] [peoR6Gc] low disk watermark [85%] exceeded on [peoR6GcRQpqhJZlebPSo5g][peoR6Gc][/var/lib/elasticsearch/nodes/0] free: 2.3gb[14.8%], replicas will not be assigned to this node

@tmacgbay,
how do I increase disk space? I have 12 GB of disk space.

Increasing your disk space goes beyond Graylog… I know little about what you have and how you have it set up. I am sure there are plenty of posts on the internet relevant to your system that you could find with a quick search.

This depends on whether your Graylog server resides on physical hardware or on a virtual machine.
If it's hardware, either you need a new HDD and to clone your Graylog server to the larger drive, or, if your current drive has more space, you can extend the partition.

If your Graylog server is on a virtual machine, it is easy to add more space to the drive. Once you increase the volume you then need to add it to the correct partition on the Graylog server.

https://help.ubuntu.com/stable/ubuntu-help/disk-resize.html.en
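For example, on an Ubuntu 20.04 VM with the default LVM layout, growing the filesystem after enlarging the virtual disk might look roughly like this (the device, partition number, and volume names below are assumptions; check yours with lsblk first):

~# growpart /dev/sda 3                                 # grow partition 3 to fill the new space (hypothetical device/partition)
~# pvresize /dev/sda3                                  # tell LVM the physical volume grew
~# lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # grow the logical volume and resize the filesystem (default Ubuntu LVM names assumed)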
Hope that helps

I know how to resize a disk, but my issue is that there are 12 GB free on my Ubuntu box, so why does it say there is no disk space?

Hello,
My apologies, I misread your post.

  • I believe it's because of this statement in your logs:
 low disk watermark [85%] exceeded on
  • Definition of low disk watermark [85%] exceeded:

When disk usage on a host hits 85 percent, Elasticsearch stops allocating new shards to that node, though the node itself keeps running. This threshold is an Elasticsearch configuration: by default, cluster.routing.allocation.disk.watermark.low is set to 85% so that no new shards are allocated to a host once its disk usage exceeds that level.
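Note that the 12 GB you see free may sit on a different partition than the Elasticsearch data path, which per your log is under /var/lib/elasticsearch. A quick way to check both the filesystem and the current watermark settings, assuming the default localhost:9200 endpoint from a standard install:

~# df -h /var/lib/elasticsearch                        # usage of the filesystem actually holding the ES data
~# curl -s 'localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty' | grep disk.watermark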

  • The next stages are the high disk watermark and the flood stage

The high disk watermark (cluster.routing.allocation.disk.watermark.high) defaults to 90%, which is the level your log shows being exceeded; once a node crosses it, Elasticsearch tries to relocate shards away from that node. Above that sits the flood stage watermark (cluster.routing.allocation.disk.watermark.flood_stage), which defaults to 95%. Elasticsearch enforces a read-only index block (index.blocks.read_only_allow_delete) on every index that has one or more shards allocated on a node with at least one disk exceeding the flood stage. This setting is a last resort to prevent nodes from running out of disk space. Depending on your Elasticsearch version, the release mechanism is different:

  • Before Elasticsearch 7, the index block must be released manually when disk utilization falls below the high watermark (see the example after this list).

  • Since Elasticsearch 7, the index block is automatically released when disk utilization falls below the high watermark.

  • Maybe these posts will enlighten you as to what's going on.
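Since your Elasticsearch is 6.8.20, that means you will need to release the block yourself once you have freed or added disk space. A minimal sketch, assuming the default localhost:9200 endpoint (replace _all with graylog_0 to target only that one index):

~# curl -X PUT 'localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'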

Hope that helps

