Graylog not showing messages in search view

Hi,

I installed the OVA image (3.1.x) earlier and had the same issue after some time: it works for a day or so and then stops, yet I can see messages coming in via System/Inputs.

So I then installed Graylog (3.1.2+9e96b08) from scratch on an Ubuntu 18.04 server and it worked fine for about 8-9 hours. Messages showed up in search, but last night at 10 pm it stopped showing messages in the search view again, even though I can still see messages arriving in the System/Inputs view.

Also, the disk is not full, so that is not the problem, and both Elasticsearch and graylog-server are running and active according to the following commands.

systemctl status elasticsearch.service
systemctl status graylog-server.service

FYI, I added about 20 servers and a few firewalls sending syslog via UDP to the Graylog server; it worked for several hours as mentioned above, then stopped showing messages in the search view again.

Is there a limit to how much data can be stored, and could that be the issue? That is, have I reached some maximum number of stored messages?

Here it says that some messages failed; I'm not sure why, though.

/Markus

What does the error show when you click the blue “Show errors” button in the Index failures section?

Jan,

See the attached screenshot of the errors.

/Markus

Hi,

I see in the Graylog log that it says disk watermark [90%] or [95%] exceeded; see below.
Yet my disk space is nowhere near 95% full. I know I need to increase the partition, but currently only 65% is used.

How do I resolve this?

root@host1:/var/log/elasticsearch# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            5.6G     0  5.6G   0% /dev
tmpfs           1.2G  644K  1.2G   1% /run
/dev/sda2       6.9G  4.3G  2.4G  65% /
tmpfs           5.7G     0  5.7G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           5.7G     0  5.7G   0% /sys/fs/cgroup
/dev/sda1       922M   77M  782M   9% /boot
tmpfs           1.2G     0  1.2G   0% /run/user/1000

[2019-09-20T00:54:31,566][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 363.5mb[5.1%], shards will be relocated away from this node

[2019-09-20T00:55:01,572][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 361.4mb[5.1%], shards will be relocated away from this node

[2019-09-20T00:55:01,573][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] rerouting shards: [high disk watermark exceeded on one or more nodes]

[2019-09-20T00:55:31,579][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 359.5mb[5%], shards will be relocated away from this node

[2019-09-20T00:56:01,583][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 357.5mb[5%], shards will be relocated away from this node

[2019-09-20T00:56:01,584][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] rerouting shards: [high disk watermark exceeded on one or more nodes]

[2019-09-20T00:56:31,591][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 355.1mb[5%], shards will be relocated away from this node

[2019-09-20T00:57:01,596][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 353.5mb[5%], shards will be relocated away from this node

[2019-09-20T00:57:01,597][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] rerouting shards: [high disk watermark exceeded on one or more nodes]

[2019-09-20T00:57:31,603][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 351mb[4.9%], all indices on this node will be marked read-only

[2019-09-20T00:58:01,609][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 349.1mb[4.9%], all indices on this node will be marked read-only

[2019-09-20T00:58:31,614][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 346.9mb[4.9%], all indices on this node will be marked read-only

[2019-09-20T00:59:01,620][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 344.3mb[4.8%], all indices on this node will be marked read-only

[2019-09-20T00:59:31,626][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 342.7mb[4.8%], all indices on this node will be marked read-only

[2019-09-20T01:00:01,632][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 340.3mb[4.8%], all indices on this node will be marked read-only

[2019-09-20T01:00:31,637][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 338.3mb[4.7%], all indices on this node will be marked read-only

[2019-09-20T01:01:01,641][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 336.8mb[4.7%], all indices on this node will be marked read-only

[2019-09-20T01:01:31,647][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 335.1mb[4.7%], all indices on this node will be marked read-only

[2019-09-20T01:02:01,651][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 334.1mb[4.7%], all indices on this node will be marked read-only

[2019-09-20T01:02:31,654][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 333.1mb[4.7%], all indices on this node will be marked read-only

[2019-09-20T01:03:01,657][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 331.9mb[4.7%], all indices on this node will be marked read-only

[2019-09-20T01:03:31,662][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 330.7mb[4.6%], all indices on this node will be marked read-only

[2019-09-20T01:04:01,668][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 329.4mb[4.6%], all indices on this node will be marked read-only

[2019-09-20T01:04:31,674][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 328mb[4.6%], all indices on this node will be marked read-only

[2019-09-20T01:05:01,678][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] flood stage disk watermark [95%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 327.1mb[4.6%], all indices on this node will be marked read-only

[2019-09-20T01:05:31,683][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 374.7mb[5.3%], shards will be relocated away from this node

[2019-09-20T01:05:31,684][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] rerouting shards: [high disk watermark exceeded on one or more nodes]

[2019-09-20T01:06:01,689][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 373.5mb[5.2%], shards will be relocated away from this node

[2019-09-20T01:06:31,694][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 372.4mb[5.2%], shards will be relocated away from this node

[2019-09-20T01:06:31,695][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] rerouting shards: [high disk watermark exceeded on one or more nodes]

[2019-09-20T01:07:01,707][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 370.9mb[5.2%], shards will be relocated away from this node

[2019-09-20T01:07:31,712][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ufVPdwi] high disk watermark [90%] exceeded on [ufVPdwi4TzyGdFdCV2BIfA][ufVPdwi][/var/lib/elasticsearch/nodes/0] free: 370mb[5.2%], shards will be relocated away from this node
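For reference, the free-space figures in these warnings are measured by Elasticsearch against its own data path (/var/lib/elasticsearch, which lives on the 6.9G /dev/sda2 above), and the default high and flood-stage watermarks are 90% and 95% used. A rough, illustrative sketch of where those thresholds fall on a filesystem that size:

```shell
# Rough arithmetic (illustrative only): where Elasticsearch's default
# disk watermarks fall on a filesystem the size of /dev/sda2 (~6.9 GiB).
fs_mb=7065                    # ~6.9 GiB expressed in MiB
high_free=$((fs_mb / 10))     # high watermark: 90% used -> 10% must be free
flood_free=$((fs_mb / 20))    # flood stage:    95% used ->  5% must be free
echo "high watermark trips below ~${high_free} MiB free"
echo "flood stage trips below ~${flood_free} MiB free"
# -> ~706 MiB and ~353 MiB, which matches the "free: 3xx.xmb" values
#    in the warnings above.
```

This would suggest the partition really was down to a few hundred MiB free at the moment those warnings fired, even if df reported more free space when it was run later.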

/Markus

Please use the search within this community - we have written up the solution multiple times.


Hi,

I didn’t see any posts related to this error, but maybe I searched with the wrong keywords.
It would be nice if you could send me the link if it was posted before.

Anyway, this is what I found by searching the web, and it resolved the issue.

Add the following to elasticsearch.yml:

vim /etc/elasticsearch/elasticsearch.yml

cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
cluster.routing.allocation.disk.watermark.low: 30gb
cluster.routing.allocation.disk.watermark.high: 20gb
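From what I've read, these watermark settings can also be given as percentages of used disk instead of absolute free space, but all three must then use the same style - Elasticsearch rejects a mix of percentages and byte values. An illustrative percentage-based equivalent (example values, not a recommendation):

```yaml
# /etc/elasticsearch/elasticsearch.yml - percentage style (illustrative)
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 95%
```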

Then run this:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
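One hypothetical way to confirm the read-only flag is really gone on every index (assuming Elasticsearch on localhost:9200): pipe the index settings into a small grep helper. The sample input below is made up for illustration; against a live node you would feed it the curl output shown in the comment instead.

```shell
# Against a live node you would run something like:
#   curl -s 'http://localhost:9200/_all/_settings' | check_read_only
# check_read_only just greps its stdin for a still-set block flag.
check_read_only() {
  if grep -q '"read_only_allow_delete" *: *"true"'; then
    echo "some indices are still read-only"
  else
    echo "no read-only indices"
  fi
}

# Illustrative input (not from the real cluster): an index with no blocks set.
echo '{"graylog_0":{"settings":{"index":{"blocks":{}}}}}' | check_read_only
# -> no read-only indices
```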

Not sure if this is best practice, but maybe you know?

/Markus

The exact limit values are up to you - but that is the only way to resolve it.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.