Graylog does not search!

Hi everyone,
I set up a Graylog server, but after a day the server logs stopped coming in. Can you help me urgently? I also keep getting this error:

WARN [Messages] Failed to index message: index=<graylog_0> id=<580e2050-9323-11e9-b787-2a3dba43ac3d> error=<{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}>
2019-06-20T09:19:22.589+03:00 ERROR [Messages] Failed to index [1] messages. Please check the index error log in your web interface for the reason. Error: One or more of the items in the Bulk request failed, check BulkResult.getItems() for more information.

Check the disk space on your ES server.
ES has set the index to read-only meaning that Graylog is unable to write messages to it.

This can be caused by the ES host running out of disk space, so that's the first thing I'd check.
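For example, on the ES host something like this will show whether the data partition is full (the path assumes the default Elasticsearch data directory):

df -h /var/lib/elasticsearch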

Also check your ES log file for any errors relating to this; the file is normally at /var/log/elasticsearch/graylog.log
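A quick way to spot the relevant warnings, assuming the default log location:

grep -i "disk watermark" /var/log/elasticsearch/graylog.log | tail -n 5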

As for making the index writable again, this post should help:

Hi, thanks for your reply. The contents of the /var/log/elasticsearch/graylog.log file are as follows.
I also can't find the places I'm supposed to change in the thread you linked. Where do I need to look?

][WARN ][o.e.c.r.a.DiskThresholdMonitor] [qzb4SQO] flood stage disk watermark [95%] exceeded on [qzb4SQOrTUeGlj3JelllCg][qzb4SQO][/var/lib/elasticsearch/nodes/0] free: 147.4mb[3.7%], all indices on this node will be marked read-only
[2019-06-20T12:19:11,132][WARN ][o.e.c.r.a.DiskThresholdMonitor] [qzb4SQO] flood stage disk watermark [95%] exceeded on [qzb4SQOrTUeGlj3JelllCg][qzb4SQO][/var/lib/elasticsearch/nodes/0] free: 147.4mb[3.7%], all indices on this node will be marked read-only
[2019-06-20T12:19:41,137][WARN ][o.e.c.r.a.DiskThresholdMonitor] [qzb4SQO] flood stage disk watermark [95%] exceeded on [qzb4SQOrTUeGlj3JelllCg][qzb4SQO][/var/lib/elasticsearch/nodes/0] free: 147.4mb[3.7%], all indices on this node will be marked read-only

Yep. Looks like you are/were out of disk space.

Check the available disk space, increase it as required, and then follow the steps to make the index writable again; that should resolve your issue.

How do I change the index? Can you help? I don't know which file is where.

You need to fix your disk space issue and then look at the stackoverflow thread I linked in my original response.

I can't find the places to change in the stackoverflow thread. Which files do I need to look at?

Your question doesn't make sense.

Check your available disk space. Clear out unnecessary files to free up space or increase the size of the disk/add an additional disk for ES to use.
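For example, to see which directories are using the space (the node path is taken from your log; adjust it if yours differs):

df -h
du -sh /var/lib/elasticsearch/nodes/0/indices/* | sort -h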

Then look at the below thread on how to make your index writable again.

Do I need to replace elasticsearch.yml, or add code to it?

No. Please re-read my responses and look at the thread I have linked.

I can't find the places in your link; it doesn't say where I'm supposed to write this.

I will try to explain more slowly.

Your disk partition, where the Elasticsearch indexes are stored, is full. That's the reason the indexes were set to read-only: there is simply no room left to write. From your log, 147.4 MB free is only 3.7% of the disk, so the whole partition is about 4 GB; that's definitely too small.

Increase the disk partition if there is more space on the physical disk, or replace the physical disk with a larger one and transfer the indexes to it (that's a separate story).
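For example, if the partition happens to be an LVM volume with free space in its volume group, something like this would grow it (the volume name here is hypothetical; -r resizes the filesystem along with the volume):

lvextend -r -L +10G /dev/mapper/vg0-var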

After that, click the link in this post and read how to change the index from read-only back to read-write.

Karlis, thanks for your answer, but where do I run the code? I don't know.

Is it enough to run this command?
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": false }'

Yes, you can use the curl command, but as the next commenter in that thread states, you should be setting it to null rather than false. Check the page and make sure it is the documentation for the correct version of ES; the Elasticsearch documentation lets you copy the curl version of the command to your clipboard.

Also of note: if you just run the curl command and haven't cleared up the disk space issue, you will run into the same problem again pretty quickly.
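For reference, the null variant from that thread looks like this (setting it to null resets index.blocks.read_only_allow_delete to its default instead of pinning it to false; adjust the host and port for your setup):

curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": null }'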
