Restart collecting messages

We use Graylog 2.5.2 on Centos.
Last Monday Graylog stopped collecting messages because the disk partition on which the Elasticsearch indices were stored was nearly full.
We cleaned up some indices and moved the Elasticsearch data store to a bigger disk partition.
But Graylog still doesn't collect messages.
We see that Graylog receives messages ("In 2,879 / Out 2,879 msg/s").
But the last stored messages are still from the moment Graylog stopped because of the full disk.
How can I restart the collecting of messages by Graylog/elasticsearch?

I just found the following messages in /var/log/graylog-server/server.log:
2019-04-05T16:42:59.816+02:00 WARN [Messages] Failed to index message: index=<graylog_15> id=<1bc1949f-57b1-11e9-8e1a-005056b6d056> error=<{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}>
2019-04-05T16:42:59.816+02:00 ERROR [Messages] Failed to index [500] messages. Please check the index error log in your web interface for the reason. Error: One or more of the items in the Bulk request failed, check BulkResult.getItems() for more information.
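The cluster_block_exception indicates an index-level block set inside Elasticsearch, and it can be inspected directly over the REST API. A minimal sketch, assuming Elasticsearch listens on 127.0.0.1:9200:

```shell
# List the block-related settings on every index; a blocked index shows
# "index.blocks.read_only_allow_delete": "true" in the output.
curl -s "http://127.0.0.1:9200/_all/_settings/index.blocks.*?pretty"
```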

Please read the error message again - I have made something bold:

2019-04-05T16:42:59.816+02:00 WARN [Messages] Failed to index message: index=<graylog_15> id=<1bc1949f-57b1-11e9-8e1a-005056b6d056> error=<{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}>

I already saw the "index read-only" part.
My question now is: at what "level" is the index read-only?
At the Linux level, all the directories and files are writable by user "elasticsearch",
just like before the move.
ls -ld /home/elasticsearch/nodes/0/indices/Umc_Kz9XRGCdZxKweEWQqg
drwxr-xr-x. 7 elasticsearch elasticsearch 56 Apr 5 16:59 /home/elasticsearch/nodes/0/indices/Umc_Kz9XRGCdZxKweEWQqg
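File permissions are not the issue here: the block is set by Elasticsearch itself when disk usage crosses the flood-stage watermark (95% by default on Elasticsearch 6.x). A quick way to check both views, assuming the data path from the ls output above and a node on 127.0.0.1:9200:

```shell
# Free space on the partition that holds the Elasticsearch data directory.
df -h /home/elasticsearch

# Disk usage as Elasticsearch itself sees it, per node.
curl -s "http://127.0.0.1:9200/_cat/allocation?v"
```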

Use the Elasticsearch API to query the data, but first I suggest checking Elasticsearch's log.
There are a lot of reports of the same problem in the community.

According to:

Just check your storage first. When it's low, Elasticsearch automatically sets the index to read-only mode. To deal with it, go to your Kibana Dev Tools console and
run the command below:

PUT .kibana/_settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
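Kibana's Dev Tools console is just a convenience for sending HTTP requests to Elasticsearch, so the same request can be sent with curl. A sketch of the equivalent call, assuming Elasticsearch on 127.0.0.1:9200 (note that it only clears the block on the .kibana index; a Graylog index such as graylog_15 would need the same treatment):

```shell
# Clear the read-only/allow-delete block on the .kibana index only.
curl -X PUT "http://127.0.0.1:9200/.kibana/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"blocks": {"read_only_allow_delete": "false"}}}'
```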

Now I had to find out what a "Kibana Dev Tools console" is.
Can you give me some help?

Great start.
How is Kibana's problem related to your Graylog problem?
I suggest using curl to set it via the Elasticsearch API.


I spent two minutes, and I found the Graylog version.

Kibana is not directly related to my Graylog problem.
I got the suggestion to use Kibana to set the read_only flag to false.
I installed Kibana and executed the PUT command, but the read_only flag is still on.
I am now trying to use curl to set the read_only flag to false.
The command below gives no error message:
curl -X PUT "http://127.0.0.1:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d '{"persistent": {"cluster.routing.allocation.enable": "all"}}'

Now I have to find the right command to set the read_only flag to false.
This is new for me, so I don't know what must stay the same and what must be changed in the curl command above.
Can anyone help me?
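Before changing anything, it may help to check whether the block is still set on the failing index from the log message (graylog_15). A sketch, assuming the same host:

```shell
# Print the read-only block setting for the index named in the error;
# an empty settings object means the block has already been cleared.
curl -s "http://127.0.0.1:9200/graylog_15/_settings/index.blocks.read_only_allow_delete?pretty"
```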

The command below solved my problem:
$ curl -X PUT "http://127.0.0.1:9200/settings?pretty" -H 'Content-Type: application/json' -d '{"index": {"blocks": {"read_only_allow_delete": "false"}}}'
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "settings"
}
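A caveat: as written, the path segment "settings" names an index, so this request creates or updates an index literally called settings (which is why the response contains "index" : "settings"). The usual way to clear the block on every index is an _all/_settings update; a sketch, assuming the same host:

```shell
# Remove the read-only block from all indices; setting the value to null
# deletes the override entirely instead of pinning it to "false".
curl -X PUT "http://127.0.0.1:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```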

Thanks to everyone who helped me


Did you check the usage on the disks where Elasticsearch stores its data? Because otherwise your solution will be very, very temporary.

The elasticsearch indices have been moved to a disk partition with 6 times more disk space.
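Moving to a larger partition addresses the immediate cause, but Elasticsearch will set the block again whenever the flood-stage watermark is crossed. If the defaults don't fit the new disk, the watermarks can be tuned; the values below are purely illustrative, and the flood_stage setting exists on Elasticsearch 6.x:

```shell
# Example: adjust the disk watermarks cluster-wide (illustrative values).
curl -X PUT "http://127.0.0.1:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "transient": {
      "cluster.routing.allocation.disk.watermark.low": "85%",
      "cluster.routing.allocation.disk.watermark.high": "90%",
      "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
    }
  }'
```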

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.