I need help with resetting the read-only index block on the index

I encountered a disk space issue, due to which the Elasticsearch cluster is red with 2 unassigned shards.
In the logs, I found the errors below.

Failed to index message: index=<graylog_118> id=<2ef7bf82-410e-11ee-8c48-00163e09df48> error=<{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}>

I need help resetting the read-only index block on the index.

OS Information: Linux 4.18.0-477.15.1.el8_8.x86_64
Package Version: 4.2.4+b643d2b

I followed the steps given on the page below to run the curl command, but it gives me a "type: ApiError, message: HTTP 404 Not Found" error.


Can you share what indexer/backend you are using? Elasticsearch, OpenSearch? And what version?

It's Elasticsearch, version 6.8.22-1.

I restarted Elasticsearch, and when I checked the service status it gave the error below.

2023-08-30 07:45:11,967 main ERROR Null object returned for RollingFile in Appenders.
Aug 30 07:45:11 elasticsearch[906215]: 2023-08-30 07:45:11,968 main ERROR Unable to locate appender "deprecation_rolling" for logg

Also, the curl command gives me this error: {"type":"ApiError","message":"HTTP 405 Method Not Allowed"}

I'm seeing conflicting answers in this thread. This comment specifically, though, does share the same command:

curl -X PUT -H "Content-Type: application/json" -d '{"index.blocks.read_only_allow_delete": null}'

Other comments say the cluster should automatically remove the read-only state once there is more than 5% free disk space.
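For completeness, a runnable form of that command needs a target URL. This is only a sketch: localhost:9200 and the _all index pattern are assumptions, so adjust the host, port, and index pattern for your setup:

```shell
# Clear the read-only / allow-delete block on all indices.
# localhost:9200 and _all are assumptions; adjust as needed.
curl -X PUT -H "Content-Type: application/json" \
  http://localhost:9200/_all/_settings \
  -d '{"index.blocks.read_only_allow_delete": null}'
```

Note this is an index-level setting, which is why it goes to `_settings` on the indices rather than `_cluster/settings`.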

Regarding the `ERROR Null object returned for RollingFile in Appenders` error: this thread indicates it may be a problem with access to the Elasticsearch data, either the path, permissions, or something related.

Hope that helps.

I have tried the solutions in your references, but they did not work.
I may sound a little stupid here, but here is my question.

I am using an https link with a different DNS name and port 9000. I tried to run the curl command locally on my Windows laptop, but I get a {"type":"ApiError","message":"HTTP 404 Not Found"} error.
Is there a better way to do this?

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'

Second thing: since we could not add space to the drive Elasticsearch was pointing to in the yml file, I created another storage disk, copied all the indices over to the new drive, and then pointed the path in the elasticsearch.yml file to it. Since then the space issue is resolved, but I need to remove the read-only block so it can start writing. However, I can see the Elasticsearch status is red on the index.

A good way to verify the base URL is to navigate to its root. For example, if I wanted to query my OpenSearch cluster, I would use my server's hostname (or IP) and the port that OpenSearch is bound to (9200 by default). The query would look like http://hostname.domain.tld:9200/ and I can run that in my web browser to verify I get a response:


Once you can confirm you have the correct hostname and port, you can add that into the curl command.
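The same check can be done with curl from the command line (the hostname, domain, and port here are placeholders, so substitute your own):

```shell
# Fetch the root of the cluster's REST API; a healthy node replies
# with a small JSON document (cluster name, version, etc.).
# -s suppresses the progress bar; add -k if the server uses a
# self-signed HTTPS certificate.
curl -s http://hostname.domain.tld:9200/
```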

Regarding copying/moving the files: can you confirm that the owner and permissions match the original directory?

For example, if I check my data path I can see the owner user and group is opensearch (3rd and 4th columns):

An example command to change the owner:

sudo chown -R user:group /path/to/folder

Replace user:group with the appropriate owner.
Replace /path/to/folder with the folder you want to change the owner of.
-R means to recursively change the owner, not only on the folder but on all subfolders as well.
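As a sketch of the full check-and-fix sequence, where the paths and the elasticsearch service account are assumptions based on a typical package install:

```shell
# Compare ownership of the old and new data directories
# (both paths are illustrative).
ls -ld /var/lib/elasticsearch /mnt/newdisk/elasticsearch

# If the new directory is not owned by the service account,
# fix it recursively.
sudo chown -R elasticsearch:elasticsearch /mnt/newdisk/elasticsearch
```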

Where can I find the base URL? Is it http_bind_address in the server.conf file?

My permissions are fine. Here they are.

Hi @Charan.Raj, you have to fill in the blanks here to get your base URL:


Fill in your hostname, domain, and top-level domain, then run the query against your server. You should get a reply that looks like the one @drewmiranda-gl showed.

You can use your ip address instead, if you are not using FQDN.

Hi Chris,
Not sure what I am missing here, but I used the REST API browser and below is the website it was able to authenticate against; it's the same value found in the server.conf file under the http bind address.
However, I have https instead of http.

Just use a regular web browser. Be sure to include the 9200 port number in the URL. To be clear, you are trying to reach OpenSearch with this query, not Graylog.

Thanks Chris, but I am using Elasticsearch, not OpenSearch.
QQ: will this issue get resolved if I create a new index?

My issue is resolved. The default bind address was commented out in the server.conf file, which is why I was unable to execute the curl command.
QQ: during my initial troubleshooting, I rotated the active write index on the index below and I can see missing documents. Is there a way to get them back?


Any help here is much appreciated. Thank you.

If that index has not been deleted, the logs should still be in there.

Thank you Chris. What would be the best way to get that data to show up in the index?

They don’t reflect in an index. The logs are just available when you search. Searches cover multiple indices. How many you have at any one moment depends on your rotation and retention settings though. If you have configured your system to delete all but 1 index, you won’t have logs older than your current index. I doubt you’ve done that though, so the logs should still be there.

Please feel free to share your rotation and retention settings if you have questions about how it's set up.
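If it helps, the index list, along with health, document counts, and sizes, can be pulled straight from the cluster. This assumes it answers on localhost:9200; adjust the host and port for your setup:

```shell
# List all indices with health, status, doc count, and store size.
# ?v adds a header row to the tabular output.
curl -s 'http://localhost:9200/_cat/indices?v'
```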

Thanks Chris. Unfortunately, I am seeing an issue where the index does not load; it keeps rotating and does nothing.

Also, I created a dashboard, and I am getting a timeout error while loading. To fix this issue,
I modified the JVM options file and changed the value from 4 to 6. It works for a brief period after restarting the Elasticsearch service but times out after some time.

While retrieving data for this widget, the following error(s) occurred:

  • Read timed out.

What could be the issue here?


Look at the Elasticsearch/OpenSearch health status on System/Overview. Please post a screenshot of that widget.

Also, are all components loaded on a single host? If so, how much system RAM does the host have? The rule of thumb for heap space is not to exceed half of the system RAM.
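For reference, the heap is set in the jvm.options file. A quick way to check the current values (the path assumes a package install of Elasticsearch):

```shell
# Show the configured minimum and maximum heap sizes.
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options
# On a host with at least 12 GB of RAM, a 6 GB heap would look like:
#   -Xms6g
#   -Xmx6g
```

Keeping -Xms and -Xmx equal avoids heap resizing pauses at runtime.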