Previously my index was set to store logs for about 4 years. That was my initial setting, and not a very wise one, but it worked.
Unfortunately, the partition where /var is mounted (where I think Graylog's logs are stored) is 55% full. I think this is why the index in Graylog won't accept new logs: it shows "There were 204,800 failed indexing attempts in the last 24 hours."
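(For reference, the 55% figure is what a disk-usage check on the mount point reports, assuming /var really is its own partition as described above:)

df -h /var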
A few days ago I changed the index rotation period from 4 years to P1W (7 days), but it does not delete old logs. The index was 20.7 GB before, and today it is still exactly the same size.
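In case I am measuring the wrong thing, the size can presumably be verified straight from Elasticsearch (a sketch assuming it listens on localhost:9200 and uses the default graylog_* index naming seen in the log below):

curl -s "http://localhost:9200/_cat/indices/graylog_*?v&h=index,docs.count,store.size"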
Yes, Elasticsearch is on the same host, and it is in a read-only state. I know that, and I assumed it was caused by the shrinking amount of free space, so I tried to fix it by shortening the index rotation period.
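The read-only state should also be visible in the index settings (again assuming localhost:9200; graylog_0 is the blocked index named in the log below):

curl -s "http://localhost:9200/graylog_0/_settings?pretty"

While the block is active, this should show index.blocks.read_only_allow_delete set to true.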
Am I wrong?
Below are the last entries from server.log:
WARN [Messages] Failed to index message: index=<graylog_0> id=<052224a0-b375-11e9-bcfb-00155d034305> error=<{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}>
ERROR [Messages] Failed to index [19] messages. Please check the index error log in your web interface for the reason. Error: One or more of the items in the Bulk request failed, check BulkResult.getItems() for more information.
Yes Jan, I know the cause of this problem is Elasticsearch being in a read-only state. But I don't know how I can change that state …
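The only candidate I have found so far is the index settings API below. From the Elasticsearch documentation, the read_only_allow_delete block is applied when the flood-stage disk watermark is exceeded, and on the 6.x series it has to be removed manually once space is freed. Would this be the right way (a sketch assuming localhost:9200, applied to all indices)?

curl -X PUT "http://localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'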
Any suggestions on how to solve this problem?