Messages not writing to index

Graylog worked fine for two months, then after a server reboot it stopped writing messages to the index.
There are 15 GB of free disk space left, average CPU load is 20%, and RAM usage is 50% of 8 GB.

I can see that new messages are coming in on the web interface (top right corner, the input page, and the stream page).
On the Indices page, the index size and document count don't change. No messages appear in the All messages stream. When I click on default_index, the page loads forever and nothing shows up.
The graylog-server and elasticsearch services are both running.

What I tried:
* Restarted the server.
* Rotated the index.
* Recalculated the index ranges.
* Created a new index and pointed incoming messages to it.
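For reference, the rotate and recalculate steps can also be driven through the Graylog REST API. A hedged sketch, assuming Graylog listens on localhost:9000 and using the endpoint paths from the 2.x API (the exact paths and credentials may differ on your version):

```shell
# Manually rotate (cycle) the deflector to a new index
# -- endpoint path and admin:password credentials are assumptions
curl -XPOST -u admin:password 'http://localhost:9000/api/system/deflector/cycle'

# Rebuild the index ranges for all indices
curl -XPOST -u admin:password 'http://localhost:9000/api/system/indices/ranges/rebuild'
```

Both calls only make sense against a running Graylog instance, so check the API browser on your own installation for the exact paths.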

Errors and warnings from the logs:
[WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [KzLSGRi] Failed to clear cache for realms []
[WARN ][o.e.d.c.ParseField ] [KzLSGRi] Deprecated field [inline] used, expected [source] instead
[WARN ][o.e.d.s.a.MultiBucketConsumerService] [KzLSGRi] This aggregation creates too many buckets (10001) and will throw an error in future versions. You should update the [search.max_buckets] cluster setting or use the [composite] aggregation to paginate all buckets in multiple requests.
[WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] [KzLSGRi] Deprecated field [template] used, replaced by [index_patterns]

ERROR [IndexRotationThread] Couldn't point deflector to a new index
org.graylog2.indexer.ElasticsearchException: Couldn't remove alias graylog_deflector from indices [graylog_4]

blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];

COMMAND [conn25] command graylog.$cmd command: update { update: "system_messages", ordered: true, $db: "graylog" } exception: Exec error resulting in state DEAD :: caused by :: errmsg: "interrupted at shutdown" code:InterruptedAtShutdown numYields:0 reslen:180 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_query 117ms

You should check the disk space and the Elasticsearch log file.

[FORBIDDEN/12/index read-only

The key is that Elasticsearch made your index read-only, and the log will show why that happened. Resolve the cause and make your index read-write again; that will fix all the issues you are seeing.
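To confirm whether such a block is in place, you can query the index settings directly. A minimal sketch, assuming Elasticsearch is listening on localhost:9200:

```shell
# Show any block settings currently applied to the indices;
# a blocked index will report "read_only_allow_delete": "true"
curl -s 'http://localhost:9200/_all/_settings/index.blocks.*?pretty'
```

An empty `{}` per index means no block is set; otherwise the output tells you which block Elasticsearch applied.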


I took a deeper look at the Elasticsearch logs and found that, according to Elasticsearch, disk space did indeed run low:

flood stage disk watermark [95%] exceeded on [/var/lib/elasticsearch/nodes/0] free: 1.2gb[4.4%], all indices on this node will be marked read-only
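The flood-stage watermark is evaluated against the filesystem that holds the Elasticsearch data path, which can be a different (and much smaller) filesystem than the one you checked. A quick way to verify, using the path from the log line above and falling back to the root filesystem if that path doesn't exist on the machine:

```shell
# Free space on the filesystem backing the ES data path from the log message;
# fall back to the root filesystem if that path does not exist here
ES_DATA=/var/lib/elasticsearch
df -h "$ES_DATA" 2>/dev/null || df -h /
```

If the "Use%" column here is at or above 95%, the flood-stage watermark explains the read-only block even when other filesystems look fine.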

However, when I checked disk space from the OS, I didn't see any problems.

I was able to write to the index again by executing the following command from the terminal:

curl -XPUT -H "Content-Type: application/json" 'http://localhost:9200/_all/_settings' -d '{"index.blocks.read_only_allow_delete": null}'
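After freeing disk space, you can also raise the flood-stage threshold so the block does not come straight back the next time the disk fills up. A hedged example, again assuming Elasticsearch on localhost:9200 (the 97% value is just an illustration; on this ES version the read-only block is not removed automatically when space frees up, so clearing it by hand as above is still required):

```shell
# Raise the flood-stage watermark from the default 95% to 97%
# ("transient" means the setting reverts on cluster restart)
curl -XPUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_cluster/settings' \
  -d '{"transient": {"cluster.routing.allocation.disk.watermark.flood_stage": "97%"}}'
```

Freeing or adding disk space is the real fix, though; raising the watermark only buys time.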
