Process buffer filling

I found a few articles here about the process buffer filling and I followed a lot of the advice, but I’m not a Linux guy so I’m having a few issues finding the solution.

Initially I found the Graylog server in VMware using an abnormal amount of CPU time. I thought it was because it was only running with two cores, so I upped it to 4, thinking that would relieve the pressure of the load. It didn’t.

I started digging around and found that the process buffer had 10+ million messages pending, which led me to discover that the issue was with Elasticsearch 6. I’m not sure how/where to go from here. I am getting the following errors in /var/log/graylog-server/server.log:
2021-03-26T13:56:33.990-04:00 WARN [MessagesAdapterES6] Failed to index message: index=<graylog_0> id=<68883590-8e5a-11eb-805b-00505680cf22> error=<{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}>
I guess warning would be more appropriate, but those are happening multiple times a second.
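From the articles I found, it sounds like the indices carrying that block can be listed with the index settings API; something like this, assuming Elasticsearch is on the default localhost:9200:
curl -s "http://127.0.0.1:9200/_all/_settings/index.blocks.read_only_allow_delete?pretty"
Any index that reports index.blocks.read_only_allow_delete as true is one that is rejecting writes.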

I am also getting the following warning/error pairing:
2021-03-26T13:57:00.903-04:00 WARN [IndexRotationThread] Deflector is pointing to [gl-system-events_3], not the newest one: [gl-system-events_4]. Re-pointing.
2021-03-26T13:57:00.905-04:00 ERROR [IndexRotationThread] Couldn’t point deflector to a new index
org.graylog2.indexer.ElasticsearchException: Couldn’t switch alias gl-system-events_deflector from index gl-system-events_3 to index gl-system-events_4

blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
at org.graylog.storage.elasticsearch6.jest.JestUtils.specificException(JestUtils.java:122) ~[?:?]
at org.graylog.storage.elasticsearch6.jest.JestUtils.execute(JestUtils.java:65) ~[?:?]
at org.graylog.storage.elasticsearch6.jest.JestUtils.execute(JestUtils.java:70) ~[?:?]
at org.graylog.storage.elasticsearch6.IndicesAdapterES6.cycleAlias(IndicesAdapterES6.java:580) ~[?:?]
at org.graylog2.indexer.indices.Indices.cycleAlias(Indices.java:318) ~[graylog.jar:?]
at org.graylog2.indexer.MongoIndexSet.pointTo(MongoIndexSet.java:357) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:166) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_282]
at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_282]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_282]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_282]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_282]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_282]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_282]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
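For what it’s worth, the alias it is complaining about can be inspected directly (again assuming the default localhost:9200), which shows which index the deflector currently points at:
curl -s "http://127.0.0.1:9200/_cat/aliases/gl-system-events_deflector?v"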

What I don’t get is that everything was working fine for a long while but has now stopped. I’m guessing all of this has something to do with the fact that I can’t even get into the folder /var/lib/elasticsearch/.

Any help or guidance would be well received.

Elasticsearch is in read-only mode. Did the disk holding the ES data fill up? You need to resolve the underlying condition that led to read-only, and then you can take it out of read-only mode.

curl -X PUT “localhost:9200/_all/_settings” -H ‘Content-Type: application/json’ -d’{ “index.blocks.read_only_allow_delete” : null } }’
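If the disk did fill, per-node disk usage as Elasticsearch sees it is visible with the cat allocation API; a quick sketch, same localhost:9200 assumption as above:
curl -s "http://localhost:9200/_cat/allocation?v"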

I pulled up a df and everything pretty much said 0% use except for the entry below:
Filesystem                    1K-blocks     Used Available Use% Mounted on
/dev/mapper/graylog--vg-root   19035388 14302628   3742760  80% /

So I don’t think the disk would have been full. I tried the API call that you provided but it just returned a bunch of errors. I did notice a trailing } which I removed, which I thought would correct the error. It didn’t work out that way. Is there a UI way to make this change?
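For what it’s worth, the thresholds that trigger that block can be checked too (a sketch; include_defaults needs a reasonably recent 6.x, and the flood-stage default is 95%, so 80% on its own shouldn’t have tripped it):
curl -s "http://127.0.0.1:9200/_cluster/settings?include_defaults=true&pretty" | grep -A 4 watermark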

It looks like the site helpfully styled my double quotes, so you’ll want to replace those with the usual U+0022. No GUI way that I’m aware of.

Stupid systems trying to be helpful! A pox on you and all your generations after you.

Seriously though I found this was the solution:
curl -X PUT "http://127.0.0.1:9200/_all/_settings" -H "Content-Type: application/json" -d'{ "index.blocks.read_only_allow_delete" : null }'
For some reason localhost didn’t do the trick but 127.0.0.1 did. \_o_/ The command returned:
{"acknowledged":true}
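To double-check that the block is actually gone (same 127.0.0.1:9200 assumption; graylog_0 is just the index from the earlier error):
curl -s "http://127.0.0.1:9200/graylog_0/_settings/index.blocks.*?pretty"
It should come back without index.blocks.read_only_allow_delete once the setting has been cleared.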
By the time I finished writing the above, the backlog of messages that was being held had all been released and everything is good. Thank you ttsandrew for all your help.


All I get is a 404 Not Found error:
{"type":"ApiError","message":"HTTP 404 Not Found"}
We are running a cluster, so I tried _cluster instead of _all, but that didn’t work either.
Any ideas? Running the latest version.

-edit-
Found the issue. I thought my port was 9000, not 9200… and I had to use the IP address of the machine, not 127.0.0.1 or localhost. Thanks guys
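In case anyone else mixes them up: 9000 is Graylog’s own web/API listener, while Elasticsearch answers on 9200. A quick sanity check (replace <es-node-ip> with your node’s address) is:
curl -s "http://<es-node-ip>:9200/"
which should return the cluster name and version banner rather than a 404.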
