Buffer Process 100% couldn't point deflector to a new index

Hi,
I had a full disk and expanded my LVM. But now I have a buffer process at 100%, some errors in the Graylog logs, and I don't see any messages on the Search tab.

Hello @vanilsonSantos

How many messages do you have in the journal? Resources (i.e., CPU, etc.) will determine how long it takes to ingest that backlog. While those buffers are at 100% you won't be able to search for a while, until the system has finished.

What errors are shown in the Elasticsearch logs? Did you check the cluster health of Elasticsearch?

curl -XGET localhost:9200/_cluster/health?pretty
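In that response, the `status` field is the first thing to look at: `green` means all shards are assigned, `yellow` means replicas are unassigned, and `red` means primary shards are missing and searches will fail. A quick way to pull out just that field is a `grep` on the response; the sketch below runs it against a saved sample file (the field names match what Elasticsearch returns, but the values are made up for illustration):

```shell
# Hypothetical saved copy of the cluster health response.
cat > /tmp/health.json <<'EOF'
{
  "cluster_name" : "graylog",
  "status" : "red",
  "number_of_nodes" : 1,
  "active_shards" : 12,
  "unassigned_shards" : 4
}
EOF

# On a live node you would pipe the curl output instead:
#   curl -XGET 'localhost:9200/_cluster/health?pretty' | grep '"status"'
grep '"status"' /tmp/health.json
```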

If you have CPU cores to spare you can increase the thread counts via the processbuffer_processors and outputbuffer_processors settings in the Graylog configuration file, but if those settings are too high you'll run into other issues.

You could also try to manually rotate your index set from the Web UI.

EDIT: Just a thought, you mentioned the disk was full. I would make sure Elasticsearch is not in read-only mode.
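One way to check for that: when the disk fills past the flood-stage watermark, Elasticsearch sets `index.blocks.read_only_allow_delete` on the affected indices. Grepping the index settings for that key will show whether the block is present. The sketch below demonstrates the grep on a saved sample settings response (index name and values are illustrative); on a live node you would grep the curl output directly:

```shell
# On a live node (assumes Elasticsearch on localhost:9200):
#   curl -XGET 'localhost:9200/_all/_settings?pretty' | grep read_only
# Sample settings response with the block set, for a reproducible check:
cat > /tmp/settings.json <<'EOF'
{
  "graylog_0" : {
    "settings" : {
      "index" : {
        "blocks" : {
          "read_only_allow_delete" : "true"
        }
      }
    }
  }
}
EOF

# Any hit here means the index rejects writes until the block is cleared.
grep 'read_only_allow_delete' /tmp/settings.json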

Hi @gsmith,
Thanks for your answer.

Elasticsearch health

Elasticsearch logs

[screenshot]

Journal

How can I see the CPU?

Hey @vanilsonSantos

Yep,
that's a lot of logs in the journal. With 12 cores and 12 GB RAM it took me 4 hours to clear out a journal of a similar size. Your process buffer will be the heavy hitter, needing the most CPU, and if this is a single node then, depending on your resources, it may take some time to clean up. Rebooting your system will only delay the process. If the journal doesn't go down in a few hours, or is getting worse, I would look into adding more resources and adjusting your Graylog configuration file to accommodate that volume of logs.
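For a rough feel of how long a drain like that takes: the journal only shrinks by the difference between the rate Graylog can index and the rate messages are still arriving. A back-of-the-envelope sketch, with purely illustrative numbers (not taken from this system):

```shell
# All numbers are illustrative examples, not measurements from this setup.
backlog=30000000      # messages sitting in the journal
out_rate=3000         # messages/sec Graylog can index
in_rate=1000          # messages/sec still arriving

# The journal only drains at the difference between the two rates.
drain_secs=$(( backlog / (out_rate - in_rate) ))
echo "$(( drain_secs / 3600 )) hours to drain"   # prints: 4 hours to drain
```

If `in_rate` is close to or above `out_rate`, the journal never drains, which is when adding processor threads or hardware becomes necessary.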

If you're referring to the Graylog server CPU, you can use the top or htop command.
If you're referring to increasing the buffer processor counts, then it's in this section:

processbuffer_processors = 5
outputbuffer_processors = 3

If you do, don't forget to restart the Graylog service.

I see that was yesterday, does it still look like that today?

Yes, it still looks the same.

I already have this configuration.
[screenshot]

Hi

Can I clear the process buffer, output buffer, or journal?

If yes, how can I clear them?

Thanks

When the disk fills up, Elasticsearch goes into read-only mode to protect itself. Did you issue the command to return it to read-write?

Hi @joe.gross ,

Not yet.

What is the command to put it back in read-write?

Not sure what version you have, but it's likely something like this:

curl -X PUT "localhost:9200/<your_index>/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.blocks.read_only_allow_delete": null
}'

Thanks @tmacgbay, it took me a while to get back to this. That should set it back to read-write.

@vanilsonSantos, be sure to set your retention settings to prevent it from filling up again.

Rotation strategy should be set to time-based with a period of one day (P1D). Then set your retention settings to keep up to your max retention in days. So a max of 30, 60, or 90 indices gets you 30, 60, or 90 days of retention in OpenSearch.

Be sure you know how much storage you have and how many days you can keep. Set it to keep only enough indices that you don’t go above 75% of total disk capacity.
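That sizing works out to simple arithmetic: 75% of the disk is your budget, and the budget divided by the size of one daily index is the maximum number of indices to keep. A sketch with example numbers (substitute your own disk and index sizes):

```shell
# Rough retention sizing. All numbers are examples only.
disk_gb=1000                           # total Elasticsearch data disk
budget_gb=$(( disk_gb * 75 / 100 ))    # stay under 75% as suggested above
index_gb=20                            # observed size of one daily (P1D) index

max_indices=$(( budget_gb / index_gb ))
echo "keep at most $max_indices daily indices (~$max_indices days of retention)"
```

With these example numbers that comes out to 37 indices; round down if your daily volume varies.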


Hi @tmacgbay,

Thanks, it's working now.

Thanks, everyone.

Hey @vanilsonSantos

What was the solution?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.