Seek help! New logs cannot be viewed after log storage expansion

Hello and welcome.

Well… judging from the picture I can see there is something wrong, but it could be any of a number of issues.
Without more details I'm not able to help you.
You might want to look at this; it may help us to help you.

Before you ask your question

I suspect it is due to this error report, but I don't know how to deal with it.

Only one photo can be sent at a time

If I delete index 112, data cannot be written to 111.

Thanks for the added details, but those pictures don't really show a problem.

Elasticsearch is green, which means it is running fine.
What do you mean by:

If you are trying to delete the newest index, that's probably not a good idea. If you need to delete an index, I would delete the oldest, which is Graylog_111. Why do you want to delete an index?
Your input seems to be running, judging from the picture.
What is your exact issue?

Do you see any issues in your Elasticsearch, MongoDB, and Graylog logs?
Do all of your services' statuses look good?
When you say "log storage expansion", are you referring to your server's hard drive?
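If you are on a systemd-based package install (an assumption; adjust the unit names to your setup, e.g. "mongod" may be "mongodb" on some distributions), a quick way to check all three services and scan their recent logs at once is:

```shell
# Check the status of the Graylog stack services in one go.
systemctl status graylog-server elasticsearch mongod

# Scan the last hour of each service's journal for errors or warnings.
journalctl -u graylog-server -u elasticsearch -u mongod --since "1 hour ago" | grep -iE "error|warn"
```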

The environment here is graylog-3.3.8. Because the image's system disk is only 20 GB, I added another 500 GB hard disk to store the Elasticsearch data. A few days ago I found errors when accessing the Graylog web interface; after logging into the virtual machine I found that the 500 GB disk was full. This is a virtualized environment, so the disk can be expanded directly. After expanding it to 2 TB, I found that new logs could not be seen in the Graylog web interface: a search over the last five minutes is empty, yet the data stored on the virtual machine's disk keeps growing. I don't know how to query the latest logs in Graylog. As shown in the picture below, the new logs generated after the expansion cannot be queried.
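One quick sanity check (assuming the 2 TB disk is mounted at the default Elasticsearch data path, /var/lib/elasticsearch; substitute your actual mount point) is to confirm the operating system actually sees the expanded size:

```shell
# Show the size, usage, and mount point of the filesystem holding the
# Elasticsearch data. The "Size" column should read roughly 2T after
# the expansion; if it still shows 500G, the partition or filesystem
# was not grown along with the virtual disk.
df -h /var/lib/elasticsearch
```

If the filesystem still shows the old size, the virtual disk was enlarged but the partition/filesystem was not (tools like `growpart` plus `resize2fs` or `xfs_growfs` may still be needed, depending on the filesystem).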

I don’t know where the problem is. Everything seems normal.

@jhwxj

Did you check these?

EDIT:
Have you tried to manually rotate your indices?

Hello. I know how to check the Elasticsearch, MongoDB, and Graylog logs for problems. I tried the steps from the picture you provided, and now I can see the traffic.

However, when I query the latest logs the result is still blank, and the interface keeps loading.

Now I don't know what to do; Elasticsearch isn't storing the data.

Hello,

From the pictures it seems like everything is running. You could try a couple of simple commands.

This checks the health of Elasticsearch.

curl -XGET http://localhost:9200/_cluster/health?pretty=true

If the output does not show green, then run this command to see why.

curl -XGET http://localhost:9200/_cluster/allocation/explain?pretty

It would be helpful if you showed your configuration files (elasticsearch.yml and the Graylog server.conf).
Maybe you have something misconfigured.

I see you are running 183 indices (6 months' worth). That's a lot. Are you sure you have enough CPUs and memory?


Everything was normal before the storage expansion. After expanding the storage to 2 TB, the data received by Elasticsearch cannot be displayed in Graylog; only the logs received before the expansion show up. I don't think there is any problem with the configuration files; I haven't modified them since the expansion.

Now it seems that Elasticsearch can receive data but not write it to storage.
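One common cause of exactly this symptom after a disk fills up: when the flood-stage disk watermark (95% used by default) is exceeded, Elasticsearch puts an `index.blocks.read_only_allow_delete` block on the affected indices, and on versions before 7.4 that block is not removed automatically even after space is freed. You can check for it like this (assuming Elasticsearch on localhost:9200, as in the commands above):

```shell
# List any indices that still carry the read-only-allow-delete block.
# Elasticsearch applies this block automatically when the flood-stage
# disk watermark is exceeded; on versions before 7.4 it must be
# cleared manually after freeing disk space.
curl -XGET 'http://localhost:9200/_all/_settings/index.blocks.read_only_allow_delete?pretty'
```

If any index reports `"read_only_allow_delete": "true"`, writes to that index will fail until the block is cleared manually.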

Running the command below solved the problem. Thank you for your help!
curl -H "Content-Type: application/json" -XPUT http://127.0.0.1:9200/_settings -d '{"index": {"blocks": {"read_only_allow_delete": "false"}}}'
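For anyone landing on this thread later: that `PUT _settings` call clears the per-index `read_only_allow_delete` block that Elasticsearch sets when its flood-stage disk watermark is hit. Two hedged follow-up checks (assuming localhost:9200 as above; the index name in the smoke test is hypothetical):

```shell
# If a cluster-wide block was also applied, reset it as well;
# setting the value to null restores the default.
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_cluster/settings' \
  -d '{"transient": {"cluster.blocks.read_only_allow_delete": null}}'

# Smoke test: try writing a document to a throwaway test index
# ("writetest" is a made-up name); a 200/201 response confirms
# writes work again. Delete the index afterwards.
curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/writetest/_doc' -d '{"ok": true}'
curl -XDELETE 'http://localhost:9200/writetest'
```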


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.