Unprocessed Messages in Journal


(GT) #1

We keep running into the issue of the journal filling with unprocessed messages. The only workarounds I have found are removing and recreating the internal ‘server.log’ text file, or fully deleting the ‘journal’ file.
What I would like to know is why this issue keeps occurring. It is not viable for us to go into the back end of our Graylog server every day to resolve it, and I would like to be able to stop it altogether. Has anyone got any ideas as to why this occurs?
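
For reference, this is roughly how I have been checking the journal state from the REST API before clearing anything by hand. It is only a sketch: the localhost:9000 address and the credentials are placeholders, and it assumes your Graylog version exposes the /api/system/journal endpoint.

```python
# Minimal sketch: read the journal status from the Graylog REST API.
# Assumptions: API on localhost:9000, placeholder credentials, and a
# /system/journal endpoint (field names vary between Graylog versions).
import requests

GRAYLOG_API = "http://localhost:9000/api"
AUTH = ("admin", "changeme")  # placeholder credentials

resp = requests.get(
    f"{GRAYLOG_API}/system/journal",
    auth=AUTH,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# Print whatever the endpoint reports (uncommitted entries, journal size, etc.).
for key, value in sorted(resp.json().items()):
    print(f"{key}: {value}")
```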

Regards,

G


#2

What error messages do you see in your logs?


(GT) #3

I don’t believe we see anything; however, I will have another look the next time it goes down.
I will also look into the Elasticsearch logs.

Regards,

G


#4

Check your Elasticsearch logs. I just corrected the same issue because ES was getting too many fields for a single index (the default limit is 1000). Perhaps this can help you.
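
If it is the same problem, this is roughly how I inspected and raised the limit. It is only a sketch, assuming Elasticsearch is on localhost:9200 with no authentication and with "graylog_0" standing in for whichever index is currently active. Raising the limit only buys time; the longer-term fix is to stop extractors or pipelines from creating so many distinct fields, and the setting will not carry over to newly rotated indices unless it is also put into an index template.

```python
# Sketch: inspect and raise index.mapping.total_fields.limit (default 1000).
# Assumptions: ES on localhost:9200, no auth, "graylog_0" is a placeholder
# for the active Graylog index.
import requests

ES = "http://localhost:9200"
INDEX = "graylog_0"  # placeholder index name

# Show the current settings for the index (the limit defaults to 1000 if unset).
current = requests.get(f"{ES}/{INDEX}/_settings")
current.raise_for_status()
print(current.json())

# Raise the per-index field limit.
update = requests.put(
    f"{ES}/{INDEX}/_settings",
    json={"index.mapping.total_fields.limit": 2000},
)
update.raise_for_status()
print(update.json())
```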


(GT) #5

I checked the logs, and all I could find was that disk usage went over 85% and then the disk filled up to 100%. I believe this was the cause of the issue. I will give it a few hours to make sure we don’t run into this problem again.
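
For anyone who hits the same thing: Elasticsearch stops allocating new shards at its low disk watermark (85% by default) and marks indices read-only at the flood-stage watermark, so once the disk filled up ES stopped accepting writes and the journal backed up behind it. Below is a rough sketch of how I checked the disk allocation and watermark settings afterwards; it assumes ES is on localhost:9200 with no authentication.

```python
# Sketch: check per-node disk usage and the disk watermark settings.
# Assumption: ES on localhost:9200 with no authentication.
import requests

ES = "http://localhost:9200"

# Per-node disk usage, same data as `GET _cat/allocation?v`.
alloc = requests.get(
    f"{ES}/_cat/allocation",
    params={"v": "true", "h": "node,disk.percent,disk.used,disk.avail"},
)
print(alloc.text)

# Effective watermark settings; defaults are low=85%, high=90%, flood_stage=95%.
watermarks = requests.get(
    f"{ES}/_cluster/settings",
    params={"include_defaults": "true",
            "filter_path": "**.routing.allocation.disk*"},
)
print(watermarks.json())

# After freeing space, older ES versions do not release the flood-stage
# read-only block automatically; it can be cleared with something like:
# requests.put(f"{ES}/_all/_settings",
#              json={"index.blocks.read_only_allow_delete": None})
```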

Regards,

G


#6

That will absolutely cause issues. Make sure you have enough disk space for your journal and ES to be happy, or the system will crash.
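
A simple cron-style check can give warning before the disk fills up again. This is just a sketch; the journal and data paths below are assumptions and will differ per installation.

```python
# Sketch: warn when the filesystems holding the Graylog journal and the
# Elasticsearch data directory pass a usage threshold.
# The paths are assumptions; adjust them to your installation.
import shutil
import sys

PATHS = {
    "graylog journal": "/var/lib/graylog-server/journal",
    "elasticsearch data": "/var/lib/elasticsearch",
}
THRESHOLD = 0.80  # warn at 80%, leaving headroom before ES's 85% watermark

exit_code = 0
for name, path in PATHS.items():
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    print(f"{name}: {used_fraction:.0%} of {usage.total / 1e9:.1f} GB used")
    if used_fraction >= THRESHOLD:
        print(f"  WARNING: {name} is above {THRESHOLD:.0%} used")
        exit_code = 1

sys.exit(exit_code)
```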


(system) #7

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.