Despite increasing heap memory in the config file, new logs are not coming through. Both the process buffer and the output buffer are at 100% utilization. We'd greatly appreciate your expert assistance in resolving this problem.
That’s not really enough information to make suggestions. Can you tell us a little more?
What versions are you running? (MongoDB, Graylog, Elasticsearch/OpenSearch)
What is your architecture? One server, three, five, etc.?
What is your ingestion volume? (The sketch after this list shows a quick way to check this and the versions.)
Is this problem new?
Have you made any changes lately?
What have you tried so far?
Have you checked the Elasticsearch/OpenSearch logs?
etc.
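If you're not sure where to find those numbers, the versions and throughput can be pulled from the APIs. A quick sketch, assuming default ports and placeholder credentials:

```bash
# Graylog version and current events/sec (default API port 9000;
# replace admin:password with real credentials)
curl -s -u admin:password -H 'X-Requested-By: cli' 'http://localhost:9000/api/system'
curl -s -u admin:password -H 'X-Requested-By: cli' 'http://localhost:9000/api/system/throughput'

# Elasticsearch/OpenSearch version
curl -s 'http://localhost:9200/'

# MongoDB version
mongod --version
```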
Hi @Rizwan,
A 16 GB heap is already very large, so increasing it further won't address the problem.
How many events per second come in via your inputs?
Crystal-ball guess: a possible cause is pipeline rules, e.g. DNS request timeouts, query latency, etc.
But that's hard to confirm without more information about the environment.
Take a look at the log at /var/log/graylog-server/server.log; you may find an error there.
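For example (path assumes a standard package install):

```bash
# Follow the server log live while the buffers are full
tail -f /var/log/graylog-server/server.log

# Or pull the most recent errors/exceptions out of it
grep -iE 'error|exception' /var/log/graylog-server/server.log | tail -n 50
```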
Sorry @H077E, I'm really not a programmer and I don't have much experience with Graylog.
Below you can find my configuration.
"How many events per second come in via your inputs?"
I don't know the exact count, but I believe the daily log volume is more than 100 GB.
@Rizwan Your Elasticsearch/OpenSearch has run out of disk space.
This article should help explain what is happening. TL;DR: you need to either free up storage or add more. Then you need to re-enable read/write on the indices.
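In practice that usually looks like this (assuming Elasticsearch/OpenSearch on localhost:9200):

```bash
# Check how much disk each node has left; the flood-stage watermark
# (95% used by default) is what switched the indices to read-only
curl -s 'http://localhost:9200/_cat/allocation?v'

# After freeing up or adding storage, clear the read-only block
# on all indices so Graylog can write again
curl -s -X PUT 'http://localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```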
Thank you Chris. I'm planning to delete all the nodes, and I'd like to configure Graylog so that once the stored logs exceed 500 GB the oldest logs are deleted automatically. Can you help me with that configuration?
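For the record, size-based retention is built into Graylog's index sets, so no scripting is needed. A rough sketch (the port, credentials, and numbers are examples, not recommendations):

```bash
# List your index sets and their current rotation/retention settings
# (default API port 9000; replace admin:password with real credentials)
curl -s -u admin:password -H 'X-Requested-By: cli' \
  'http://localhost:9000/api/system/indices/index_sets'

# The change itself is simplest in the UI, under System -> Indices -> Edit:
#   Rotation strategy:  "Index Size", e.g. 10 GB per index
#   Retention strategy: "Delete index", e.g. keep at most 50 indices
# 50 indices x 10 GB each = ~500 GB cap; the oldest index is deleted
# automatically each time rotation passes the limit.
```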