Graylog works very well, with no latency; this is just a question because I would like to understand the reason for this behaviour.
It’s a solo Graylog instance (2 vCPU + 8 GB RAM). It has multiple inputs, but that is not related, because it has been like this since the beginning, when only one input was used.
When the instance is restarted for updates, the memory used by the processes drops drastically and then starts increasing again day after day.
Are you running Graylog and Elasticsearch on the same system? You wrote “Graylog solo”, which makes me think that only Graylog is running on this server.
What is the configured heap for Elasticsearch? What else is running on this system? Why is using memory for buffers bad in your environment? Do you have something that shows you what is using the memory?
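If you don’t already have a tool for that, a quick sketch of the usual commands (assuming a Linux host) to see what is actually using the memory:

```shell
# Overall memory split: used vs. buffers/cache vs. available
free -m

# Top 10 processes by resident memory (RSS), header included
ps aux --sort=-rss | head -n 11
```

The `buff/cache` column from `free` is the part the kernel reclaims on demand, which is usually what grows day after day between restarts.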
You might want to follow this for your Elasticsearch: https://stackoverflow.com/a/44005809
I have dedicated machines for Elasticsearch and they were eating memory like crazy… I have never had such problems with Graylog, so maybe first make sure that Elasticsearch is 100% fine; it will take you 60 seconds to complete those four steps from Stack Overflow.
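I don’t remember the linked answer word for word, but the usual Elasticsearch memory-pinning setup it describes looks roughly like this (paths assume a package install with systemd; treat this as a sketch and check the link for the exact steps):

```shell
# Sketch only — verify against the linked answer before applying.
# 1. Pin the heap so it cannot grow (jvm.options):
#      -Xms2g
#      -Xmx2g
# 2. Lock the heap into RAM so it is never swapped (elasticsearch.yml):
#      bootstrap.memory_lock: true
# 3. Allow the service to lock memory (systemd drop-in):
sudo systemctl edit elasticsearch   # add: [Service] + LimitMEMLOCK=infinity
# 4. Restart and verify the lock took effect:
sudo systemctl restart elasticsearch
curl -s 'localhost:9200/_nodes?filter_path=**.mlockall'
```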
If Graylog is using 1 GB, I would recommend setting 3 GB for Elasticsearch.
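For reference, the heap is set in `jvm.options` (path assumes a package install under `/etc/elasticsearch`), with `-Xms` and `-Xmx` pinned to the same value so the JVM never resizes the heap:

```
# /etc/elasticsearch/jvm.options (or a file under jvm.options.d/)
# Fixed 3 GB heap — min and max should match
-Xms3g
-Xmx3g
```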
This use of memory is completely fine with me; I was just wondering whether it is normal behaviour.
Yes, it is not unusual.
With more details about what exactly is using the memory, plus information about the JVM, we could look into it further, but in general this is common behaviour.