I have been upgrading Graylog to more recent versions in an effort to modernize things; however, I have noticed that the JVM's used memory now seems to follow a new pattern. We also recently had a crash associated with an out-of-memory error:
Jan 08 06:37:15 localhost graylog-server[2735961]: Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "I/O dispatcher 20"
Jan 08 06:37:15 localhost graylog-server[2735961]: Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "MaintenanceTimer-4-thread-1"
Jan 08 06:37:15 localhost graylog-server[2735961]: java.lang.OutOfMemoryError: Java heap space
…and it seems the used memory is climbing again. This never happened on previous versions of Graylog (we recently upgraded from 4.3.5 to 5.0.13, and now 6.1.3). Our current Graylog 6.1.3 runs on Ubuntu 20.04, and we have been on 6.1.3 (Graylog 6.1.3+73526ba) for a little over a month now.
Specs: 24-core Intel Xeon Gold 6136 CPU @ 3 GHz, 128 GB RAM, bare-metal host. We use this Graylog with Elasticsearch 7.10.2 and ES_JAVA_OPTS="-Xms31g -Xmx31g".
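For context, this is roughly how we have been watching the heap between crashes, a minimal sketch assuming jstat from the same JDK is on the PATH (replace <pid> with the Main PID shown by systemctl status graylog-server):

# print heap occupancy and GC counts for the graylog-server JVM every 10 seconds
sudo jstat -gcutil <pid> 10000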
The previous Graylog server Java opts we had on 4.3.5 and 5.0.13:
#GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow"
The Graylog server Java opts currently in use (recommended here on the forums):
GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g -server -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow"
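To get more data before the next crash, we are considering adding a heap dump on OOM to the same opts; the dump path below is just our guess at a directory the graylog user can write to:

# current opts plus a heap dump when the next OutOfMemoryError hits
GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g -server -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/graylog-server"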
We’ve used 2g for a while now on prior versions with no issue. Do we need to adjust this value now on 6.1.3? Or is there another setting we need to adjust to alleviate this?
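If the answer is simply that 6.1.3 needs a larger heap, our plan would be to raise it in /etc/default/graylog-server and restart, something like the following (4g is just a placeholder, not a value we have confirmed):

# /etc/default/graylog-server
GRAYLOG_SERVER_JAVA_OPTS="-Xms4g -Xmx4g -server -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow"

# then restart the service
sudo systemctl restart graylog-server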
Regards,
James