Graylog servers memory usage - very high


1. Describe your incident:
Hi all, I am pretty new to Graylog and could not figure out how to manage the memory consumed on my server. The OpenSearch Java process shows 1609.2g of VIRT memory and I could not reduce it. In the config files OpenSearch has 1 GB and Graylog has 3 GB to use (-Xms1g -Xmx1g and -Xms3g -Xmx3g respectively). The server is always under high load. Apologies if my question is trivial for some of you, but it is a pain in the ass for me. Please let me know if you would like more details! :slight_smile:
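In case it helps, these are the usual locations for those flags on a deb package install (paths may differ on other setups):

  # Graylog heap (variable as shipped with the deb package)
  /etc/default/graylog-server
    GRAYLOG_SERVER_JAVA_OPTS="-Xms3g -Xmx3g ..."

  # OpenSearch heap
  /etc/opensearch/jvm.options
    -Xms1g
    -Xmx1g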

2. Describe your environment:

  • OS Information: Ubuntu, VERSION=“20.04.6 LTS (Focal Fossa)”, in a cloud environment.

  • Package Version:
    graylog-5.2-repository/stable,stable,now 1-2 all [installed]
    graylog-server/stable,now 5.2.4-1 amd64 [installed]
    mongodb-mongosh/focal/mongodb-org/7.0,now 2.1.5 amd64 [installed]
    mongodb-org-database-tools-extra/focal/mongodb-org/7.0,now 7.0.5 amd64 [installed]
    mongodb-org-database/focal/mongodb-org/7.0,now 7.0.5 amd64 [installed]
    mongodb-org-mongos/focal/mongodb-org/7.0,now 7.0.5 amd64 [installed, automatic]
    mongodb-org-server/focal/mongodb-org/7.0,now 7.0.5 amd64 [installed]
    mongodb-org-shell/focal/mongodb-org/7.0,now 7.0.5 amd64 [installed]
    opensearch/stable,now 2.11.1 amd64 [installed, upgradable to: 2.12.0]

  • Service logs, configurations, and environment variables:
    ps aux | grep java
    opensea+ 504 84.5 63.7 1687703332 7793452 ? Ssl febr16 11945:06 /usr/share/opensearch/jdk/bin/java -Xshare:auto -Dopensearch.networkaddress.cache.ttl=60 -Dopensearch.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=SPI,COMPAT -Xms1g -Xmx1g -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -Djava.io.tmpdir=/tmp/opensearch-5169375741182426274 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/opensearch -XX:ErrorFile=/var/log/opensearch/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/opensearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Dclk.tck=100 -Djdk.attach.allowAttachSelf=true -Djava.security.policy=file:///etc/opensearch/opensearch-performance-analyzer/opensearch_security.policy --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED -XX:MaxDirectMemorySize=536870912 -Dopensearch.path.home=/usr/share/opensearch -Dopensearch.path.conf=/etc/opensearch -Dopensearch.distribution.type=deb -Dopensearch.bundled_jdk=true -cp /usr/share/opensearch/lib/* org.opensearch.bootstrap.OpenSearch -p /var/run/opensearch/opensearch.pid --quiet
    graylog 3766 46.8 17.3 9479348 2122092 ? Sl febr16 6618:57 /usr/bin/java -Xms3g -Xmx3g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:-OmitStackTraceInFastThrow -Djdk.tls.acknowledgeCloseNotify=true -Dlog4j2.formatMsgNoLookups=true -jar -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Dgraylog2.installation_source=deb /usr/share/graylog-server/graylog.jar server -f /etc/graylog/server/server.conf -np

memory info:
free -h
               total        used        free      shared  buff/cache   available
Mem:            11Gi       4,3Gi       137Mi       0,0Ki       7,3Gi       7,1Gi
Swap:          6,0Gi       2,2Gi       3,8Gi

3. What steps have you already taken to try and solve the problem?
I tried modifying Graylog's (Java) memory options and OpenSearch's (Java) memory options, and changing the swap file size. As I can see in the top output, the Java virtual memory is very high:
top - 10:05:34 up 9 days, 19:28, 2 users, load average: 3,49, 4,24, 4,00
Tasks: 194 total, 1 running, 193 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20,8 us, 2,5 sy, 0,0 ni, 60,0 id, 14,5 wa, 0,0 hi, 2,3 si, 0,0 st
MiB Mem : 11947,4 total, 146,0 free, 3534,7 used, 8266,8 buff/cache
MiB Swap: 6144,0 total, 3268,7 free, 2875,3 used. 8098,1 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  504 opensea+  20   0 1609,2g   6,8g   5,9g S 135,7  58,1  11939:21 java
 3766 graylog   20   0 9472324   2,0g   4824 S  59,3  17,2   6615:17 java

4. How can the community help?

How can I reduce memory usage to 85% of physical memory (12 GB)? I believe there are some other options which I could not find until now.
Any help very much appreciated!


Hey @Zoltan

This depends on the amount of logs ingested. You can set the Java heap like this…

-Xms2g 
-Xmx3g 

The flag Xmx specifies the maximum memory allocation pool for a Java Virtual Machine (JVM), while Xms specifies the initial memory allocation pool.
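After changing the heap, a rough way to confirm the new limit was picked up (assuming OpenSearch listens on the default localhost:9200; add -k and credentials if the security plugin is enabled):

  sudo systemctl restart opensearch
  curl -s localhost:9200/_nodes/jvm?pretty | grep heap_max_in_bytes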

Hi G,

You hit the spot! :slight_smile:
Thanks for your suggestion! I will try and come back with the result!
Am I right to assume that the VIRT amount in the top command should be the Xmx3g value, in this case 3 GB for the Java process?

P.S.: the daily log amount is around 400 GiB.

Have a nice day!
Regards,
Z

Hey

That's kind of a lot; you may need to raise it.

Yeah, unless you are doing almost nothing to those logs, that will quickly not be enough for Graylog. What's more concerning is that you say you assigned 1 GB to OpenSearch; OpenSearch won't even last a couple of hours before having issues at that ingestion level.

How long are you keeping the data for?

Hi Joel,
Thanks for your comment!
Yes, I just noticed that my queries were not running, and I have increased the OpenSearch max heap limit to 1.5 GB. It runs now.

I believe we have a 10-day rotation period.
Regards,
Z

If you are ingesting 400 GB a day and keeping it for 10 days, you will probably need at least 8 GB of heap just for OpenSearch, assuming optimal shard sizes in the 40 GB/shard range.
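Rough math behind that, using the common guideline of roughly 20 shards per GB of heap (the exact numbers depend on your index setup):

  400 GB/day x 10 days retention  = ~4 TB of data kept
  4 TB / ~40 GB per shard         = ~100 shards
  100 shards / ~20 shards per GB  = ~5 GB of heap as a bare floor,
  plus indexing and search overhead, which is how you land at 8 GB or more.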

Thanks Joel for the prompt response!
So a lot more memory is needed for this amount of data (4 TB).

Hey @gsmith,

I have increased the physical memory of the system to 14 GB RAM and adjusted the configuration a bit:
set vm.max_map_count=262144

-Xms3g
-Xmx3g
-XX:MaxDirectMemorySize=3g
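(For reference, the heap and direct memory flags above go in /etc/opensearch/jvm.options on a deb install; to keep vm.max_map_count across reboots I persist it via sysctl, roughly like this — the file name is just an example:)

  echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-opensearch.conf
  sudo sysctl --system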

And I still don't understand why it consumes a lot more (10 GB). Can anyone help?
● opensearch.service - OpenSearch
Loaded: loaded (/lib/systemd/system/opensearch.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2024-03-08 12:14:10 CET; 4min 39s ago
Docs: https://opensearch.org/
Main PID: 2418787 (java)
Tasks: 135 (limit: 14295)
Memory: 10.2G

Thanks in advance!

This may explain some of it: High memory usage on master nodes - #2 by radu.gheorghe (OpenSearch forum)
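Worth noting: the systemd Memory figure and top's VIRT/RES cover more than the Java heap; page cache and mmap'd index files get counted too, so they will not match -Xmx. A rough way to compare the actual heap against total process memory (assuming the default localhost:9200 endpoint; add -k and credentials if the security plugin is enabled):

  curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.current,heap.max,ram.current,ram.max'
  pmap -x <opensearch_pid> | tail -n 1    # the VIRT total is mostly file-backed mappings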


Hey @Zoltan

OS/ES can be a memory hog. Since you're ingesting 400 GB daily and your VIRT memory is high, most users would resort to creating a cluster. On my single node I was pulling 50-60 GB daily; I had 20 GB RAM & 12 CPU and my heap was -Xms5g -Xmx10g, and that kept it from failing. As @Joel_Duffield pointed out in that post, if you set your minimum heap (-Xms) the same as your maximum heap, it will always use the maximum amount of memory even if it doesn't need it. But I'm telling you, 400 GB daily is getting up there for resources. You may want to rethink your setup.

