Graylog datanode 6.3 - Data Node Heap Size Warning that doesn't go away

1. Describe your incident:
After migrating to the Graylog data node, I tried to set the data node heap size to 6 GB, as suggested by the documentation: Data Node Configuration

To my surprise, a warning was triggered in the GUI:
There are data node nodes in the cluster which could potentially run with a higher configured heap size for better performance.

Data node REDACTED only has 1 GB Java Heap assigned, out of a total of 13 GB RAM.
Currently, there is 9 GB free memory available on the node. We recommend to make an additional half of this available to the Java Heap.

Note: For production performance, it is recommended to configure this node to use 6 GB Java Heap (50% of RAM).
The Java Heap can be configured using the opensearch_heap configuration parameter in the node’s configuration file (datanode.conf).

I’ve verified that the parameter is set correctly, also checking ps output:

graylog+ 48423 104 50.6 22093496 7229776 ? Sl 10:21 421:53 /usr/share/graylog-datanode/dist/opensearch-2.15.0-linux-x64/jdk/bin/java -Xshare:auto -Dopensearch.networkaddress.cache.ttl=60 -Dopensearch.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.security.manager=allow -Djava.locale.providers=SPI,COMPAT -Xms1g -Xmx1g -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -Djava.io.tmpdir=/tmp/opensearch-4314876711978476423 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=/tmp/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/tmp/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.security.manager=allow -Xms6g -Xmx6g -Dopensearch.transport.cname_in_publish_address=true … (more output follows, not relevant to the issue)

Please note that the -Xms and -Xmx parameters are present twice - once with the default 1g, and once with the 6g I set.

It seems that the warning only takes the first encountered value (1g) into account.
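
To make the duplication easier to spot, the heap flags can be filtered out of the command line. A quick sketch, run over a shortened copy of the ps output above (on a live system you could feed in e.g. `ps -o args= -p <pid>` instead):

```shell
# Sketch: list every -Xms/-Xmx flag on a JVM command line, to make the
# duplication visible. The sample string is a shortened copy of the ps
# output above, not live process data.
cmdline="java -Djna.nosys=true -Xms1g -Xmx1g -XX:+UseG1GC -Xms6g -Xmx6g org.opensearch.bootstrap.OpenSearch"
printf '%s\n' $cmdline | grep -E '^-Xm[sx]'
```

This prints all four flags (-Xms1g, -Xmx1g, -Xms6g, -Xmx6g), confirming the heap size really is passed twice.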

2. Describe your environment:

  • OS Information: Ubuntu 22.04.5 LTS

  • Package Version: graylog-datanode/stable,now 6.3.7-1 amd64

  • Service logs, configurations, and environment variables: Graylog server 6.3.7 + datanode 6.3.7 + MongoDB 7.0 on single VM, opensearch_heap = 6g set in datanode.conf

3. What steps have you already taken to try and solve the problem?

Commenting out and/or setting 6g in the following files, in addition to datanode.conf:

  1. /etc/graylog/datanode/jvm.options
  2. /usr/share/graylog-datanode/dist/opensearch-2.15.0-linux-x64/config/jvm.options

Different combinations with service restarts didn’t change the behaviour (in some cases the value changed for the other running graylog-datanode process, but not for OpenSearch).

I’ve also come across this post: https://mybroadband.co.za/forum/threads/i’m-going-mad-graylog.1320773/ but it seems to me that those steps set the values for the server, not the data node.

Additionally, I’ve installed a clean instance of this stack on a fresh VM (also Ubuntu 22.04), confirmed the same behaviour, and then upgraded to Graylog 7.0 - again, same issue.

4. How can the community help?

How should I set the heap value correctly? And how can I verify the actual value used by OpenSearch - i.e., whether it is 1g or 6g in my case? I can ignore the warning if the heap is actually set correctly.

Thanks in advance :slight_smile:
Wojciech

The log file will tell you the assigned heap of both the data node and OpenSearch during a restart of the data node service. If you’re having trouble finding it, post the output here.
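
A sketch of what to look for (the log path varies by install; the sample line below only shows the expected format, it is not live output):

```shell
# Sketch: the data node logs the assigned OpenSearch heap on startup.
# On a real system, grep the service log instead, e.g. (path is an
# assumption, adjust for your install):
#   grep 'heap size' /var/log/graylog-datanode/datanode.log
sample='[o.o.e.NodeEnvironment] [node] heap size [6gb], compressed ordinary object pointers [true]'
echo "$sample" | grep -oE 'heap size \[[0-9]+[gm]b\]'
```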

You should only be setting opensearch_heap within datanode.conf to alter the heap settings for OpenSearch; editing the jvm.options files isn’t recommended, but I can see why you attempted it there.

To add to this, it is a little confusing because there are two services. OpenSearch is the one you want set to 6 GB, and that should be done via opensearch_heap in datanode.conf. The datanode service technically also has a heap setting, but it almost never needs to be changed from the default, because it is just managing the OpenSearch service - and OpenSearch is what does all the heavy lifting.

Good morning,

thanks for your input - your answers do clarify the situation a fair bit, but some questions are still left open.

As described by @Wine_Merchant, the opensearch_heap parameter in datanode.conf is indeed the current configuration; all the other options I mentioned were experimentation, trying to get a better understanding of the problem :slight_smile:

While the datanode.log says…
2026-01-07T10:21:27.839+01:00 INFO [OpensearchProcessImpl] [2026-01-07T10:21:27,838][INFO ][o.o.e.NodeEnvironment ] [REDACTED] heap size [6gb], compressed ordinary object pointers [true]

… confusingly, just before the message above, it shows the JVM arguments matching the ps output - note the doubled -Xms and -Xmx parameters:
2026-01-07T10:21:24.335+01:00 INFO [OpensearchProcessImpl] [2026-01-07T10:21:24,335][INFO ][o.o.n.Node ] [REDACTED] JVM arguments [-Xshare:auto, -Dopensearch.networkaddress.cache.ttl=60, -Dopensearch.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.security.manager=allow, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/opensearch-4314876711978476423, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=/tmp/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/tmp/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.security.manager=allow, -Xms6g, -Xmx6g, -Dopensearch.transport.cname_in_publish_address=true, … (more irrelevant output here)

To clarify, in response to @Joel_Duffield - it seems that I’ve identified the relevant services correctly according to your post, but just to be on the same page - please check the ps output for both services below - note 1g set for process no. 47901, and double parameters (1g and 6g) for process no. 48423:

ps aux | grep -i datanode
root 22027 0.0 0.0 6612 2368 pts/1 S+ 09:35 0:00 grep --color=auto -i datanode
graylog+ 47901 2.9 7.3 6844592 1047604 ? Ssl Jan07 40:41 /usr/share/graylog-datanode/jvm/bin/java -Dlog4j.configurationFile=file:///etc/graylog/datanode/log4j2.xml -Xms1g -Xmx1g -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+UnlockExperimentalVMOptions -Djdk.tls.acknowledgeCloseNotify=true -jar /usr/share/graylog-datanode/graylog-datanode.jar datanode -f /etc/graylog/datanode/datanode.conf
graylog+ 48423 105 50.7 21959416 7246120 ? Sl Jan07 1474:11 /usr/share/graylog-datanode/dist/opensearch-2.15.0-linux-x64/jdk/bin/java -Xshare:auto -Dopensearch.networkaddress.cache.ttl=60 -Dopensearch.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.security.manager=allow -Djava.locale.providers=SPI,COMPAT -Xms1g -Xmx1g -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -Djava.io.tmpdir=/tmp/opensearch-4314876711978476423 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=/tmp/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/tmp/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.security.manager=allow -Xms6g -Xmx6g -Dopensearch.transport.cname_in_publish_address=true (truststore data redacted) -XX:MaxDirectMemorySize=3221225472 -Dopensearch.path.home=/usr/share/graylog-datanode/dist/opensearch-2.15.0-linux-x64 -Dopensearch.path.conf=/var/lib/graylog-datanode/opensearch/config/opensearch16121731408525866320 -Dopensearch.distribution.type=tar -Dopensearch.bundled_jdk=true -cp /usr/share/graylog-datanode/dist/opensearch-2.15.0-linux-x64/lib/* org.opensearch.bootstrap.OpenSearch

To reiterate - if the log entry confirms that the heap value is set correctly, then that’s fine for me. In that case, though, the issue of the false-positive warning still remains, and, as you can see, it can be very confusing to investigate :slight_smile:

Best Regards,
Wojciech

It’s a bit confusing and I’m not sure how to interpret this behavior. I have set the following values:

/etc/graylog/datanode/datanode.conf:

opensearch_heap = 123M

/etc/graylog/datanode/jvm.options:

-Xms512M
-Xmx512M

After restarting the graylog-datanode service, I checked the opensearch process:

graylog+ 24787  8.5 21.5 2914816 425368 ?      Sl   09:15   2:18 /usr/share/graylog-datanode/dist/opensearch-2.15.0-linux-x64/jdk/bin/java -Xshare:auto -Dopensearch.networkaddress.cache.ttl=60 -Dopensearch.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.security.manager=allow -Djava.locale.providers=SPI,COMPAT -Xms1g -Xmx1g -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -Djava.io.tmpdir=/tmp/opensearch-5097016897734639382 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=/tmp/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/tmp/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.security.manager=allow -Xms123M -Xmx123M -Dopensearch.transport.cname_in_publish_address=true -Djavax.net.ssl.trustStore=datanode-truststore.p12 -Djavax.net.ssl.trustStorePassword=REDACTED_HERE -Djavax.net.ssl.trustStoreType=pkcs12 -XX:MaxDirectMemorySize=65011712 -Dopensearch.path.home=/usr/share/graylog-datanode/dist/opensearch-2.15.0-linux-x64 -Dopensearch.path.conf=/var/lib/graylog-datanode/opensearch/config/opensearch6416476773951123953 -Dopensearch.distribution.type=tar -Dopensearch.bundled_jdk=true -cp /usr/share/graylog-datanode/dist/opensearch-2.15.0-linux-x64/lib/* org.opensearch.bootstrap.OpenSearch

As you can see, the opensearch_heap (123M) was passed to the process, but there are also -Xms1g and -Xmx1g entries. These likely come from the jvm.options file located in /var/lib/graylog-datanode/opensearch/config/opensearch6416476773951123953/:

grep -i xm /var/lib/graylog-datanode/opensearch/config/opensearch6416476773951123953/jvm.options
## -Xms4g
## -Xmx4g
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms1g
-Xmx1g

Which value is actually binding in this case?

Is the jvm.options file ephemeral and created in a controlled manner by the graylog-datanode service? Is it possible to define a permanent location for this file? I’ve noticed that the configuration directory changes from time to time.

Best regards

Hey @roman_the ,

I agree that this is a bit confusing. If a JVM option is repeated, the later occurrence is used. The 1g is a fallback in this specific case, coming from the generated jvm.options file, which can’t be modified and is regenerated during each datanode startup. The whole config directory is immutable and always regenerated, as you noticed.
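
The "later occurrence wins" rule can be illustrated by resolving a duplicated flag list by hand (a minimal sketch using sample flags mirroring the ps output in this thread, not live process data):

```shell
# JVM rule being illustrated: when -Xms/-Xmx appear more than once on the
# command line, the last occurrence wins. Sample flags only.
flags="-Xms1g -Xmx1g -Xms6g -Xmx6g"
for f in $flags; do
  case $f in
    -Xms*) xms=${f#-Xms} ;;  # later values overwrite earlier ones
    -Xmx*) xmx=${f#-Xmx} ;;
  esac
done
echo "effective heap: -Xms$xms -Xmx$xmx"
```

So with the duplicated 1g/6g pairs above, the 6g values are the ones the JVM actually uses.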

You can verify your actual JVM memory settings by calling

jcmd _your_pid_ GC.heap_info

➜ ~ jps -v | grep opensearch
152969 OpenSearch -Xshare:auto -Dopensearch.networkaddress.cache.ttl=60 -Dopensearch.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.security.manager=allow -Djava.locale.providers=SPI,COMPAT -Xms1g -Xmx1g -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -Djava.io.tmpdir=/tmp/opensearch-12180839499668351114 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=/tmp/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/tmp/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.security.manager=allow -Djava.security.policy=file:///home/tdvorak/bin/datanode/config/opensearch1120374852140703251/opensearch.policy -Xms4g -Xmx4g -Dopens
➜ ~ jcmd 152969 GC.heap_info
152969:
garbage-first heap total 4194304K, used 484099K [0x0000000700000000, 0x0000000800000000)
region size 2048K, 214 young (438272K), 26 survivors (53248K)
Metaspace used 134347K, committed 137472K, reserved 1179648K
class space used 17884K, committed 19328K, reserved 1048576K

The 4g in my example above is the opensearch_heap configuration value coming from datanode.conf.

What you set in /etc/graylog/datanode/jvm.options doesn’t influence the opensearch process itself, only the datanode service, the wrapper/manager of the opensearch process. This value can be kept quite low, as the wrapper has no significant memory usage.

@Tdvorak thanks for the details - I’ve now checked the parameters using the commands provided, and indeed, the correct value of 6 GB is shown.

While we have now established that the OpenSearch service works as intended, the issue of the incorrect warning still remains.

I’ll clear this warning on our instance right now, and I’ll monitor if it comes back - to confirm the original problem that led to this discussion. If it pops up again, I’ll provide information here.

Thanks,
Wojciech

Thanks for testing and feedback! I don’t think the warning would disappear automatically, so clearing it manually is the way to go. If you notice anything suspicious again, please let us know.

Meanwhile, I’ve opened a change to simplify the jvm.options file and remove the duplicated memory settings, so they no longer appear twice and cause unnecessary confusion: Remove not needed and confusing settings in opensearch jvm.options by todvora · Pull Request #24669 · Graylog2/graylog2-server · GitHub