Memory consumption above configuration limit

Hello

1. Describe your incident:

My machine is running out of memory after a few hours; the graylog-datanode service is consuming more than the maximum memory assigned in its configuration.

2. Describe your environment:

  • OS Information: Red Hat 9.6, VM with 20 vCPUs, 128 GB RAM

  • Package Version: Graylog 6.3.3

  • Service logs, configurations, and environment variables:

  • around 400 GB of logs per day

/etc/graylog/datanode/datanode.conf

node_id_file = /etc/graylog/datanode/node-id
config_location = /etc/graylog/datanode
mongodb_uri = mongodb://localhost/graylog
bind_address = 0.0.0.0
opensearch_location = /usr/share/graylog-datanode/dist
opensearch_config_location = /var/lib/graylog-datanode/opensearch/config
opensearch_data_location = /logdata/opensearch/data
opensearch_logs_location = /data/log/graylog-datanode/opensearch
opensearch_heap = 62g

/etc/graylog/server/server.conf
is_leader = true
node_id_file = /etc/graylog/server/node-id
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 192.168.50.106:9000
http_enable_tls = false
stream_aware_field_types=false
disabled_retention_strategies = none,close
allow_leading_wildcard_searches = false
allow_highlighting = false
field_value_suggestion_mode = on
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 10
outputbuffer_processors = 10
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_wait_strategy = blocking
inputbuffer_processors = 2
message_journal_enabled = true
message_journal_dir = /data/graylog-server/journal
message_journal_max_age = 72h
message_journal_max_size = 60gb
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000

/etc/sysconfig/graylog-server
GRAYLOG_SERVER_JAVA_OPTS="-Xms48g -Xmx48g -server -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow"
GRAYLOG_SERVER_JAVA_OPTS="$GRAYLOG_SERVER_JAVA_OPTS -Djdk.tls.acknowledgeCloseNotify=true -Djavax.net.ssl.trustStore=/etc/graylog/graylog.jks"
GRAYLOG_SERVER_JAVA_OPTS="$GRAYLOG_SERVER_JAVA_OPTS -Dlog4j2.formatMsgNoLookups=true"
GRAYLOG_SERVER_ARGS=""
GRAYLOG_COMMAND_WRAPPER=""

/etc/graylog/datanode/jvm.options
-Xms48g
-Xmx48g
-XX:+UseG1GC
-XX:-OmitStackTraceInFastThrow
-XX:+UnlockExperimentalVMOptions
-Djdk.tls.acknowledgeCloseNotify=true

3. What steps have you already taken to try and solve the problem?

I have tried increasing and decreasing the -Xms/-Xmx values and also the different buffers, but with this configuration graylog-datanode.service is still using 82 GB of memory.
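For anyone wanting to reproduce the observation, the per-service memory usage can be checked with standard systemd/procps tools, for example:

systemctl status graylog-datanode        # the "Memory:" line shows the service cgroup's current usage
ps -eo pid,rss,comm --sort=-rss | head   # resident set size per process, largest first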

What am I missing or doing wrong here?

Thanks

-Xmx sets the JVM heap size, but the JVM consumes more memory than just the heap (metaspace, thread stacks, direct buffers, GC overhead, and so on).
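If you want to see where the memory beyond the heap is going, the JVM's Native Memory Tracking gives a breakdown. The flag and the jcmd command below are standard HotSpot tooling; whether the datanode passes jvm.options straight through to the OpenSearch process it manages is an assumption on my part:

/etc/graylog/datanode/jvm.options
-XX:NativeMemoryTracking=summary

# after a restart, ask the running JVM for the breakdown (heap, metaspace, threads, GC, code cache, ...)
jcmd <pid> VM.native_memory summary

Note that NMT adds a small runtime overhead, so it is something to enable temporarily for diagnosis.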

The heap should be at most half of the available memory, and it is advised not to go beyond 32 GB (above roughly that point the JVM loses compressed object pointers, so a larger heap can actually perform worse).

So you should have X nodes with 64 GB of RAM each and the Java heap configured at 32 GB.

That does not mean the other 32 GB is not used: OpenSearch relies heavily on the operating system's file system cache, which is what the remaining memory serves.
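Applied to the datanode file from the original post, a minimal sketch would be the following (31g is illustrative, simply kept below the 32 GB mark, and it assumes the same 128 GB box also has to accommodate the graylog-server heap and the OS file cache):

/etc/graylog/datanode/datanode.conf
opensearch_heap = 31g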
