Graylog 100% CPU Usage

Hello,

Graylog consumes 100% CPU all the time and the VM freezes.

Graylog version 6.0.5.1
Java version 17.0.12

The VM has 16 vCPU, 32GB RAM.

I tried lowering the buffer values.

server.conf:

node_id_file = /etc/graylog/server/node-id
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 0.0.0.0:9000
stream_aware_field_types=false
elasticsearch_hosts = http://127.0.0.1:9200
disabled_retention_strategies = none,close
allow_leading_wildcard_searches = false
allow_highlighting = false
field_value_suggestion_mode = on
output_batch_size = 500
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 2
outputbuffer_processors = 2
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_wait_strategy = blocking
inputbuffer_processors = 1
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
integrations_scripts_dir = /usr/share/graylog-server/scripts

Thanks in advance.

Best Regards,

Could this be a Java issue, i.e. Graylog not using the bundled Java runtime it ships with?

Hello,

How can I check?

As mentioned in the topic:

Can you try to “comment out” (add a # to the line) this line, JAVA=/usr/bin/java, in your JVM settings file? Should be /etc/sysconfig/graylog-server for RPM/yum installs.

It should look like this when commented out:

#JAVA=/usr/bin/java

This will allow Graylog to use its bundled JDK and no longer rely on the OS JDK.

And the bottom line in the Graylog web interface should state something like:

Graylog 5.2.10+c04b5a4 on (Eclipse Adoptium 17.0.12 on Linux 4.18.0-553.16.1.el8_10.x86_64)
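
A quick way to double-check from the shell which JVM the running graylog-server process is actually using (this assumes the process command line contains "graylog-server" and that you run it as root so /proc is readable):

# PID of the Graylog server process
pid=$(pgrep -f graylog-server | head -n1)

# First token of the command line is the java executable that was launched
ps -o cmd= -p "$pid" | awk '{print $1}'

# Resolve it through any symlinks to the real binary
readlink -f "/proc/$pid/exe"

With the bundled JDK you should see a path inside the Graylog installation (on package installs it is usually somewhere under /usr/share/graylog-server/), rather than /usr/bin/java or an /etc/alternatives link.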

Hi,

I don't have graylog-server in /etc/sysconfig/, only opensearch.

In the opensearch config, I don't have the JAVA=/usr/bin/java line:



# k-NN Lib Path
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/share/opensearch/plugins/opensearch-knn/lib

# OpenSearch Java path
#OPENSEARCH_JAVA_HOME=/usr/lib/jvm/java-11-amazon-corretto

# OpenSearch configuration directory
# Note: this setting will be shared with command-line tools
OPENSEARCH_PATH_CONF=/etc/opensearch

# OpenSearch PID directory
PID_DIR=/var/run/opensearch

# Additional Java OPTS
#OPENSEARCH_JAVA_OPTS=

# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true

################################
# OpenSearch service
################################

# The number of seconds to wait before checking if OpenSearch started successfully as a daemon process
OPENSEARCH_STARTUP_SLEEP_TIME=5

# Notification for systemd
OPENSEARCH_SD_NOTIFY=true

################################
# System properties
################################

# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/opensearch.service takes precedence
#MAX_OPEN_FILES=65535

# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in opensearch.yml.
# When using systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/opensearch.service.d/override.conf.
#MAX_LOCKED_MEMORY=unlimited

# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set

You can find the option under /etc/default/graylog-server assuming this is Ubuntu.
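
If it is not there either, a quick grep over the usual environment-file locations should show wherever JAVA is set (these paths are just the common candidates for DEB and RPM installs):

# Look for a JAVA= line, commented or not, in the service environment files
grep -EHn '^#?JAVA=' /etc/default/graylog-server /etc/sysconfig/graylog-server 2>/dev/null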

The option was already commented out; I tried uncommenting it, but the problem is still the same.

It appears Graylog and OpenSearch are installed on the same node. Is the host hitting 100% CPU without ingest, or are you currently ingesting data? If you are ingesting data, how much per day?

When looking under System/Nodes, are both the process and output buffers full, or just the process buffer?
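
If the web UI is too sluggish to reach System/Nodes while the box is pegged, a few shell checks can answer roughly the same questions. The REST paths below are from memory, so treat them as a guess; the port comes from your http_bind_address, and "admin:yourpassword" is a placeholder:

# Which threads are actually burning CPU (process buffer, output buffer, GC, ...)?
top -H -p "$(pgrep -f graylog-server | head -n1)"

# Is the journal filling up because output to OpenSearch cannot keep up?
du -sh /var/lib/graylog-server/journal

# Journal and buffer utilization via the REST API (endpoint names may differ by version)
curl -s -u admin:yourpassword http://127.0.0.1:9000/api/system/journal
curl -s -u admin:yourpassword http://127.0.0.1:9000/api/system/buffers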
