How can I configure my Graylog server to process all the messages contained in the journal?

1. Describe your incident:

In the journal, the number of unprocessed messages keeps growing. How can these messages get processed? I receive about 350 messages/s in, but only about 40/s go out.
I have the following alert:

Journal utilization is too high (triggered 2 hours ago)

Journal utilization is too high and may go over the limit soon. Please verify that your Elasticsearch cluster is healthy and fast enough. You may also want to review your Graylog journal settings and set a higher limit
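
For reference, the journal limit the alert refers to is message_journal_max_size (default 5gb), which the Docker image exposes as an environment variable with the GRAYLOG_ prefix. The value below is only an illustration, not what I am running, and raising it would only buy time while messages keep piling up:

    graylog:
      environment:
        # message_journal_max_size defaults to 5gb; a bigger journal only delays the alert
        - GRAYLOG_MESSAGE_JOURNAL_MAX_SIZE=10gb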

2. Describe your environment:

  • OS Information: 1 node: Debian 11.7 on VMware, 8 vCPU, 16 GB RAM

  • Package Version:
    mongo:5
    elasticsearch-oss:7.10.2
    graylog:5.1.5

  • Service logs, configurations, and environment variables:
    My docker configuration file:
    version: "2"
    services:
      mongodb:
        image: mongo:5
        networks:
          - graylog
        # DB on a share for persistence
        volumes:
          - /mongo_data:/data/db

      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
        # data folder on a share for persistence
        volumes:
          - /es_data:/usr/share/elasticsearch/data
        environment:
          - http.host=0.0.0.0
          - transport.host=localhost
          - network.host=0.0.0.0
          # - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
          - "ES_JAVA_OPTS=-Xms8g -Xmx8g"
        ulimits:
          memlock:
            soft: -1
            hard: -1
        mem_limit: 8g
        networks:
          - graylog

      graylog:
        image: graylog/graylog:5.1.5
        # journal and config directories on a local NFS share for persistence
        volumes:
          - /graylog_journal:/usr/share/graylog/data/journal
          - /graylog_plugin:/usr/share/graylog/plugin
        environment:
          # CHANGE ME (must be at least 16 characters)!
          - GRAYLOG_PASSWORD_SECRET=XXXXXXXXXXXXXXXXXXXXXXX
          - GRAYLOG_ROOT_PASSWORD_SHA2=XXXXXXXXXXXXXXXXXXXX
          - GRAYLOG_HTTP_EXTERNAL_URI=http://10.1.200.73:9000/
          - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0
        entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
        networks:
          - graylog
        links:
          - mongodb:mongo
          - elasticsearch
        restart: always
        depends_on:
          - mongodb
          - elasticsearch
        ports:
          # Graylog web interface and REST API
          - 9000:9000
          # Syslog TCP
          - 1514:1514
          # Syslog UDP
          - 1514:1514/udp
          # GELF TCP
          - 12201:12201
          # GELF UDP
          - 12201:12201/udp
          # FORTIGATE RAW
          - 12514:12514/udp
          # FORTIGATE CEF
          - 12513:12513/udp
          # FORTIGATE CEF TCP
          - 12513:12513
          # DARKTRACE
          - 12518:12518/udp

    volumes:
      mongo_data:
        driver: local
      es_data:
        driver: local
      graylog_journal:
        driver: local

    networks:
      graylog:
        driver: bridge

3. What steps have you already taken to try and solve the problem?
I have increased memory and vCPU with no effect.
I have increased ES_JAVA_OPTS to 8g.

4. How can the community help?

I want to know what is wrong in my configuration. Does the server need to be upgraded (vCPU/RAM)?
Is there a parameter I can modify to increase the number of messages processed per second?
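
For example, are the processing and output settings in server.conf the right place to look? A sketch of the kind of values I mean (set through the Docker image's GRAYLOG_ prefix; the numbers are placeholders, not a tested recommendation):

    graylog:
      environment:
        # threads working the process and output buffers
        # (server.conf: processbuffer_processors / outputbuffer_processors)
        - GRAYLOG_PROCESSBUFFER_PROCESSORS=5
        - GRAYLOG_OUTPUTBUFFER_PROCESSORS=3
        # messages sent to the search backend per batch (server.conf: output_batch_size)
        - GRAYLOG_OUTPUT_BATCH_SIZE=1000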

With slow message output, my first guess is always OpenSearch (Elasticsearch in your case) performance. You are running both of these on a single server and have assigned 8 GB to Elasticsearch; how much does Graylog have assigned? Also, you probably shouldn't give the two heaps combined more than half of total RAM.
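
On a 16 GB host, a split along those lines might look roughly like the following in the compose file (a sketch only, assuming nothing else heavy runs on the VM; about 4 GB per heap keeps the two combined at half of total RAM):

    elasticsearch:
      environment:
        - "ES_JAVA_OPTS=-Xms4g -Xmx4g"

    graylog:
      environment:
        - GRAYLOG_SERVER_JAVA_OPTS=-Xms4g -Xmx4g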

Thanks for your answer. For Graylog I use the default value, because when I try to set the Graylog heap with the following parameter, I get the message below and the server doesn't start:
- GRAYLOG_SERVER_JAVA_OPTS="-Xms3500m -Xmx3500m -XX:NewRatio=1 -XX:MaxMetaspaceSize=256m -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow"
Error message in logs:
wait-for-it: waiting 15 seconds for elasticsearch:9200
wait-for-it: elasticsearch:9200 is available after 0 seconds
adding environment opts
Error: Could not find or load main class "-Xms3500m
Caused by: java.lang.ClassNotFoundException: "-Xms3500m

Hi,

Finally, I found the root cause of both problems:

  • To assign the JVM memory for Graylog in the docker-compose file, it was a quoting issue; without the quotes it works:
    - GRAYLOG_SERVER_JAVA_OPTS=-Xms3500m -Xmx3500m

  • The initial problem (unprocessed messages in the journal increasing continuously) was caused by an output plugin, graylog-plugin-http-output-1.0.6.jar, but I have not found the solution yet.
    Any idea would be welcome.
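
    One isolation step I am considering (untested) is to start Graylog once without the plugin directory mounted; if the journal then drains, the HTTP output plugin (or the endpoint it posts to) is confirmed as the bottleneck:

    graylog:
      volumes:
        - /graylog_journal:/usr/share/graylog/data/journal
        # temporarily disabled so graylog-plugin-http-output-1.0.6.jar is not loaded
        # - /graylog_plugin:/usr/share/graylog/plugin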

Community plugins normally don't have their own buffer or cache. Because of this, if the output has an issue or cannot deliver messages, it can back up the entire system (the data backs up into the shared output buffer, which both outputs and OpenSearch use) and even cause data to stop being written to OpenSearch.

So either something is wrong in the config, the plugin has a bug and simply doesn't work anymore, or the messages cannot be delivered.
