CPU maxed out on single-node cluster

Hi Folks,

I have a strange problem with my Graylog installation.

1. Describe your incident:

The CPU is always maxed out, even when inputs are disabled and processing is paused.
The journal is empty. (I previously had an issue with tens of thousands of queued messages and processing not progressing, but I deleted the journal, and it now stays below 0.1% utilization with inputs and processing running.)
The disk was not full.
I deleted all Grok patterns and have no regex extractors (processing is paused anyway).
The installation ran fine for months; the CPU issue started about a week ago.
I did install updates in the meantime.
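For reference, this is how I double-checked that the journal really is empty, by looking at its on-disk size (directory taken from message_journal_dir in my server.conf below):

```shell
# Double-check that the journal really is empty via its on-disk size.
# Directory comes from message_journal_dir in server.conf.
JOURNAL=/var/lib/graylog-server/journal
if [ -d "$JOURNAL" ]; then
    du -sh "$JOURNAL"                       # an empty journal is only a few MB
    ls "$JOURNAL"/messagejournal-0 | wc -l  # number of journal segment files
else
    echo "journal directory not found: $JOURNAL"
fi
```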

2. Describe your environment:
Homelab environment
Ubuntu 22.04.3 LTS on VMware ESXi
VM has 2 cores, 8 GB RAM, 120 GB HDD

  • Package Version:

graylog-enterprise/stable,now 5.1.5-1
opensearch/stable,now 2.9.0
mongodb-org/jammy,now 6.0.9

  • Service logs, configurations, and environment variables:
server.conf

is_leader = true
node_id_file = /etc/graylog/server/node-id
password_secret = *******
root_username = *****
root_password_sha2 = ******
root_timezone = Europe/Berlin
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 0.0.0.0
http_publish_uri = http://graylog.x.y:9000
stream_aware_field_types = false
elasticsearch_hosts = http://192.168.100.19:9200
allow_leading_wildcard_searches = false
allow_highlighting = false
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 1 (was 3 before, no impact)
outputbuffer_processors = 1 (was 2 before, no impact)
udp_recvbuffer_sizes = 1048576
processor_wait_strategy = yielding
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 1 (was 2 before, no impact)
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000

opensearch.yml

cluster.name: graylog
node.name: ${HOSTNAME}
path.data: /var/lib/opensearch
path.logs: /var/log/opensearch
network.host: 0.0.0.0
discovery.type: single-node
action.auto_create_index: false
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de
plugins.security.disabled: true
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".ql-datasources", ".opendistro-asynchronous-search-response*", ".replication-metadata-store", ".opensearch-knn-models"]
node.max_local_storage_nodes: 3

jvm.options

-Xms2g
-Xmx2g
8-10:-XX:+UseConcMarkSweepGC
8-10:-XX:CMSInitiatingOccupancyFraction=75
8-10:-XX:+UseCMSInitiatingOccupancyOnly
11-:-XX:+UseG1GC
11-:-XX:G1ReservePercent=25
11-:-XX:InitiatingHeapOccupancyPercent=30
-Djava.io.tmpdir=${OPENSEARCH_TMPDIR}
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/lib/opensearch
-XX:ErrorFile=/var/log/opensearch/hs_err_pid%p.log
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:/var/log/opensearch/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/opensearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
18-:-Djava.security.manager=allow
-Dclk.tck=100
-Djdk.attach.allowAttachSelf=true
-Djava.security.policy=file:///etc/opensearch/opensearch-performance-analyzer/opensearch_security.policy
--add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED

top

top - 20:16:21 up 20 min, 1 user, load average: 1,70, 0,55, 0,30
Tasks: 206 total, 1 running, 205 sleeping, 0 stopped, 0 zombie
%Cpu(s): 59,7 us, 5,7 sy, 0,0 ni, 18,6 id, 15,5 wa, 0,0 hi, 0,5 si, 0,0 st
MiB Mem : 7938,0 total, 4051,9 free, 3041,8 used, 844,3 buff/cache
MiB Swap: 3925,0 total, 3925,0 free, 0,0 used. 4648,0 avail Mem

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 954 graylog   20   0 3639876 431588  23332 S 105,6   5,3   0:44.54 java
1636 opensea+  20   0 5654204   2,3g  38084 S  21,8  30,2   1:10.14 java
 914 mongodb   20   0 2636948 134784  62224 S   2,3   1,7   0:14.15 mongod
  22 root      20   0       0      0      0 S   0,3   0,0   0:00.17 ksoftirqd/1
 809 root      20   0  241284   8868   7496 S   0,3   0,1   0:02.15 vmtoolsd
1872 matthias  20   0   10632   4232   3364 R   0,3   0,1   0:00.18 top
   1 root      20   0  166232  11492   8200 S   0,0   0,1   0:10.29 systemd
   2 root      20   0       0      0      0 S   0,0   0,0   0:00.02 kthreadd
   3 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 rcu_gp

3. What steps have you already taken to try and solve the problem?

The OpenSearch cluster is green (it was yellow before; I got rid of the unassigned shards).
/var/log/graylog/server.log and /var/log/opensearch/graylog.log show no suspicious entries.
I deleted the journal (it was filling up; during a maintenance event many logs were sent and processing did not proceed, possibly due to the high CPU load).
I tried different processor settings.
I moved the VM to a different host and assigned 32 cores (2x Intel Xeon E5-2630L) and 96 GB RAM; the machine fully utilized those CPUs as well.
I deleted all Grok patterns (none self-written) and extractors → no impact.

I created a second VM from scratch, applied all patches, and installed Graylog according to the guide on the Graylog website.
All packages are the same revision. That VM has no problem with CPU load.

I did a lot of Google searches and checked various solutions; none applied to my system.

4. How can the community help?

I need advice on how to find out what keeps the CPU utilization high, and how to fix it.
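For example, I could imagine narrowing it down to individual threads like this (PID 954 taken from my top output above; the hex "nid" mapping and jstack are generic JVM tooling, nothing Graylog-specific):

```shell
# Narrow "java is at 100%" down to individual threads.
GRAYLOG_PID=954   # PID of the graylog java process, from top

# 1) Per-thread CPU view instead of per-process:
#      top -H -p "$GRAYLOG_PID"
# 2) Convert the hottest thread's TID to hex, because jstack reports
#    thread IDs as hexadecimal "nid" values:
TID=954           # example TID; use the one top -H reports
NID=$(printf '%x' "$TID")
echo "nid=0x$NID"
# 3) Pull that thread's stack trace out of a thread dump:
#      jstack "$GRAYLOG_PID" | grep -A 20 "nid=0x$NID"
```

For the example TID 954 this prints `nid=0x3ba`; grepping a jstack dump for that value shows what the hot thread is actually doing.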

What does the Graylog server.log look like? Any errors or anything of interest? Are you saying that when you stop the graylog-server service, CPU usage goes to 0, but when you start it, it goes to 100%, and top shows graylog-server consuming 100% of the CPU?

Also starting from scratch you cannot recreate the problem with the same settings and configuration?

Do you have any custom plugins in /usr/share/graylog-server/plugin?

I’m not seeing anything jump out in the graylog config that can explain what you describe.

Hello Drew,

thanks for your answer!
That's right: as soon as I run "systemctl stop graylog-server.service", the CPU load drops well below 10%, with OpenSearch accounting for 1-5%. When I start the service again, it shoots up to 200-1000% (I don't know where those figures come from; maybe dual processors / hyperthreading?).

I took the second machine into "production", and there I had an issue where processing drove the CPU to its maximum: I ingested very complex and large messages from a vCenter server AND had some pfSense extractors from the Marketplace (I know, I said I had no regex; I forgot about these, as I'm by far no Graylog expert). Messages piled up, and processing a single message took up to around 120 seconds.
As soon as I stopped sending logs from the vCenter server and processing worked through that clog, everything was fine again.

My first machine, however, doesn't have any logs queued; the process buffer dump shows:

“ProcessBufferProcessor #0”: “idle”

but the CPU is still maxed out…

It's not as important to me anymore because I replaced it with the second machine, but it would still be nice to know what caused this and how to avoid it in the future.
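One thing I could still try on the old machine: the idle process buffer means other JVM threads (GC, periodicals, metrics, …) must be doing the work, and a full thread dump would show them. A sketch (jcmd ships with the JDK that runs Graylog; the summary pipeline is just generic text processing):

```shell
# The process-buffer processor is idle, so other JVM threads are burning
# the CPU. A full thread dump shows every thread in the process.
GRAYLOG_PID=954                 # graylog java PID from top
DUMP=/tmp/graylog-threads.txt

# Capture a dump (run as the graylog user or root):
#   jcmd "$GRAYLOG_PID" Thread.print > "$DUMP"

# Summarize thread states; many RUNNABLE threads sharing the same stack
# top usually point at the culprit:
if [ -f "$DUMP" ]; then
    grep -oE 'java\.lang\.Thread\.State: [A-Z_]+' "$DUMP" | sort | uniq -c | sort -rn
fi
```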

/usr/share/graylog-server/plugin

graylog-plugin-aws-5.1.5.jar
graylog-plugin-enterprise-5.1.5.jar
graylog-plugin-enterprise-integrations-5.1.5.jar
graylog-plugin-integrations-5.1.5.jar
graylog-storage-opensearch2-5.1.5.jar
graylog-plugin-collector-5.1.5.jar
graylog-plugin-enterprise-es7-5.1.5.jar
graylog-plugin-enterprise-os2-5.1.5.jar
graylog-storage-elasticsearch7-5.1.5.jar

server.log from boot to running:

2023-09-10T18:50:47.206+02:00 INFO [ImmutableFeatureFlagsCollector] Following feature flags are used: {default properties file=[cloud_inputs=on, scripting_api_preview=on, search_filter=on, preflight_web=off]}
2023-09-10T18:50:48.921+02:00 INFO [CmdLineTool] Loaded plugin: AWS plugins 5.1.5 [org.graylog.aws.AWSPlugin]
2023-09-10T18:50:48.961+02:00 INFO [CmdLineTool] Loaded plugin: Enterprise Integrations 5.1.5 [org.graylog.enterprise.integrations.EnterpriseIntegrationsPlugin]
2023-09-10T18:50:48.974+02:00 INFO [CmdLineTool] Loaded plugin: Integrations 5.1.5 [org.graylog.integrations.IntegrationsPlugin]
2023-09-10T18:50:48.989+02:00 INFO [CmdLineTool] Loaded plugin: Collector 5.1.5 [org.graylog.plugins.collector.CollectorPlugin]
2023-09-10T18:50:49.005+02:00 INFO [CmdLineTool] Loaded plugin: Graylog Enterprise 5.1.5 [org.graylog.plugins.enterprise.EnterprisePlugin]
2023-09-10T18:50:49.010+02:00 INFO [CmdLineTool] Loaded plugin: Graylog Enterprise (ES7 Support) 5.1.5 [org.graylog.plugins.enterprise.org.graylog.plugins.enterprise.es7.EnterpriseES7Plugin]
2023-09-10T18:50:49.015+02:00 INFO [CmdLineTool] Loaded plugin: Graylog Enterprise (OpenSearch 2 Support) 5.1.5 [org.graylog.plugins.enterprise.org.graylog.plugins.enterprise.os2.EnterpriseOS2Plugin]
2023-09-10T18:50:49.018+02:00 INFO [CmdLineTool] Loaded plugin: Threat Intelligence Plugin 5.1.5+993cd0f [org.graylog.plugins.threatintel.ThreatIntelPlugin]
2023-09-10T18:50:49.021+02:00 INFO [CmdLineTool] Loaded plugin: Elasticsearch 7 Support 5.1.5+993cd0f [org.graylog.storage.elasticsearch7.Elasticsearch7Plugin]
2023-09-10T18:50:49.025+02:00 INFO [CmdLineTool] Loaded plugin: OpenSearch 2 Support 5.1.5+993cd0f [org.graylog.storage.opensearch2.OpenSearch2Plugin]
2023-09-10T18:50:49.198+02:00 INFO [CmdLineTool] Running with JVM arguments: -Xms1g -Xmx1g -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -Djdk.tls.acknowledgeCloseNotify=true -Dlog4j2.formatMsgNoLookups=true -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb
2023-09-10T18:50:49.519+02:00 INFO [client] MongoClient with metadata {“driver”: {“name”: “mongo-java-driver|legacy”, “version”: “4.8.1”}, “os”: {“type”: “Linux”, “name”: “Linux”, “architecture”: “amd64”, “version”: “5.15.0-83-generic”}, “platform”: “Java/Eclipse Adoptium/17.0.8+7”} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=, codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@55e42449]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName=‘null’, serverSelector=‘null’, clusterListeners=‘’, serverSelectionTimeout=‘30000 ms’, localThreshold=‘30000 ms’}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=, maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners=‘’, serverMonitorListeners=‘’}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName=‘null’, compressorList=, 
uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null}
2023-09-10T18:50:49.523+02:00 INFO [client] MongoClient with metadata {“driver”: {“name”: “mongo-java-driver|legacy”, “version”: “4.8.1”}, “os”: {“type”: “Linux”, “name”: “Linux”, “architecture”: “amd64”, “version”: “5.15.0-83-generic”}, “platform”: “Java/Eclipse Adoptium/17.0.8+7”} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=, codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@55e42449]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName=‘null’, serverSelector=‘null’, clusterListeners=‘’, serverSelectionTimeout=‘30000 ms’, localThreshold=‘30000 ms’}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=, maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners=‘’, serverMonitorListeners=‘’}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName=‘null’, compressorList=, 
uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null}
2023-09-10T18:50:49.548+02:00 INFO [cluster] Cluster description not yet available. Waiting for 30000 ms before timing out
2023-09-10T18:50:49.571+02:00 INFO [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=40897660}
2023-09-10T18:50:49.703+02:00 INFO [MongoDBPreflightCheck] Connected to MongoDB version 6.0.9
2023-09-10T18:50:50.386+02:00 INFO [FilePersistedNodeIdProvider] Node ID: 547358a0-77f6-4094-83aa-5a0b4a08ab64
2023-09-10T18:50:50.772+02:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: Failed to connect to /192.168.x.x:9200. - Connection refused.
2023-09-10T18:50:50.774+02:00 INFO [VersionProbe] Elasticsearch is not available. Retry #1
2023-09-10T18:50:55.777+02:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: Failed to connect to /192.168.x.x:9200. - Connection refused.
2023-09-10T18:50:55.779+02:00 INFO [VersionProbe] Elasticsearch is not available. Retry #2
2023-09-10T18:51:00.781+02:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: Failed to connect to /192.168.x.x:9200. - Connection refused.
2023-09-10T18:51:00.783+02:00 INFO [VersionProbe] Elasticsearch is not available. Retry #3
2023-09-10T18:51:05.786+02:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: Failed to connect to /192.168.x.x:9200. - Connection refused.
2023-09-10T18:51:05.787+02:00 INFO [VersionProbe] Elasticsearch is not available. Retry #4
2023-09-10T18:51:11.152+02:00 INFO [SearchDbPreflightCheck] Connected to (Elastic/Open)Search version OpenSearch:2.9.0
2023-09-10T18:51:11.495+02:00 INFO [Version] HV000001: Hibernate Validator null
2023-09-10T18:50:38.838+02:00 INFO [InputBufferImpl] Message journal is enabled.
2023-09-10T18:50:38.866+02:00 INFO [FilePersistedNodeIdProvider] Node ID: 547358a0-77f6-4094-83aa-5a0b4a08ab64
2023-09-10T18:50:39.241+02:00 INFO [LogManager] Loading logs.
2023-09-10T18:50:39.283+02:00 WARN [Log] Found a corrupted index file, /var/lib/graylog-server/journal/messagejournal-0/00000000000000153130.index, deleting and rebuilding index…
2023-09-10T18:50:39.336+02:00 INFO [LogManager] Logs loading complete.
2023-09-10T18:50:39.341+02:00 INFO [LocalKafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal
2023-09-10T18:50:39.354+02:00 INFO [client] MongoClient with metadata {“driver”: {“name”: “mongo-java-driver|legacy”, “version”: “4.8.1”}, “os”: {“type”: “Linux”, “name”: “Linux”, “architecture”: “amd64”, “version”: “5.15.0-83-generic”}, “platform”: “Java/Eclipse Adoptium/17.0.8+7”} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=, codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@55e42449]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName=‘null’, serverSelector=‘null’, clusterListeners=‘’, serverSelectionTimeout=‘30000 ms’, localThreshold=‘30000 ms’}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=, maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners=‘’, serverMonitorListeners=‘’}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName=‘null’, compressorList=, 
uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null}
2023-09-10T18:50:39.357+02:00 INFO [client] MongoClient with metadata {“driver”: {“name”: “mongo-java-driver|legacy”, “version”: “4.8.1”}, “os”: {“type”: “Linux”, “name”: “Linux”, “architecture”: “amd64”, “version”: “5.15.0-83-generic”}, “platform”: “Java/Eclipse Adoptium/17.0.8+7”} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=, codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@55e42449]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName=‘null’, serverSelector=‘null’, clusterListeners=‘’, serverSelectionTimeout=‘30000 ms’, localThreshold=‘30000 ms’}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=, maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners=‘’, serverMonitorListeners=‘’}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName=‘null’, compressorList=, 
uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null}
2023-09-10T18:50:39.358+02:00 INFO [cluster] Cluster description not yet available. Waiting for 30000 ms before timing out
2023-09-10T18:50:39.358+02:00 INFO [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=3676705}
2023-09-10T18:50:39.574+02:00 INFO [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy , running 1 parallel message handlers.
2023-09-10T18:50:40.505+02:00 INFO [ElasticsearchVersionProvider] Elasticsearch cluster is running OpenSearch:2.9.0
2023-09-10T18:50:41.624+02:00 INFO [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy .
2023-09-10T18:50:41.766+02:00 INFO [OutputBuffer] Initialized OutputBuffer with ring size <65536> and wait strategy .
2023-09-10T18:50:44.074+02:00 INFO [DbEntitiesScanner] 16 entities have been scanned and added to DB Entity Catalog, it took 1.734 s
2023-09-10T18:50:44.770+02:00 INFO [ServerBootstrap] Graylog server 5.1.5+993cd0f starting up
2023-09-10T18:50:44.771+02:00 INFO [ServerBootstrap] JRE: Eclipse Adoptium 17.0.8 on Linux 5.15.0-83-generic
2023-09-10T18:50:44.772+02:00 INFO [ServerBootstrap] Deployment: deb
2023-09-10T18:50:44.772+02:00 INFO [ServerBootstrap] OS: Ubuntu 22.04.3 LTS (jammy)
2023-09-10T18:50:44.773+02:00 INFO [ServerBootstrap] Arch: amd64
2023-09-10T18:50:45.013+02:00 INFO [ServerBootstrap] Running 84 migrations…
2023-09-10T18:50:47.312+02:00 INFO [PeriodicalsService] Starting 38 periodicals …
2023-09-10T18:50:47.320+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.ThroughputCalculator] periodical in [0s], polling every [1s].
2023-09-10T18:50:47.326+02:00 INFO [Periodicals] Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
2023-09-10T18:50:47.344+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling every [1s].
2023-09-10T18:50:47.347+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [120s], polling every [20s].
2023-09-10T18:50:47.348+02:00 INFO [PeriodicalsService] Not starting [org.graylog2.periodical.ContentPackLoaderPeriodical] periodical. Not configured to run on this node.
2023-09-10T18:50:47.349+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2023-09-10T18:50:47.351+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.IndexBlockCheck] periodical in [0s], polling every [30s].
2023-09-10T18:50:47.353+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.IndexRetentionThread] periodical in [0s], polling every [300s].
2023-09-10T18:50:47.355+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.IndexRotationThread] periodical in [0s], polling every [10s].
2023-09-10T18:50:47.356+02:00 INFO [LookupTableService] Data Adapter watchlist-mongo/6463dfd0128cb1034f01ec9f [@1824526e] STARTING
2023-09-10T18:50:47.357+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.NodePingThread] periodical in [0s], polling every [1s].
2023-09-10T18:50:47.359+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.VersionCheckThread] periodical in [300s], polling every [1800s].
2023-09-10T18:50:47.360+02:00 INFO [LookupTableService] Data Adapter watchlist-mongo/6463dfd0128cb1034f01ec9f [@1824526e] RUNNING
2023-09-10T18:50:47.361+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.ThrottleStateUpdaterThread] periodical in [1s], polling every [1s].
2023-09-10T18:50:47.363+02:00 INFO [LegacyDefaultStreamMigration] Legacy default stream has no connections, no migration needed.
2023-09-10T18:50:47.367+02:00 INFO [Periodicals] Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
2023-09-10T18:50:47.368+02:00 INFO [Periodicals] Starting [org.graylog2.events.ClusterEventCleanupPeriodical] periodical in [0s], polling every [86400s].
2023-09-10T18:50:47.370+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], polling every [3600s].
2023-09-10T18:50:47.375+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.TrafficCounterCalculator] periodical in [0s], polling every [1s].
2023-09-10T18:50:47.378+02:00 INFO [Periodicals] Starting [org.graylog2.indexer.fieldtypes.IndexFieldTypePollerPeriodical] periodical in [0s], polling every [1s].
2023-09-10T18:50:47.384+02:00 INFO [Periodicals] Starting [org.graylog.scheduler.periodicals.ScheduleTriggerCleanUp] periodical in [120s], polling every [86400s].
2023-09-10T18:50:47.385+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.ESVersionCheckPeriodical] periodical in [0s], polling every [30s].
2023-09-10T18:50:47.386+02:00 INFO [Periodicals] Starting [org.graylog2.periodical.UserSessionTerminationPeriodical] periodical, running forever.
2023-09-10T18:50:47.387+02:00 INFO [Periodicals] Starting [org.graylog2.telemetry.cluster.TelemetryClusterInfoPeriodical] periodical in [0s], polling every [540s].
2023-09-10T18:50:47.388+02:00 INFO [Periodicals] Starting [org.graylog.plugins.sidecar.periodical.PurgeExpiredSidecarsThread] periodical in [0s], polling every [600s].
2023-09-10T18:50:47.394+02:00 INFO [Periodicals] Starting [org.graylog.plugins.sidecar.periodical.PurgeExpiredConfigurationUploads] periodical in [0s], polling every [600s].
2023-09-10T18:50:47.396+02:00 INFO [Periodicals] Starting [org.graylog.plugins.views.search.db.SearchesCleanUpJob] periodical in [3600s], polling every [28800s].
2023-09-10T18:50:47.398+02:00 INFO [Periodicals] Starting [org.graylog.events.periodicals.EventNotificationStatusCleanUp] periodical in [120s], polling every [86400s].
2023-09-10T18:50:47.398+02:00 INFO [Periodicals] Starting [org.graylog.enterprise.integrations.azure.external.AzureCheckpointStoreCleanupService] periodical, running forever.
2023-09-10T18:50:47.401+02:00 INFO [Periodicals] Starting [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] periodical in [0s], polling every [3600s].
2023-09-10T18:50:47.402+02:00 INFO [Periodicals] Starting [org.graylog.plugins.forwarder.ForwarderStatePeriodical] periodical in [0s], polling every [15s].
2023-09-10T18:50:47.402+02:00 INFO [Periodicals] Starting [org.graylog.plugins.license.LicenseManagerPeriodical] periodical in [0s], polling every [300s].
2023-09-10T18:50:47.403+02:00 INFO [Periodicals] Starting [org.graylog.plugins.license.LicenseReportPeriodical] periodical in [300s], polling every [3600s].
2023-09-10T18:50:47.403+02:00 INFO [Periodicals] Starting [org.graylog.plugins.archive.deletion.ArchiveDeletionPeriodical] periodical in [0s], polling every [3600s].
2023-09-10T18:50:47.407+02:00 INFO [Periodicals] Starting [org.graylog.plugins.auditlog.mongodb.MongoAuditLogPeriodical] periodical in [0s], polling every [3600s].
2023-09-10T18:50:47.413+02:00 INFO [Periodicals] Starting [org.graylog.plugins.files.CleanupPeriodical] periodical in [0s], polling every [86400s].
2023-09-10T18:50:47.415+02:00 INFO [Periodicals] Starting [org.graylog.plugins.illuminate.status.IlluminateStatusPeriodical] periodical in [0s], polling every [1800s].
2023-09-10T18:50:47.418+02:00 INFO [Periodicals] Starting [org.graylog.plugins.illuminate.illuminatehub.IlluminateHubNewVersionCheckPeriodical] periodical in [0s], polling every [43200s].
2023-09-10T18:50:47.438+02:00 INFO [Periodicals] Starting [org.graylog.plugins.securityapp.anomaly.retrieval.AnomalyRetrievalPeriodical] periodical in [0s], polling every [300s].
2023-09-10T18:50:47.441+02:00 INFO [Periodicals] Starting [org.graylog.plugins.securityapp.anomaly.DetectorStatusSyncPeriodical] periodical in [0s], polling every [60s].
2023-09-10T18:50:47.443+02:00 INFO [Periodicals] Starting [org.graylog.plugins.securityapp.sigma.SigmaRuleStatusSyncPeriodical] periodical in [0s], polling every [900s].
2023-09-10T18:50:47.465+02:00 INFO [LookupTableService] Cache watchlist-cache/6463dfd0128cb1034f01ec9d [@53e5751c] STARTING
2023-09-10T18:50:47.469+02:00 INFO [LookupTableService] Cache watchlist-cache/6463dfd0128cb1034f01ec9d [@53e5751c] RUNNING
2023-09-10T18:50:47.486+02:00 INFO [LookupTableService] Starting lookup table watchlist/6463dfd0128cb1034f01eca1 [@5d2b6e3b] using cache watchlist-cache/6463dfd0128cb1034f01ec9d [@53e5751c], data adapter watchlist-mongo/6463dfd0128cb1034f01ec9f [@1824526e]
2023-09-10T18:50:52.962+02:00 INFO [NetworkListener] Started listener bound to [0.0.0.0:9000]
2023-09-10T18:50:52.965+02:00 INFO [HttpServer] [HttpServer] Started.
2023-09-10T18:50:52.966+02:00 INFO [JerseyService] Started REST API at <0.0.0.0:9000>
2023-09-10T18:50:52.967+02:00 INFO [ServiceManagerListener] Services are healthy
2023-09-10T18:50:52.971+02:00 INFO [InputSetupService] Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
2023-09-10T18:50:52.973+02:00 INFO [ServerBootstrap] Services started, startup times in ms: {BufferSynchronizerService [RUNNING]=0, LocalKafkaMessageQueueWriter [RUNNING]=0, GracefulShutdownService [RUNNING]=1, UserSessionTerminationService [RUNNING]=2, FailureHandlingService [RUNNING]=2, UrlWhitelistService [RUNNING]=2, ProcessingConfigurationManager [RUNNING]=2, ConfigurationEtagService [RUNNING]=2, LocalKafkaMessageQueueReader [RUNNING]=4, GeoIpDbFileChangeMonitorService [RUNNING]=4, DevelopmentDirectoryObserverService [RUNNING]=4, OutputSetupService [RUNNING]=7, InputSetupService [RUNNING]=7, LocalKafkaJournal [RUNNING]=9, PrometheusExporter [RUNNING]=11, EtagService [RUNNING]=11, JobSchedulerService [RUNNING]=13, StreamCacheService [RUNNING]=18, MongoDBProcessingStatusRecorderService [RUNNING]=40, PeriodicalsService [RUNNING]=143, LookupTableService [RUNNING]=171, JerseyService [RUNNING]=5656}
2023-09-10T18:50:53.032+02:00 INFO [ServerBootstrap] Graylog server up and running.
2023-09-10T18:50:53.046+02:00 INFO [InputLauncher] Launching input [Syslog UDP/syslog_udp/6463de53e5d0987e175f92f8] - desired state is RUNNING
2023-09-10T18:50:53.060+02:00 INFO [InputStateListener] Input [Syslog UDP/syslog_udp/6463de53e5d0987e175f92f8] is now STARTING
2023-09-10T18:50:53.597+02:00 INFO [InputStateListener] Input [Syslog UDP/syslog_udp/6463de53e5d0987e175f92f8] is now RUNNING

Regards
Matthias
