1. Describe your incident: I am attempting to migrate from Elasticsearch to OpenSearch 2.19.3 for Graylog 6.3.5 on Ubuntu 24.04. I am following the steps here: Secure Login
My setup is a single VM that runs both Graylog and the search backend. It’s a pretty small deployment, so everything works fine on one machine.
In the instructions, one of the final steps says to run this command:
curl -X GET "http://127.0.0.1:9200/_cluster/health?pretty=true"
When I run that command, I get the following:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "cluster_manager_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "cluster_manager_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
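For completeness, the two fields that matter in that 503 body can be pulled out on the command line. This is just a sketch over the exact response above, using plain grep/cut so nothing beyond standard tools is assumed:

```shell
#!/bin/sh
# The exact 503 body returned by /_cluster/health, inlined for reference.
resp='{"error":{"root_cause":[{"type":"cluster_manager_not_discovered_exception","reason":null}],"type":"cluster_manager_not_discovered_exception","reason":null},"status":503}'

# HTTP-style status embedded in the body.
echo "$resp" | grep -o '"status" *: *[0-9]\+' | grep -o '[0-9]\+'
# -> 503

# Top-level error type (first "type" occurrence in the body).
echo "$resp" | grep -o '"type":"[^"]*"' | head -n1 | cut -d'"' -f4
# -> cluster_manager_not_discovered_exception
```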
I cannot access the Graylog web interface, and I see a number of errors in /var/log/graylog-server/graylog.log and /var/log/opensearch/graylog.log, which I will append at the end of this post.
2. Describe your environment:
- OS Information: Ubuntu 24.04
- Package Version: 6.3.5
- Service logs, configurations, and environment variables:
Syslog:
2025-11-10T13:58:51.827996-06:00 graylog opensearch[921]: org.opensearch.discovery.ClusterManagerNotDiscoveredException: null
2025-11-10T13:58:51.828117-06:00 graylog opensearch[921]: #011at org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction$AsyncSingleAction$1.onTimeout(TransportClusterManagerNodeAction.java:345) [opensearch-2.19.3.jar:2.19.3]
2025-11-10T13:58:51.828206-06:00 graylog opensearch[921]: #011at org.opensearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:394) [opensearch-2.19.3.jar:2.19.3]
2025-11-10T13:58:51.828290-06:00 graylog opensearch[921]: #011at org.opensearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:294) [opensearch-2.19.3.jar:2.19.3]
2025-11-10T13:58:51.828373-06:00 graylog opensearch[921]: #011at org.opensearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:749) [opensearch-2.19.3.jar:2.19.3]
2025-11-10T13:58:51.828504-06:00 graylog opensearch[921]: #011at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:955) [opensearch-2.19.3.jar:2.19.3]
2025-11-10T13:58:51.828588-06:00 graylog opensearch[921]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) [?:?]
2025-11-10T13:58:51.828670-06:00 graylog opensearch[921]: #011at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) [?:?]
2025-11-10T13:58:51.828750-06:00 graylog opensearch[921]: #011at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
2025-11-10T13:58:52.252919-06:00 graylog systemd[1]: graylog-server.service: Main process exited, code=exited, status=1/FAILURE
2025-11-10T13:58:52.253203-06:00 graylog systemd[1]: graylog-server.service: Failed with result 'exit-code'.
2025-11-10T13:58:52.253315-06:00 graylog systemd[1]: graylog-server.service: Consumed 29.603s CPU time.
2025-11-10T13:59:01.322120-06:00 graylog opensearch[921]: [2025-11-10T13:59:01,321][WARN ][o.o.c.c.ClusterFormationFailureHelper] [127.0.0.1] cluster-manager not discovered yet: have discovered [{127.0.0.1}{MNukywr4TTaGkBlEYorLjA}{oGOKggKUT6mOZFX8t1qI6A}{127.0.0.1}{127.0.0.1:9300}{d}{shard_indexing_pressure_enabled=true}]; discovery will continue using [] from hosts providers and [] from last-known cluster state; node term 163, last-accepted version 33477 in term 163
2025-11-10T13:59:02.422115-06:00 graylog systemd[1]: graylog-server.service: Scheduled restart job, restart counter is at 36.
Graylog-Server Log:
2025-11-10T13:59:41.631-06:00 ERROR [ServerBootstrap] Exception while running migrations
org.graylog.shaded.opensearch2.org.opensearch.OpenSearchException: Couldn't read cluster state for reopened indices [graylog_*]
at org.graylog.storage.opensearch2.OpenSearchClient.exceptionFrom(OpenSearchClient.java:211) ~[?:?]
at org.graylog.storage.opensearch2.OpenSearchClient.execute(OpenSearchClient.java:153) ~[?:?]
at org.graylog.storage.opensearch2.PlainJsonApi.perform(PlainJsonApi.java:38) ~[?:?]
at org.graylog.storage.opensearch2.migrations.V20170607164210_MigrateReopenedIndicesToAliasesClusterStateOS2.getForIndices(V20170607164210_MigrateReopenedIndicesToAliasesClusterStateOS2.java:38) ~[?:?]
at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.getReopenedIndices(V20170607164210_MigrateReopenedIndicesToAliases.java:92) ~[graylog.jar:?]
at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.getReopenedIndices(V20170607164210_MigrateReopenedIndicesToAliases.java:142) ~[graylog.jar:?]
at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.lambda$upgrade$0(V20170607164210_MigrateReopenedIndicesToAliases.java:84) ~[graylog.jar:?]
at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) ~[?:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source) ~[?:?]
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Unknown Source) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown Source) ~[?:?]
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown Source) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source) ~[?:?]
at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.upgrade(V20170607164210_MigrateReopenedIndicesToAliases.java:86) ~[graylog.jar:?]
at org.graylog2.bootstrap.ServerBootstrap.lambda$runMigrations$4(ServerBootstrap.java:405) ~[graylog.jar:?]
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(Unknown Source) ~[?:?]
at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source) ~[?:?]
at com.google.common.collect.CollectSpliterators$1WithCharacteristics.lambda$forEachRemaining$0(CollectSpliterators.java:70) ~[graylog.jar:?]
at java.base/java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Unknown Source) ~[?:?]
at com.google.common.collect.CollectSpliterators$1WithCharacteristics.forEachRemaining(CollectSpliterators.java:70) ~[graylog.jar:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown Source) ~[?:?]
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown Source) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source) ~[?:?]
at org.graylog2.bootstrap.ServerBootstrap.runMigrations(ServerBootstrap.java:402) ~[graylog.jar:?]
at org.graylog2.bootstrap.ServerBootstrap.startCommand(ServerBootstrap.java:328) [graylog.jar:?]
at org.graylog2.bootstrap.CmdLineTool.doRun(CmdLineTool.java:382) [graylog.jar:?]
at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:287) [graylog.jar:?]
at org.graylog2.bootstrap.Main.main(Main.java:57) [graylog.jar:?]
Caused by: org.graylog.shaded.opensearch2.org.opensearch.client.ResponseException: method [GET], host [http://127.0.0.1:9200], URI [/_cluster/state/metadata/graylog_*], status line [HTTP/1.1 503 Service Unavailable]
{"error":{"root_cause":[{"type":"cluster_manager_not_discovered_exception","reason":null}],"type":"cluster_manager_not_discovered_exception","reason":null},"status":503}
at org.graylog.shaded.opensearch2.org.opensearch.client.RestClient.convertResponse(RestClient.java:479) ~[?:?]
at org.graylog.shaded.opensearch2.org.opensearch.client.RestClient.performRequest(RestClient.java:371) ~[?:?]
at org.graylog.shaded.opensearch2.org.opensearch.client.RestClient.performRequest(RestClient.java:346) ~[?:?]
at org.graylog.storage.opensearch2.PlainJsonApi.lambda$perform$0(PlainJsonApi.java:40) ~[?:?]
at org.graylog.storage.opensearch2.OpenSearchClient.execute(OpenSearchClient.java:151) ~[?:?]
... 32 more
2025-11-10T13:59:53.645-06:00 INFO [ImmutableFeatureFlagsCollector] Following feature flags are used: {default properties file=[show_security_events_in_pedt=off, data_tiering_cloud=off, preflight_web=on, configurable_value_units=on, setup_mode=on, cloud_inputs=on, investigation_report_by_ai=on, show_executive_dashboard_page=off, composable_index_templates=off, data_node_migration=on, remote_reindex_migration=off, instant_archiving=off, data_warehouse_search=on, threat_coverage=on, external_data_lake_search=off]}
2025-11-10T13:59:54.281-06:00 INFO [CmdLineTool] Loaded plugin: AWS plugins 6.3.6+6e1136b [org.graylog.aws.AWSPlugin]
2025-11-10T13:59:54.281-06:00 INFO [CmdLineTool] Loaded plugin: Integrations 6.3.6+6e1136b [org.graylog.integrations.IntegrationsPlugin]
2025-11-10T13:59:54.282-06:00 INFO [CmdLineTool] Loaded plugin: Threat Intelligence Plugin 6.3.6+6e1136b [org.graylog.plugins.threatintel.ThreatIntelPlugin]
2025-11-10T13:59:54.282-06:00 INFO [CmdLineTool] Loaded plugin: Elasticsearch 7 Support 6.3.6+6e1136b [org.graylog.storage.elasticsearch7.Elasticsearch7Plugin]
2025-11-10T13:59:54.283-06:00 INFO [CmdLineTool] Loaded plugin: OpenSearch 2 Support 6.3.6+6e1136b [org.graylog.storage.opensearch2.OpenSearch2Plugin]
2025-11-10T13:59:54.301-06:00 INFO [CmdLineTool] Running with JVM arguments: -Xms2g -Xmx2g -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -Djdk.tls.acknowledgeCloseNotify=true -Dlog4j2.formatMsgNoLookups=true -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Dgraylog2.installation_source=deb
2025-11-10T13:59:54.492-06:00 INFO [client] MongoClient with metadata {"driver": {"name": "mongo-java-driver|legacy", "version": "5.5.1"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "6.8.0-87-generic"}, "platform": "Java/Eclipse Adoptium/17.0.16+8"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, transportSettings=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@18b8d173, com.mongodb.Jep395RecordCodecProvider@73844119, com.mongodb.KotlinCodecProvider@44f24a20]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], 
maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverMonitoringMode=AUTO, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null, timeoutMS=null}
2025-11-10T13:59:54.496-06:00 INFO [client] MongoClient with metadata {"driver": {"name": "mongo-java-driver", "version": "5.5.1"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "6.8.0-87-generic"}, "platform": "Java/Eclipse Adoptium/17.0.16+8"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, transportSettings=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@18b8d173, com.mongodb.Jep395RecordCodecProvider@73844119, com.mongodb.KotlinCodecProvider@44f24a20]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, 
serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverMonitoringMode=AUTO, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null, timeoutMS=null}
2025-11-10T13:59:54.529-06:00 INFO [cluster] Waiting for server to become available for operation with ID 3. Remaining time: 29993 ms. Selector: ReadPreferenceServerSelector{readPreference=primary}, topology description: {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}].
2025-11-10T13:59:54.530-06:00 INFO [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, cryptd=false, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=21258448, minRoundTripTimeNanos=0}
2025-11-10T13:59:54.592-06:00 INFO [MongoDBPreflightCheck] Connected to MongoDB version 6.0.19
2025-11-10T13:59:55.049-06:00 INFO [client] MongoClient with metadata {"driver": {"name": "mongo-java-driver|legacy", "version": "5.5.1"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "6.8.0-87-generic"}, "platform": "Java/Eclipse Adoptium/17.0.16+8"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, transportSettings=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@18b8d173, com.mongodb.Jep395RecordCodecProvider@73844119, com.mongodb.KotlinCodecProvider@44f24a20]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], 
maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverMonitoringMode=AUTO, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null, timeoutMS=null}
2025-11-10T13:59:55.051-06:00 INFO [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, cryptd=false, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1769180, minRoundTripTimeNanos=0}
2025-11-10T13:59:55.052-06:00 INFO [client] MongoClient with metadata {"driver": {"name": "mongo-java-driver", "version": "5.5.1"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "6.8.0-87-generic"}, "platform": "Java/Eclipse Adoptium/17.0.16+8"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, transportSettings=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@18b8d173, com.mongodb.Jep395RecordCodecProvider@73844119, com.mongodb.KotlinCodecProvider@44f24a20]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, 
serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverMonitoringMode=AUTO, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null, timeoutMS=null}
2025-11-10T13:59:55.218-06:00 INFO [IndexerDiscoveryProvider] No indexer hosts configured, using fallback http://127.0.0.1:9200
2025-11-10T13:59:55.615-06:00 INFO [ServerBootstrap] Running 2 migrations...
2025-11-10T13:59:55.801-06:00 INFO [IndexerDiscoveryProvider] No indexer hosts configured, using fallback http://127.0.0.1:9200
2025-11-10T13:59:55.896-06:00 INFO [SearchDbPreflightCheck] Connected to (Elastic/Open)Search version <OpenSearch:2.19.3>
2025-11-10T13:59:56.096-06:00 INFO [Version] HV000001: Hibernate Validator null
2025-11-10T13:59:58.470-06:00 INFO [InputBufferImpl] Message journal is enabled.
2025-11-10T13:59:58.609-06:00 INFO [LogManager] Loading logs.
2025-11-10T13:59:58.639-06:00 WARN [Log] Found a corrupted index file, /var/lib/graylog-server/journal/messagejournal-0/00000000001189892504.index, deleting and rebuilding index...
2025-11-10T13:59:59.473-06:00 INFO [LogManager] Logs loading complete.
2025-11-10T13:59:59.478-06:00 INFO [LocalKafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal
2025-11-10T13:59:59.500-06:00 INFO [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy <BlockingWaitStrategy>, running 2 parallel message handlers.
2025-11-10T13:59:59.864-06:00 INFO [IndexerDiscoveryProvider] No indexer hosts configured, using fallback http://127.0.0.1:9200
2025-11-10T13:59:59.883-06:00 INFO [ElasticsearchVersionProvider] Elasticsearch cluster is running OpenSearch:2.19.3
2025-11-10T14:00:00.366-06:00 WARN [PipelineResolver] Cannot resolve rule <Reverse DNS Lookup DstIP> referenced by stage #3 within pipeline <6398e5f537f2da45d1a68e06>
2025-11-10T14:00:00.367-06:00 WARN [PipelineResolver] Cannot resolve rule <Reverse DNS Lookup SrcIP> referenced by stage #3 within pipeline <6398e5f537f2da45d1a68e06>
2025-11-10T14:00:00.515-06:00 INFO [OutputBuffer] Initialized OutputBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>, running 2 parallel buffer processors.
2025-11-10T14:00:00.616-06:00 INFO [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>, running 6 parallel buffer processors.
2025-11-10T14:00:01.125-06:00 INFO [ServerBootstrap] Graylog server 6.3.6+6e1136b starting up
2025-11-10T14:00:01.126-06:00 INFO [ServerBootstrap] JRE: Eclipse Adoptium 17.0.16 on Linux 6.8.0-87-generic
2025-11-10T14:00:01.126-06:00 INFO [ServerBootstrap] Deployment: deb
2025-11-10T14:00:01.126-06:00 INFO [ServerBootstrap] OS: Ubuntu 24.04.3 LTS (noble)
2025-11-10T14:00:01.127-06:00 INFO [ServerBootstrap] Arch: amd64
2025-11-10T14:00:01.127-06:00 INFO [ServerBootstrap] Node ID: e8f92f10-d0d7-477d-b22b-ea2cd664862c
2025-11-10T14:00:01.198-06:00 INFO [ServerBootstrap] Running 77 migrations...
OpenSearch Log:
[2025-11-10T14:01:21,337][WARN ][o.o.c.c.ClusterFormationFailureHelper] [127.0.0.1] cluster-manager not discovered yet: have discovered [{127.0.0.1}{MNukywr4TTaGkBlEYorLjA}{oGOKggKUT6mOZFX8t1qI6A}{127.0.0.1}{127.0.0.1:9300}{d}{shard_indexing_pressure_enabled=true}]; discovery will continue using [] from hosts providers and [] from last-known cluster state; node term 163, last-accepted version 33477 in term 163
[2025-11-10T14:01:31,338][WARN ][o.o.c.c.ClusterFormationFailureHelper] [127.0.0.1] cluster-manager not discovered yet: have discovered [{127.0.0.1}{MNukywr4TTaGkBlEYorLjA}{oGOKggKUT6mOZFX8t1qI6A}{127.0.0.1}{127.0.0.1:9300}{d}{shard_indexing_pressure_enabled=true}]; discovery will continue using [] from hosts providers and [] from last-known cluster state; node term 163, last-accepted version 33477 in term 163
[2025-11-10T14:01:41,339][WARN ][o.o.c.c.ClusterFormationFailureHelper] [127.0.0.1] cluster-manager not discovered yet: have discovered [{127.0.0.1}{MNukywr4TTaGkBlEYorLjA}{oGOKggKUT6mOZFX8t1qI6A}{127.0.0.1}{127.0.0.1:9300}{d}{shard_indexing_pressure_enabled=true}]; discovery will continue using [] from hosts providers and [] from last-known cluster state; node term 163, last-accepted version 33477 in term 163
[2025-11-10T14:01:51,340][WARN ][o.o.c.c.ClusterFormationFailureHelper] [127.0.0.1] cluster-manager not discovered yet: have discovered [{127.0.0.1}{MNukywr4TTaGkBlEYorLjA}{oGOKggKUT6mOZFX8t1qI6A}{127.0.0.1}{127.0.0.1:9300}{d}{shard_indexing_pressure_enabled=true}]; discovery will continue using [] from hosts providers and [] from last-known cluster state; node term 163, last-accepted version 33477 in term 163
[2025-11-10T14:02:01,341][WARN ][o.o.c.c.ClusterFormationFailureHelper] [127.0.0.1] cluster-manager not discovered yet: have discovered [{127.0.0.1}{MNukywr4TTaGkBlEYorLjA}{oGOKggKUT6mOZFX8t1qI6A}{127.0.0.1}{127.0.0.1:9300}{d}{shard_indexing_pressure_enabled=true}]; discovery will continue using [] from hosts providers and [] from last-known cluster state; node term 163, last-accepted version 33477 in term 163
OpenSearch Config:
# ======================== OpenSearch Configuration =========================
#
# NOTE: OpenSearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.opensearch.org
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.roles: ['data']
action.auto_create_index: false
cluster.name: opensearch
#discovery.type: single-node
discovery.seed_hosts: 127.0.0.1
#node2,node3
cluster.initial_master_nodes: 127.0.0.1
#,node2,node3
node.name: 127.0.0.1
path.data: /var/lib/opensearch
path.logs: /var/log/opensearch
plugins.security.disabled: true
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# OpenSearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of cluster-manager-eligible nodes:
#
#cluster.initial_cluster_manager_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Remote Store -----------------------------------
# Controls whether cluster imposes index creation only with remote store enabled
# cluster.remote_store.enabled: true
#
# Repository to use for segment upload while enforcing remote store for an index
# node.attr.remote_store.segment.repository: my-repo-1
#
# Repository to use for translog upload while enforcing remote store for an index
# node.attr.remote_store.translog.repository: my-repo-1
#
# ---------------------------------- Experimental Features -----------------------------------
# Gates the visibility of the experimental segment replication features until they are production ready.
#
#opensearch.experimental.feature.segment_replication_experimental.enabled: false
#
# Gates the functionality of a new parameter to the snapshot restore API
# that allows for creation of a new index type that searches a snapshot
# directly in a remote repository without restoring all index data to disk
# ahead of time.
#
#opensearch.experimental.feature.searchable_snapshot.enabled: false
#
#
# Gates the functionality of enabling extensions to work with OpenSearch.
# This feature enables applications to extend features of OpenSearch outside of
# the core.
#
#opensearch.experimental.feature.extensions.enabled: false
#
#
# Gates the optimization of datetime formatters caching along with change in default datetime formatter
# Once there is no observed impact on performance, this feature flag can be removed.
#
#opensearch.experimental.optimization.datetime_formatter_caching.enabled: false
#
# Gates the functionality of enabling Opensearch to use pluggable caches with respective store names via setting.
#
#opensearch.experimental.feature.pluggable.caching.enabled: false
#
# Gates the functionality of star tree index, which improves the performance of search aggregations.
#
#opensearch.experimental.feature.composite_index.star_tree.enabled: false
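One thing I noticed while reviewing the config above: `node.roles: ['data']` makes this node data-only, i.e. not cluster-manager-eligible, and the `discovery.type: single-node` line is commented out. That would be consistent with the "cluster-manager not discovered" warnings, since no node in the cluster can be elected manager. For comparison, a minimal single-node layout is commonly written like this (a sketch of a typical single-node config, not necessarily the exact fix for my setup):

```yaml
# Minimal single-node sketch: the one node is both data node and
# cluster manager, and multi-node discovery is disabled entirely.
cluster.name: opensearch
node.name: 127.0.0.1
network.host: 127.0.0.1
http.port: 9200
discovery.type: single-node          # no seed hosts / initial manager list needed
node.roles: ['cluster_manager', 'data']
path.data: /var/lib/opensearch
path.logs: /var/log/opensearch
plugins.security.disabled: true
action.auto_create_index: false
```

Note that with `discovery.type: single-node` set, the `discovery.seed_hosts` and `cluster.initial_master_nodes` lines should not be set at the same time.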
3. What steps have you already taken to try and solve the problem?
- Restarted the Graylog server
- Reviewed the OpenSearch configuration
- Reviewed the Graylog configuration
4. How can the community help?
- Suggest additional troubleshooting steps or point out configuration errors.