Be a part of Graylog Development - Call for testers!

Dear Graylog Community!

We are thrilled to announce the development of a migration tool to move your data from Elasticsearch/OpenSearch into a Graylog Data Node, and we need YOUR expertise to ensure it’s as robust and user-friendly as possible!

What is the Graylog Data Node and why do we need a migration?

The Graylog Data Node is a management component designed to configure and optimize OpenSearch for use with Graylog. It also enhances the security of the data layer by managing certificates, cluster membership, and the addition of new nodes. The Data Node ensures that the correct version of OpenSearch and its necessary extensions are installed so that Graylog functions properly.

Currently, the Graylog Data Node is available to new users only. We are now working on a migration tool that lets users migrate existing Elasticsearch or OpenSearch clusters into the Graylog Data Node and benefit from its advantages.

The Data Node migration moves data from an existing Graylog instance and its indexer (Elasticsearch or OpenSearch) into the same Graylog instance with a Data Node as the indexer. It transports data from one indexer to another without touching or migrating anything stored in MongoDB: no streams, no dashboards, no pipelines are migrated. It is an upgrade process, not an import mechanism. Its use case is NOT importing data from one Graylog instance into a brand-new instance or any other instance without data.

Why We Need Testers

As you know, the heart of open source lies in collaboration and community input. By volunteering to test this new feature, you will:

  • Help Identify Bugs: Your feedback will help us catch bugs and edge cases that we might have missed.
  • Enhance User Experience: Provide valuable insights on how the migration feels and functions, ensuring it’s intuitive and useful for all users.
  • Contribute to Quality Assurance: Your testing will help maintain the high standards of quality and reliability our community expects.

How You Can Help

If you have a working Graylog home lab or instance with some data in it and are not using the Data Node yet, we appreciate your support in testing the migration! Please be aware that issues may occur, as this is an early alpha test. Therefore, please follow the steps below to lower the risk of data loss or system failure!

  1. Back up your Graylog server setup with a MongoDB dump and a filesystem backup of your configuration files (see the Backup documentation; a sketch of steps 1 and 2 follows this list).
  2. Install the latest alpha version (6.1.0-alpha.6 or newer) of Graylog and the Graylog Data Node. Find them in our repositories or use the tarballs for manual installation.
  3. In the Graylog user interface, start the migration by following the wizard under System/Data Nodes. Depending on which indexer version you have been using, you will be offered the best migration option. Follow the steps and record any feedback or problems!
  4. Share your feedback with us here in the forum!
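
For illustration, here is a minimal sketch of steps 1 and 2 on a Debian/Ubuntu system. It assumes the default MongoDB database name (graylog), configuration under /etc/graylog, a /backup directory, and that your package repository is already pointed at the 6.1 alpha channel; adjust paths and names to your setup:

# Step 1: dump MongoDB and archive the configuration files
mongodump --db graylog --out /backup/graylog-mongodump-$(date +%F)
tar czf /backup/graylog-etc-$(date +%F).tar.gz /etc/graylog

# Step 2: install the Graylog server and Data Node alpha packages
apt-get update
apt-get install graylog-server graylog-datanode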

To share your feedback on the migration experience, please fill out as many details as possible in the following table and post it in this thread or send it as a direct message to me (if you don’t want to share it publicly). Feel free to let us know if you want to participate early; I can give you a heads-up when we plan a new alpha version in case we have fixed some issues before you get started.

Graylog version: e.g. 6.1.0-alpha.6
Details about the Graylog server: e.g. # of servers, running in AWS, Docker, …
OS version of server:
Data Node server (if different):
License type:
Data amount, # of indices:
Elasticsearch/OpenSearch start version:
Migration method:
Result: e.g. success or failure of the migration
Associated GitHub issue (link, in case of problems):
Feedback:

If you run into a problem, we appreciate you submitting a bug report on GitHub (Graylog2/graylog2-server) with as many details as possible to enable us to reproduce the issue.

Resources

To assist you in the testing process, we’ve prepared some resources:

  • Documentation: Comprehensive guides and documentation for the migration are still in the works; however, the Data Node itself is already covered in the official documentation: Graylog Data Node - Getting Started.
  • Support Channels: Use our community forum here to ask questions and share feedback.

Timeline

We aim to complete the testing phase by August 18th. Your timely feedback is crucial to help us meet this goal and roll out the feature to all users.

A Big Thank You

We cannot express enough how valuable your contribution is to this project. By participating in the testing process, you’re helping to shape the future of Graylog and ensuring it remains a reliable and powerful tool for everyone.

Happy Testing!

The Graylog dev Team


I see the graylog-datanode package has OpenSearch 2.12 bundled, but I have OpenSearch 2.13 installed. Should I have any hope of a successful in-place migration?

Thanks. -Steve

Hi Steve!
We have successfully tested the in-place migration from newer OpenSearch versions such as 2.14 in our dev environments, so this should work. Feel free to give it a try and let us know how it went! Thanks a lot for your help!
Martina

Ubuntu 22.04 server with Graylog 6.0.4 and OpenSearch 2.13
Upgraded Graylog to 6.1.0-alpha.6 and installed graylog-datanode. Added password_secret to /etc/graylog/datanode/datanode.conf and changed opensearch_data_location = /var/lib/opensearch. Added the graylog-datanode user to the opensearch group, and ran chmod -R g+w /var/lib/opensearch.
Enabled and started the service; the server is present in System/Data Nodes with a status of unconfigured.
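
In shell terms, the setup was roughly the following (paths per the Debian package defaults; the datanode.conf lines are shown as comments since I edited the file by hand, and the password_secret value is a placeholder):

# /etc/graylog/datanode/datanode.conf (edited manually):
#   password_secret = <same value as in /etc/graylog/server/server.conf>
#   opensearch_data_location = /var/lib/opensearch

# let the graylog-datanode user write to the existing OpenSearch data
usermod -aG opensearch graylog-datanode
chmod -R g+w /var/lib/opensearch

# enable and start the Data Node
systemctl enable --now graylog-datanode
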
Data Nodes Migration - Migration steps - In-Place migration: when I click ‘Run directory compatibility check’, I get this error:

(screenshot of the error shown in the web UI)

Found this repeated in /var/log/graylog-server/server.log:

2024-08-06T12:53:08.815Z ERROR [AnyExceptionClassMapper] Unhandled exception in REST resource
java.lang.RuntimeException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
 at [Source: (okio.RealBufferedSource$inputStream$1); line: 1, column: 1]
        at org.graylog2.rest.resources.datanodes.DatanodeRestApiProxy.lambda$runOnAllNodes$0(DatanodeRestApiProxy.java:94) ~[graylog.jar:?]
        at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Unknown Source) ~[?:?]
        at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(Unknown Source) ~[?:?]
        at java.base/java.util.HashMap$ValueSpliterator.forEachRemaining(Unknown Source) ~[?:?]
        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
        at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(Unknown Source) ~[?:?]
        at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(Unknown Source) ~[?:?]
        at java.base/java.util.stream.AbstractTask.compute(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.CountedCompleter.exec(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.ForkJoinTask.invoke(Unknown Source) ~[?:?]
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateParallel(Unknown Source) ~[?:?]
        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
        at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source) ~[?:?]
        at org.graylog2.rest.resources.datanodes.DatanodeRestApiProxy.runOnAllNodes(DatanodeRestApiProxy.java:89) ~[graylog.jar:?]
        at org.graylog2.rest.resources.datanodes.DatanodeRestApiProxy.request(DatanodeRestApiProxy.java:121) ~[graylog.jar:?]
        at org.graylog2.rest.resources.datanodes.DataNodeRestApiProxyResource.request(DataNodeRestApiProxyResource.java:126) ~[graylog.jar:?]
        at org.graylog2.rest.resources.datanodes.DataNodeRestApiProxyResource.requestGet(DataNodeRestApiProxyResource.java:81) ~[graylog.jar:?]
        at jdk.internal.reflect.GeneratedMethodAccessor336.invoke(Unknown Source) ~[?:?]
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
        at java.base/java.lang.reflect.Method.invoke(Unknown Source) ~[?:?]
        at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) ~[graylog.jar:?]
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:146) ~[graylog.jar:?]
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:189) ~[graylog.jar:?]
        at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:176) ~[graylog.jar:?]
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:93) ~[graylog.jar:?]
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:478) ~[graylog.jar:?]
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:400) ~[graylog.jar:?]
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81) ~[graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:274) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:292) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:274) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:244) [graylog.jar:?]
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:266) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:253) [graylog.jar:?]
        at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:696) [graylog.jar:?]
        at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:367) [graylog.jar:?]
        at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:190) [graylog.jar:?]
        at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:212) [graylog.jar:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
        at java.base/java.lang.Thread.run(Unknown Source) [?:?]
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
 at [Source: (okio.RealBufferedSource$inputStream$1); line: 1, column: 1]
        at com.fasterxml.jackson.core.JsonParser._constructReadException(JsonParser.java:2648) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:685) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2750) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:867) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:753) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4992) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4898) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3885) ~[graylog.jar:?]
        at org.graylog2.rest.resources.datanodes.DatanodeRestApiProxy.lambda$runOnAllNodes$0(DatanodeRestApiProxy.java:92) ~[graylog.jar:?]
        ... 44 more

Not sure if this is a bug, or I did something wrong.
Thanks. -Steve

Hi Steve, thanks for reporting this. Can you please check the data node logs in /var/log/graylog-datanode? The error occurs in the data node when checking the directory.
Thanks, Matthias

Matthias - no error messages in /var/log/graylog-datanode/datanode.log, only startup messages:

2024-08-08T11:37:28.656Z INFO  [CmdLineTool] Running with JVM arguments: -Dlog4j.configurationFile=file:///etc/graylog/datanode/log4j2.xml -Xms1g -Xmx1g -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+UnlockExperimentalVMOptions -Djdk.tls.acknowledgeCloseNotify=true
2024-08-08T11:37:29.539Z INFO  [client] MongoClient with metadata {"driver": {"name": "mongo-java-driver|legacy", "version": "5.1.2"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.15.0-117-generic"}, "platform": "Java/Eclipse Adoptium/17.0.12+7"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, transportSettings=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@260a3a5e, com.mongodb.Jep395RecordCodecProvider@49206065, com.mongodb.KotlinCodecProvider@3c0bbc9f]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverMonitoringMode=AUTO, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null}
2024-08-08T11:37:29.550Z INFO  [client] MongoClient with metadata {"driver": {"name": "mongo-java-driver", "version": "5.1.2"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.15.0-117-generic"}, "platform": "Java/Eclipse Adoptium/17.0.12+7"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, transportSettings=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@260a3a5e, com.mongodb.Jep395RecordCodecProvider@49206065, com.mongodb.KotlinCodecProvider@3c0bbc9f]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverMonitoringMode=AUTO, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null}
2024-08-08T11:37:29.694Z INFO  [cluster] Waiting for server to become available for operation with ID 1. Remaining time: 30000 ms. Selector: ReadPreferenceServerSelector{readPreference=primary}, topology description: {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}].
2024-08-08T11:37:29.713Z INFO  [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=56761470}
2024-08-08T11:37:29.895Z INFO  [MongoDBPreflightCheck] Connected to MongoDB version 6.0.16
2024-08-08T11:37:30.227Z INFO  [DatanodeDirectories] Opensearch of the node 3c11b4da-6a86-4716-8e4a-d31664424963 uses following directories as its storage: DatanodeDirectories{dataTargetDir='/var/lib/opensearch', logsTargetDir='/var/log/graylog-datanode/opensearch', configurationSourceDir='Optional[/etc/graylog/datanode]', configurationTargetDir='/var/lib/graylog-datanode/opensearch/config', opensearchProcessConfigurationDir='/var/lib/graylog-datanode/opensearch/config/opensearch'}
2024-08-08T11:37:30.467Z INFO  [client] MongoClient with metadata {"driver": {"name": "mongo-java-driver|legacy", "version": "5.1.2"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.15.0-117-generic"}, "platform": "Java/Eclipse Adoptium/17.0.12+7"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, transportSettings=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@260a3a5e, com.mongodb.Jep395RecordCodecProvider@49206065, com.mongodb.KotlinCodecProvider@3c0bbc9f]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverMonitoringMode=AUTO, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null}
2024-08-08T11:37:30.474Z INFO  [client] MongoClient with metadata {"driver": {"name": "mongo-java-driver", "version": "5.1.2"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.15.0-117-generic"}, "platform": "Java/Eclipse Adoptium/17.0.12+7"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, transportSettings=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@260a3a5e, com.mongodb.Jep395RecordCodecProvider@49206065, com.mongodb.KotlinCodecProvider@3c0bbc9f]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=1000, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverMonitoringMode=AUTO, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null}
2024-08-08T11:37:30.479Z INFO  [cluster] Waiting for server to become available for operation with ID 10. Remaining time: 30000 ms. Selector: ReadPreferenceServerSelector{readPreference=primary}, topology description: {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}].
2024-08-08T11:37:30.479Z INFO  [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=2309691}
2024-08-08T11:37:30.494Z INFO  [OpensearchConfigSync] Directory used for Opensearch process configuration is /var/lib/graylog-datanode/opensearch/config/opensearch
2024-08-08T11:37:30.651Z INFO  [OpensearchConfigSync] Synchronizing Opensearch configuration
2024-08-08T11:37:30.680Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/log4j2.properties
2024-08-08T11:37:30.681Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/jvm.options
2024-08-08T11:37:30.681Z INFO  [FullDirSync] Deleting obsolete directory /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-observability
2024-08-08T11:37:30.762Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-observability/observability.yml
2024-08-08T11:37:30.765Z INFO  [FullDirSync] Deleting obsolete directory /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security
2024-08-08T11:37:30.769Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/roles.yml
2024-08-08T11:37:30.770Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/roles_mapping.yml
2024-08-08T11:37:30.771Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/audit.yml
2024-08-08T11:37:30.772Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/whitelist.yml
2024-08-08T11:37:30.773Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/nodes_dn.yml
2024-08-08T11:37:30.773Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/opensearch.yml.example
2024-08-08T11:37:30.774Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/internal_users.yml
2024-08-08T11:37:30.774Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/tenants.yml
2024-08-08T11:37:30.775Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/allowlist.yml
2024-08-08T11:37:30.775Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/action_groups.yml
2024-08-08T11:37:30.776Z INFO  [FullDirSync] Deleting obsolete file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/config.yml
2024-08-08T11:37:30.779Z INFO  [FullDirSync] Synchronizing directory /var/lib/graylog-datanode/opensearch/config/opensearch
2024-08-08T11:37:30.783Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/log4j2.properties
2024-08-08T11:37:30.784Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/jvm.options
2024-08-08T11:37:30.785Z INFO  [FullDirSync] Synchronizing directory /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-observability
2024-08-08T11:37:30.786Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-observability/observability.yml
2024-08-08T11:37:30.787Z INFO  [FullDirSync] Synchronizing directory /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security
2024-08-08T11:37:30.788Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/whitelist.yml
2024-08-08T11:37:30.789Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/opensearch.yml.example
2024-08-08T11:37:30.792Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/allowlist.yml
2024-08-08T11:37:30.793Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/roles_mapping.yml
2024-08-08T11:37:30.794Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/internal_users.yml
2024-08-08T11:37:30.798Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/action_groups.yml
2024-08-08T11:37:30.799Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/audit.yml
2024-08-08T11:37:30.802Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/config.yml
2024-08-08T11:37:30.803Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/tenants.yml
2024-08-08T11:37:30.806Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/nodes_dn.yml
2024-08-08T11:37:30.807Z INFO  [FullDirSync] Synchronizing file /var/lib/graylog-datanode/opensearch/config/opensearch/opensearch-security/roles.yml
2024-08-08T11:37:30.817Z INFO  [OpensearchDistributionProvider] Found following opensearch distributions: [/usr/share/graylog-datanode/dist/opensearch-2.12.0-linux-x64]
2024-08-08T11:37:32.077Z INFO  [DatanodeDirectories] Opensearch of the node 3c11b4da-6a86-4716-8e4a-d31664424963 uses following directories as its storage: DatanodeDirectories{dataTargetDir='/var/lib/opensearch', logsTargetDir='/var/log/graylog-datanode/opensearch', configurationSourceDir='Optional[/etc/graylog/datanode]', configurationTargetDir='/var/lib/graylog-datanode/opensearch/config', opensearchProcessConfigurationDir='/var/lib/graylog-datanode/opensearch/config/opensearch'}
2024-08-08T11:37:36.099Z INFO  [DbEntitiesScanner] 16 entities have been scanned and added to DB Entity Catalog, it took 2.921 s
2024-08-08T11:37:36.253Z INFO  [ServerBootstrap] Graylog datanode 6.1.0-alpha.6+2a1ff8c starting up
2024-08-08T11:37:36.255Z INFO  [ServerBootstrap] JRE: Eclipse Adoptium 17.0.12 on Linux 5.15.0-117-generic
2024-08-08T11:37:36.256Z INFO  [ServerBootstrap] Deployment: deb
2024-08-08T11:37:36.256Z INFO  [ServerBootstrap] OS: Ubuntu 22.04.4 LTS (jammy)
2024-08-08T11:37:36.257Z INFO  [ServerBootstrap] Arch: amd64
2024-08-08T11:37:36.427Z INFO  [PeriodicalsService] Starting 6 periodicals ...
2024-08-08T11:37:36.428Z INFO  [PeriodicalsService] Delaying start of 1 periodicals until this node becomes leader ...
2024-08-08T11:37:36.429Z INFO  [Periodicals] Starting [org.graylog.datanode.periodicals.OpensearchNodeHeartbeat] periodical in [0s], polling every [10s].
2024-08-08T11:37:36.431Z INFO  [Periodicals] Starting [org.graylog.datanode.bootstrap.preflight.DataNodeCertRenewalPeriodical] periodical in [0s], polling every [1800s].
2024-08-08T11:37:36.433Z INFO  [Periodicals] Starting [org.graylog.datanode.bootstrap.preflight.DataNodeConfigurationPeriodical] periodical in [0s], polling every [2s].
2024-08-08T11:37:36.435Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
2024-08-08T11:37:36.502Z INFO  [OpensearchDistributionProvider] Found following opensearch distributions: [/usr/share/graylog-datanode/dist/opensearch-2.12.0-linux-x64]
2024-08-08T11:37:36.506Z INFO  [Periodicals] Starting [org.graylog.datanode.periodicals.NodePingPeriodical] periodical in [0s], polling every [1s].
2024-08-08T11:37:36.535Z INFO  [JerseyService] Starting Data node REST API
2024-08-08T11:37:36.540Z INFO  [OpensearchProcessService]

========================================================================================================
It seems you are starting Data node for the first time. The current configuration is not sufficient to
start the indexer process because a security configuration is missing. You have to either provide http
and transport SSL certificates or use the Graylog preflight interface to configure this Data node remotely.
========================================================================================================

2024-08-08T11:37:36.540Z INFO  [Periodicals] Starting [org.graylog.datanode.periodicals.MetricsCollector] periodical in [0s], polling every [60s].
2024-08-08T11:37:36.550Z INFO  [ServerBootstrap] Services started, startup times in ms: {GracefulShutdownService [RUNNING]=2, OpensearchProcessService [RUNNING]=4, OpensearchConfigurationService [RUNNING]=104, PeriodicalsService [RUNNING]=138}
2024-08-08T11:37:36.550Z INFO  [ServerBootstrap] Graylog DataNode datanode up and running.
2024-08-08T11:37:37.713Z INFO  [Version] HV000001: Hibernate Validator 8.0.1.Final
2024-08-08T11:37:38.443Z INFO  [NetworkListener] Started listener bound to [0.0.0.0:8999]
2024-08-08T11:37:38.445Z INFO  [HttpServer] [HttpServer] Started.
2024-08-08T11:37:38.445Z INFO  [JerseyService] Started REST API at <0.0.0.0:8999>

The previously posted error appears in server.log as soon as I select the In-Place migration type; clicking Run directory compatibility check only pops the error in the web UI, and nothing lands in the logs.

Thanks. -Steve

Hi Steve, thanks for hanging in there.
We have figured out that only certain errors are handed over to Graylog from the directory compatibility check, and it seems the one you are experiencing was not on our list of expected errors.
We will extend the error handling on that part, so thank you already for the valuable input (created Improve data node directory compatibility check error response · Issue #20138 · Graylog2/graylog2-server · GitHub for that).
In the meantime, to work around your problem, could you please go to the API browser (found in System → Nodes) and call the directory compatibility check directly (see the values in the screenshot)?

(screenshot of the API browser request values)
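For reference, this is roughly the same request from the command line. A minimal sketch, assuming the Graylog API listens on port 9000 with a self-signed certificate and that you authenticate as an admin user (the exact URL also appears in Steve’s reply below):

# proxy a GET through Graylog to the data node's directory compatibility check
curl -k -u admin 'https://x.x.x.x:9000/api/datanodes/any/rest/indices-directory%2Fcompatibility'
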
Thanks,
Matthias

Matthias - here’s what I get from the API browser:

Request URL
https://x.x.x.x:9000/api/datanodes/any/rest/indices-directory%2Fcompatibility
Response Body
 
Internal Server Error
Failed to open index for read
     1: org.graylog.datanode.filesystem.index.indexreader.ShardStatsParserImpl.read(ShardStatsParserImpl.java:45)
     2: org.graylog.datanode.filesystem.index.IndicesDirectoryParser.getShardInformation(IndicesDirectoryParser.java:124)
     3: java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
     4: java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
     5: java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
     6: java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
     7: java.base/java.util.Iterator.forEachRemaining(Unknown Source)
     8: java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Unknown Source)
     9: java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
    10: java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
        ... 54 more
Root Cause: org.graylog.shaded.opensearch2.org.apache.lucene.index.IndexFormatTooNewException: Format version is not supported (resource BufferedChecksumIndexInput(ByteBufferIndexInput(path="/var/lib/opensearch/nodes/0/indices/RM-mqxZSRn-uJpzKnLD74A/0/index/_ai.fnm"))): 1 (needs to be between 0 and 0)
     1: org.graylog.shaded.opensearch2.org.apache.lucene.codecs.CodecUtil.checkHeaderNoMagic(CodecUtil.java:214)
     2: org.graylog.shaded.opensearch2.org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:194)
     3: org.graylog.shaded.opensearch2.org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:254)
     4: org.graylog.shaded.opensearch2.org.apache.lucene.codecs.lucene94.Lucene94FieldInfosFormat.read(Lucene94FieldInfosFormat.java:134)
     5: org.graylog.shaded.opensearch2.org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:112)
     6: org.graylog.shaded.opensearch2.org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:96)
     7: org.graylog.shaded.opensearch2.org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:94)
     8: org.graylog.shaded.opensearch2.org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:77)
     9: org.graylog.shaded.opensearch2.org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:820)
    10: org.graylog.shaded.opensearch2.org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:109)
        ... 66 more
Please see the log for more detail.
Grizzly 4.0.2
Response Code
500
Response Headers
{"Connection":"close","Content-Encoding":"gzip","Content-Type":"text/html;charset=ISO-8859-1","Transfer-Encoding":"chunked","X-Content-Type-Options":"nosniff","X-Frame-Options":"DENY","X-Graylog-Node-Id":"a63c2e2a-1390-4011-a3cb-26b338ceeed1"}
Response Content Type 
application/json

Looked in server and datanode log files for more details, but nothing new landed there.

Seeing IndexFormatTooNewException takes me back to my earlier post: is my running version of OpenSearch too new for the Data Node migration?
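
For reference, one way to check which version wrote the on-disk indices is to query the index settings (a sketch, assuming the original OpenSearch is still answering on localhost:9200; index.version.created is an internal version code):

# show the internal version-created code for every index
curl -s 'http://localhost:9200/_all/_settings?filter_path=*.settings.index.version.created&pretty'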

Thanks. -Steve

I see Graylog 6.1.0-alpha.7 was recently published, and it appears to bundle OpenSearch 2.15.0 with the Data Node. Tried the migration again; it passed the directory compatibility check this time, and I was able to complete the migration.


You are quick; I was about to tell you this 🙂
Awesome, thanks a lot for trying again! So everything went smoothly now? Can you tell me the details of your setup (as mentioned in the table above)?
Let me know if you have any feedback on the migration process. And sorry about the issue: we have migrated “down” successfully before, but apparently it depends on the version (the IndexFormatTooNewException indicates that the bundled OpenSearch ships a Lucene version too old to read segments written by a newer one). Thanks to your feedback, we will now discourage downgrades for the in-place migration.

This was my small test server; here’s the info:

Graylog version: 6.1.0-alpha.7
Details about the Graylog server: single server on VMware
OS version of server: Ubuntu 22.04
Data Node server (if different): 6.1.0-alpha.7
License type: Open
Data amount, # of indices: 31G, 24 indices
OpenSearch start version: 2.13
Migration method: In-Place
Result: Success

Thank you so much, Steve! This is really helpful.
If anyone else wants to try it out, we appreciate the help and feedback!
