Integrating Wazuh indexer with Graylog


1. Describe your incident:

I am integrating Graylog with the Wazuh indexer.
The indexer is working as expected, but Graylog is failing to connect to it (see logs below).

2. Describe your environment:

  • OS Information:
    Static hostname: soclab
    Icon name: computer-vm
    Chassis: vm
    Machine ID: b05f434d05e54eb08a2452dfc2b2d5a4
    Boot ID: 23c2609e1cf142bf9e2cc033ca7edecd
    Virtualization: vmware
    Operating System: Ubuntu 20.04.5 LTS
    Kernel: Linux 5.4.0-131-generic
    Architecture: x86-64

  • Package Version:

  • Service logs, configurations, and environment variables:

3. What steps have you already taken to try and solve the problem?

Here is the log from Graylog:

2022-11-06T22:23:19.436Z INFO [ImmutableFeatureFlagsCollector] Following feature flags are used: {}
2022-11-06T22:23:20.672Z INFO [CmdLineTool] Loaded plugin: AWS plugins 4.3.9 []
2022-11-06T22:23:20.673Z INFO [CmdLineTool] Loaded plugin: Integrations 4.3.9 [org.graylog.integrations.IntegrationsPlugin]
2022-11-06T22:23:20.675Z INFO [CmdLineTool] Loaded plugin: Collector 4.3.9 [org.graylog.plugins.collector.CollectorPlugin]
2022-11-06T22:23:20.676Z INFO [CmdLineTool] Loaded plugin: Threat Intelligence Plugin 4.3.9 [org.graylog.plugins.threatintel.ThreatIntelPlugin]
2022-11-06T22:23:20.677Z INFO [CmdLineTool] Loaded plugin: Elasticsearch 6 Support 4.3.9+e2c6648 []
2022-11-06T22:23:20.677Z INFO [CmdLineTool] Loaded plugin: Elasticsearch 7 Support 4.3.9+e2c6648 []
2022-11-06T22:23:20.713Z INFO [CmdLineTool] Running with JVM arguments: -Xms1g -Xmx1g -XX:NewRatio=1 -XX:+ResizeTLAB -XX:-OmitStackTraceInFastThrow -Djdk.tls.acknowledgeCloseNotify=true -Dlog4j2.formatMsgNoLookups=true -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb
2022-11-06T22:23:21.285Z INFO [PreflightCheckService] Skipping preflight checks
2022-11-06T22:23:21.427Z INFO [Version] HV000001: Hibernate Validator null
2022-11-06T22:23:24.796Z INFO [InputBufferImpl] Message journal is enabled.
2022-11-06T22:23:24.825Z INFO [NodeId] Node ID: a2a102fe-958d-4e68-93f9-c8d039c2069a
2022-11-06T22:23:25.121Z INFO [LogManager] Loading logs.
2022-11-06T22:23:25.189Z WARN [Log] Found a corrupted index file, /var/lib/graylog-server/journal/messagejournal-0/00000000000000000000.index, deleting and rebuilding index…
2022-11-06T22:23:25.241Z INFO [LogManager] Logs loading complete.
2022-11-06T22:23:25.245Z INFO [LocalKafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal
2022-11-06T22:23:25.288Z INFO [cluster] Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout=‘30000 ms’, maxWaitQueueSize=5000}
2022-11-06T22:23:25.345Z INFO [cluster] Cluster description not yet available. Waiting for 30000 ms before timing out
2022-11-06T22:23:25.385Z INFO [connection] Opened connection [connectionId{localValue:1, serverValue:186}] to localhost:27017
2022-11-06T22:23:25.394Z INFO [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 4, 17]}, minWireVersion=0, maxWireVersion=9, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=4006426}
2022-11-06T22:23:25.420Z INFO [connection] Opened connection [connectionId{localValue:2, serverValue:187}] to localhost:27017
2022-11-06T22:23:25.689Z INFO [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy , running 2 parallel message handlers.
2022-11-06T22:23:26.065Z INFO [ElasticsearchVersionProvider] Elasticsearch version set to Elasticsearch:7.0.0 - disabling version probe.
2022-11-06T22:23:26.986Z INFO [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy .
2022-11-06T22:23:27.095Z INFO [OutputBuffer] Initialized OutputBuffer with ring size <65536> and wait strategy .
2022-11-06T22:23:27.107Z INFO [connection] Opened connection [connectionId{localValue:3, serverValue:188}] to localhost:27017
2022-11-06T22:23:27.135Z INFO [connection] Opened connection [connectionId{localValue:4, serverValue:189}] to localhost:27017
2022-11-06T22:23:27.219Z INFO [connection] Opened connection [connectionId{localValue:5, serverValue:190}] to localhost:27017
2022-11-06T22:23:27.259Z INFO [connection] Opened connection [connectionId{localValue:6, serverValue:191}] to localhost:27017
2022-11-06T22:23:27.334Z INFO [connection] Opened connection [connectionId{localValue:7, serverValue:192}] to localhost:27017
2022-11-06T22:23:28.830Z INFO [ServerBootstrap] Graylog server 4.3.9+e2c6648 starting up
2022-11-06T22:23:28.843Z INFO [ServerBootstrap] JRE: Ubuntu 11.0.16 on Linux 5.4.0-131-generic
2022-11-06T22:23:28.843Z INFO [ServerBootstrap] Deployment: deb
2022-11-06T22:23:28.844Z INFO [ServerBootstrap] OS: Ubuntu 20.04.5 LTS (focal)
2022-11-06T22:23:28.844Z INFO [ServerBootstrap] Arch: amd64
2022-11-06T22:23:29.057Z INFO [ServerBootstrap] Running 46 migrations…
2022-11-06T22:23:30.501Z WARN [ServerBootstrap] Exception while running migrations Unable to retrieve cluster information
at ~[?:?]
at ~[?:?]
at ~[?:?]
at ~[?:?]
at org.graylog2.indexer.cluster.Node.getVersion( ~[graylog.jar:?]
at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.getReopenedIndices( ~[graylog.jar:?]
at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.getReopenedIndices( ~[graylog.jar:?]
at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.lambda$upgrade$0( ~[graylog.jar:?]
at$7$1.accept( ~[?:?]
at$3$1.accept( ~[?:?]
at java.util.Collections$2.tryAdvance( ~[?:?]
at java.util.Collections$2.forEachRemaining( ~[?:?]
at ~[?:?]
at ~[?:?]
at$ForEachOp.evaluateSequential( ~[?:?]
at$ForEachOp$OfRef.evaluateSequential( ~[?:?]
at ~[?:?]
at ~[?:?]
at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.upgrade( ~[graylog.jar:?]
at org.graylog2.bootstrap.ServerBootstrap.lambda$runMigrations$0( ~[graylog.jar:?]
at ~[graylog.jar:?]
at ~[graylog.jar:?]
at org.graylog2.bootstrap.ServerBootstrap.runMigrations( ~[graylog.jar:?]
at org.graylog2.bootstrap.ServerBootstrap.startCommand( [graylog.jar:?]
at [graylog.jar:?]
at org.graylog2.bootstrap.Main.main( [graylog.jar:?]
Caused by: Host name ‘’ does not match the certificate subject provided by the peer (, OU=Wazuh, O=Wazuh, L=California, C=US)
at ~[?:?]
at ~[?:?]
at ~[?:?]
at$perform$0( ~[?:?]
at ~[?:?]
… 24 more
Caused by: Host name ‘’ does not match the certificate subject provided by the peer (, OU=Wazuh, O=Wazuh, L=California, C=US)
at ~[?:?]
at$1.verify( ~[?:?]
at ~[?:?]
at ~[?:?]
at ~[?:?]
at ~[?:?]
at ~[?:?]
at ~[?:?]
at ~[?:?]
at ~[?:?]
at$ ~[?:?]
at ~[?:?]

4. How can the community help?


Any takers or suggestions on this issue? Or am I missing any config?

Hey @ramindia

I took a quick look over your logs and found these two issues I think are important:

  1. Unable to retrieve cluster information

I think you have a Graylog/Elasticsearch connection issue. Double-check your config files on both Elasticsearch and Graylog. You may want to go back over the docs again.

  2. Host name ‘’ does not match the certificate subject provided by the peer

This is a certificate issue. Since I believe you posted your Graylog configuration file, your HTTPS certificates may be incorrect.

And last, if you post configurations or log files here, please use the markdown code formatting at the top of the text box before hitting the reply button :+1: it makes it easier for us to read and helps you quicker.

Appreciate your support. Yes, I do see those logs. I use the same certs and am able to connect to the wazuh-indexer with them.

Let me revisit the steps and get back to you with the config. (my reply)

OK, now I have revisited all the documents and still no luck.

I followed the document below and added the certificate to the keystore:

root@soclab:/etc/graylog/server/certs# keytool -importcert -keystore /etc/graylog/server/certs/cacerts.jks -storepass changeit -alias graylog-self-signed -file /etc/graylog/server/certs/
Warning: use -cacerts option to access cacerts keystore
Owner:, OU=Wazuh, O=Wazuh, L=California, C=US
Issuer: L=California, O=Wazuh, OU=Wazuh
Serial number: 7c10faa8903c1051d8687f3d507a40653c84dea
Valid from: Sun Nov 06 19:00:00 UTC 2022 until: Wed Nov 03 19:00:00 UTC 2032
Certificate fingerprints:
         SHA1: 11:03:81:6F:E1:BA:85:38:6C:32:62:91:9E:C7:C0:17:30:9F:4D:01
         SHA256: E5:99:40:D4:A4:E4:8D:30:31:07:18:7F:11:BA:3F:97:3D:D6:27:31:C7:0A:B5:6D:5D:C0:F6:D6:1F:1D:01:FA
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3


#1: ObjectId: Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 0A 8F 31 53 55 7D E7 0C   5A 63 93 C3 75 38 52 71  ..1SU...Zc..u8Rq
0010: 58 A2 1F 92                                        X...

#2: ObjectId: Criticality=false
  PathLen: undefined

#3: ObjectId: Criticality=false
KeyUsage [

#4: ObjectId: Criticality=false
SubjectAlternativeName [

Trust this certificate? [no]:  yes
Certificate was added to keystore

I can see the cert in the keystore:
# keytool -keystore /etc/graylog/server/certs/cacerts.jks -storepass changeit  -list | grep gray -A1
Warning: use -cacerts option to access cacerts keystore
graylog-self-signed, Nov 8, 2022, trustedCertEntry,
Certificate fingerprint (SHA-256): E5:99:40:D4:A4:E4:8D:30:31:07:18:7F:11:BA:3F:97:3D:D6:27:31:C7:0A:B5:6D:5D:C0:F6:D6:1F:1D:01:FA

I am able to connect to the wazuh-indexer with curl:

# curl -k -u graylog:mypassword
ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
                    51          47   7    2.52    2.17     1.18 dimr      *

# curl -k -u graylog:mypassword
{
  "name" : "",
  "cluster_name" : "soclab-cluster",
  "cluster_uuid" : "UMUvHE-oSmevWd3aTLiF6w",
  "version" : {
    "number" : "7.10.2",
    "build_type" : "rpm",
    "build_hash" : "e505b10357c03ae8d26d675172402f2f2144ef0f",
    "build_date" : "2022-01-14T03:38:06.881862Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project:"
}

Graylog server config
# Path to the java executable.

# Default Java options for heap and garbage collection.
GRAYLOG_SERVER_JAVA_OPTS="-Xms4g -Xmx4g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:-OmitStackTraceInFastThrow"

# Avoid endless loop with some TLSv1.3 implementations.
GRAYLOG_SERVER_JAVA_OPTS="$GRAYLOG_SERVER_JAVA_OPTS -Djdk.tls.acknowledgeCloseNotify=true"

# Fix for log4j CVE-2021-44228

# Pass some extra args to graylog-server. (i.e. "-d" to enable debug mode)

# Program that will be used to wrap the graylog-server command. Useful to
# support programs like authbind.

Not able to attach the config. (The config has the graylog user; that also works.)
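For reference, my understanding is that importing the CA into that JKS file only helps if the Graylog JVM is actually told to use it as the trust store. A sketch of what I believe that wiring looks like in /etc/default/graylog-server (the and properties are standard JVM TLS settings; the path and the changeit password are from my keytool commands above):

```shell
# /etc/default/graylog-server (excerpt)
# Point the Graylog JVM at the keystore holding the wazuh-indexer CA
GRAYLOG_SERVER_JAVA_OPTS="$GRAYLOG_SERVER_JAVA_OPTS"
GRAYLOG_SERVER_JAVA_OPTS="$GRAYLOG_SERVER_JAVA_OPTS"
```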

Hello @ramindia

I took a look at your GL Config file.

If you are using HTTPS for cURL, that indicates to me that Graylog is set up for HTTPS. Then the GL configuration file you posted is incorrect, or are you using a reverse proxy?

I also see you copied the Java default keystore “cacerts”. What I don’t get is how you’re able to get a cURL response on port 9200 with the settings you have in the Graylog config. :thinking:

What happens when you cURL it? Just curious.

I’m not sure if you know this, but that document makes configuration changes to Elasticsearch and NOT to Graylog; this may be a problem. In an ELK stack, yeah, I would make all my configs in Elasticsearch, BUT Graylog works the other way around. Yes, some configuration can be made in ES, but I would definitely ensure Graylog is able to connect to ES.

For example:

I have a lab GL server using HTTPS; this would be my configuration for Graylog. I don’t use localhost because I need to reach ES over the network.

Graylog Configuration

[root@graylog streams]# cat /etc/graylog/server/server.conf  | egrep -v "^\s*(#|$)"
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = epOqmLi7r7CdZxl76QOQxr8b
root_password_sha2 = 5e884898da28047151d0e56f8dc62927736
root_email = ""
root_timezone = America/Chicago
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address =
http_publish_uri =
http_enable_cors = true
http_enable_tls = true
http_tls_cert_file = /etc/ssl/certs/graylog/graylog-certificate.pem
http_tls_key_file = /etc/ssl/certs/graylog/graylog-key.pem
http_tls_key_password = secret
elasticsearch_hosts =
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = true
allow_highlighting = false
elasticsearch_analyzer = standard
elasticsearch_index_optimization_timeout = 1h
output_batch_size = 5000

Elasticsearch/Opensearch Configuration

[root@graylog streams]# cat /etc/elasticsearch/elasticsearch.yml | egrep -v "^\s*(#|$)" graylog /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
action.auto_create_index: false
discovery.type: single-node
bootstrap.memory_lock: true
[root@graylog streams]#

The Graylog I am looking to run was only on port 9000.

I have configured Graylog to connect to the wazuh-indexer (Elasticsearch) on port 9200 (using HTTPS).

Graylog’s systemctl status shows it running, but I do not see any port listening when I do netstat -lnstpd.

Thank you for the example config.

Your config uses plain http:// (instead of this, I am using https://).

My opensearch.yml looks like yours, but I am using HTTPS.

For now I am running everything in the same box; once all is okay, each service will be moved to a different VM, and at that point I need to secure the communication, so it uses HTTPS.

Can you guide me based on my input? Your input is appreciated. (After your reply I did a fresh VM to see if I had messed up anything, but I am stuck at the same place: cert and Elasticsearch cluster errors.)
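For reference, this is roughly the line I am trying to get right in server.conf (the hostname and password here are placeholders; my understanding is that the host in the URI must appear in the indexer certificate's CN or SAN):

```shell
# server.conf (excerpt); and mypassword are placeholders,
# following the URI format from the commented example in the default config
elasticsearch_hosts = https://graylog:mypassword@
```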


Ok, I see now. I have done that with Open Distro for Elasticsearch (i.e. OpenSearch).
Once you get the certs and keystore completed you should be good. /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200

action.auto_create_index: true
opendistro_security.ssl.transport.pemcert_filepath: /etc/elasticsearch/admin.pem
opendistro_security.ssl.transport.pemkey_filepath: /etc/elasticsearch/admin-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: /etc/elasticsearch/root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: /etc/elasticsearch/admin.pem
opendistro_security.ssl.http.pemkey_filepath: /etc/elasticsearch/admin-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: /etc/elasticsearch/root-ca.pem
opendistro_security.allow_unsafe_democertificates: false
opendistro_security.allow_default_init_securityindex: false
- ',OU=admin,O=enseva,L=cedar rapids,ST=iowa,C=us'
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
opendistro_security.system_indices.enabled: true
opendistro_security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opendistro-asynchronous-search-response*"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3

Yes, that is the connection between Graylog and elasticsearch/opensearch

Like I said…

I have other applications that need to connect to the Elasticsearch node. If you are using multiple nodes with HTTPS, ensure Graylog can read the certificates and can access the keystore.
As for your configuration with HTTPS in the Graylog configuration file:

# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
# Default:
#elasticsearch_hosts = http://node1:9200,http://user:password@node2:19200

I have secured the ES cluster, but for the Graylog connection over the network to ES I have used user/password, not HTTPS yet.


I knew the issue was the one you mentioned above, since I built different certs and they all came back with the same error.
After a fresh sleep I realized I had made a mistake with the certs. I read my cert with:

openssl x509 -subject -nameopt RFC2253 -noout -in /root/t/wazuh-certificates/

The subject was not matching my config, as you mentioned. After fixing that, Graylog started and I was able to access the GUI.



Hey, @ramindia

Awesome :+1: glad we could help.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.