Graylog Data Node Cluster Cert Errors on initial deployment

I am attempting to deploy a six-node Graylog cluster: 3x Graylog/MongoDB nodes and 3x Graylog Data Nodes on Ubuntu 24.04 LTS VMs.

DNS is configured correctly.

I am unable to proceed past the provisioning of certificates: it constantly errors out on two of the Data Nodes, because Graylog creates Data Node certificates with incorrect SAN names, as shown in the screenshot below.

The datanode.conf file is identical on each Data Node, apart from the hostname entry, which is unique per node: dvm-graylogdb-X.virtual.local, where X is 1 through 3.

The error logs fill rapidly and show the following for Data Nodes 2 and 3:

Datanode 2:

2025-09-01T12:50:27.654Z INFO [OpensearchProcessImpl] [2025-09-01T12:50:27,645][ERROR][o.o.t.n.s.SecureNetty4Transport] [dvm-graylogdb-2.virtual.local] Exception during establishing a SSL connection: javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching dvm-graylogdb-1.virtual.local found.
2025-09-01T12:50:27.655Z INFO [OpensearchProcessImpl] javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching dvm-graylogdb-1.virtual.local found.

And datanode 3:

2025-09-01T12:56:29.736Z INFO [OpensearchProcessImpl] [2025-09-01T12:56:29,687][ERROR][o.o.t.n.s.SecureNetty4Transport] [dvm-graylogdb-3.virtual.local] Exception during establishing a SSL connection: javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching dvm-graylogdb-2.virtual.local found.
2025-09-01T12:56:29.736Z INFO [OpensearchProcessImpl] javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching dvm-graylogdb-2.virtual.local found.

Datanode 1 works just fine.

For information, here is the datanode.conf file: GRAYLOG DATANODE CONFIGURATION FILE - Pastebin.com

OS Information: Ubuntu 24.04 LTS

Package Version: 6.3

Please advise what logs and/or configuration files you need.

I have blown away and rebuilt the entire environment several times now, and still hit the same issue.

I used the following doc to install the datanodes:

Hey @ranko,

Could you try setting the node_name option within the conf instead of hostname? Use this as a reference for the available options.

Hi @Wine_Merchant,

I made the changes you suggested, and I still have the same issue.

I have, however, grabbed a copy of the three Data Node server logs.

Inside the zip file are:

datanode-1.log - this one has a valid certificate (i.e. green)

datanode-2.log and datanode-3.log - these are still failing.

The datanode.conf file relevant sections:

#### Hostname
#
# if you need to specify the hostname to use (because looking it up programmati…
#hostname =

#### OpenSearch node name config option
#
# use this, if your node name should be different from the hostname that's foun…
#
node_name = dvm-graylog-2.virtual.local

The node_name is unique per node.

Just to get a broader picture, how is DNS implemented here? Do you have a DNS server, or do you rely on the hosts file?

DNS is run via three Windows Servers running AD DS, with full resolution on all six servers.

Update: I have also tested using hosts file entries for all six VMs, the issue remains.

Any thoughts?
I had thought about blowing it all away and redeploying using Ubuntu 22.04 LTS; I am currently using 24.04 LTS.

Would you be able to interrogate the keystore on node 2 or 3 to view which SANs the generated certs are actually picking up? I believe the password_secret option from datanode.conf/server.conf should be the password to the keystore.
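As a self-contained sketch of that check (all names and values below are made up for illustration), you can pack a throwaway key/cert into a PKCS12 store and read the SAN back out with openssl; against the real Data Node keystore you would instead point keytool (or openssl pkcs12) at the store file and use the password_secret value as the store password:

```shell
# Sketch with assumed names; the real check is:
#   keytool -list -v -keystore <datanode keystore> -storetype PKCS12
# with password_secret as the store password.
tmp=$(mktemp -d)

# Throwaway key + cert carrying the SAN we expect the datanode cert to have:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=dvm-graylogdb-2.virtual.local" \
  -addext "subjectAltName=DNS:dvm-graylogdb-2.virtual.local,DNS:localhost"

# Pack it into a PKCS12 keystore, as the datanode does with its certs:
openssl pkcs12 -export -inkey "$tmp/key.pem" -in "$tmp/cert.pem" \
  -name datanode -passout pass:secret -out "$tmp/keystore.p12"

# Read the certificate back out of the store and print only its SAN extension:
openssl pkcs12 -in "$tmp/keystore.p12" -passin pass:secret -nokeys -clcerts \
  | openssl x509 -noout -ext subjectAltName
```

If the SAN printed on nodes 2 and 3 lists a different host's FQDN, that would line up with the handshake errors above.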

I have success… I added the server's IP address to bind_address, rather than leaving it at bind_address = 0.0.0.0.

To verify, I then removed the IP address from bind_address, reset it back to 0.0.0.0, and tried again; it promptly failed.

Any ideas? It's not mentioned in the documentation.

Here is the keystore contents:

Your keystore contains 1 entry

Alias name: datanode
Creation date: 2 Sept 2025
Entry type: PrivateKeyEntry
Certificate chain length: 2
Certificate[1]:
Owner: CN=dvm-graylogdb-3.virtual.local
Issuer: CN=Graylog CA
Serial number: 1990993e44a
Valid from: Tue Sep 02 08:38:31 UTC 2025 until: Thu Oct 02 08:38:31 UTC 2025
Certificate fingerprints:
         SHA1: D6:F6:
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 4096-bit RSA key
Version: 3

Extensions:

#1: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: localhost
  IPAddress: 0:0:0:0:0:0:0:1
  IPAddress: 127.0.0.1
  IPAddress: 10.105.1xx.xxxx
  DNSName: dvm-graylogdb-3.virtual.local
  DNSName: ip6-localhost
]

Certificate[2]:
Owner: CN=Graylog CA
Issuer: CN=Graylog CA
Serial number: 7d55a31224854509a2435a2202d0b08f
Valid from: Tue Sep 02 08:38:17 UTC 2025 until: Fri Aug 31 08:38:17 UTC 2035

Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 4096-bit RSA key
Version: 3

Extensions:

#1: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen: no limit
]

There is no reference to a SAN, which I think is the problem.

Nice progress! By SAN I was referencing the below. It's the list of names/IPs that can be used to contact the host, and in this case all appears correct.

SubjectAlternativeName [
  DNSName: localhost
  IPAddress: 0:0:0:0:0:0:0:1
  IPAddress: 127.0.0.1
  IPAddress: 10.105.1xx.xxxx
  DNSName: dvm-graylogdb-3.virtual.local
  DNSName: ip6-localhost
]

When using 0.0.0.0, that should mean the service binds to all available IPs; in this case it might be that it doesn't bind to the IP associated with the DNS entry. When 0.0.0.0 is defined, you could run ss -tulpen to view which ports are open against which IPs.
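For example (9200/9300 are assumed here as the HTTP and transport ports; substitute whatever your Data Node is actually configured to use):

```shell
# List listening TCP/UDP sockets with numeric addresses and owning processes,
# then filter for the assumed OpenSearch ports. The Local Address column shows
# whether the service is on 0.0.0.0 / [::] or a specific NIC address.
sudo ss -tulpen | grep -E ':(9200|9300)\s'
```

If the filtered lines show 0.0.0.0 or [::] rather than the address your DNS entry resolves to, that matches the behaviour described above.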

It’s okay to bind against a specific IP address.


@Wine_Merchant thank you for your help and assistance. With it, I now have all the Data Nodes running.

TL;DR: within the ‘/etc/graylog/datanode/datanode.conf’ file on all three Data Node servers:

  • Ensure that bind_address = is set to the IP address of the NIC you would like the Data Node to listen on, and
  • Ensure that node_name = is populated with the FQDN of the server.
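For example, the relevant lines on one node end up along these lines (the address is a placeholder; each node uses its own IP and FQDN):

#### Hostname
#hostname =

# Bind to the node's own NIC address instead of 0.0.0.0:
bind_address = 10.105.1xx.xxx

# FQDN of this node (unique per node):
node_name = dvm-graylogdb-2.virtual.local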
