Error: [VersionProbe] Unable to retrieve version from Elasticsearch on a Graylog node that is actually running OpenSearch

1. Describe your incident:
I provisioned my environment with Ansible, and when I tried to bring up Graylog on the target server, it showed an error saying it was unable to find Elasticsearch at 127.0.0.1:9200. But I'm not running Elasticsearch; I'm running OpenSearch.
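
A quick probe of the socket shows which scheme the node actually answers on (editor's sketch, not from the original report; the CA path and user are placeholders drawn from the configs below):

curl -v http://127.0.0.1:9200
# fails at the protocol level if the node only speaks TLS
curl -v --cacert /path/to/myca-cert.pem -u admin https://127.0.0.1:9200
# should return the OpenSearch version banner if TLS and the CA are correct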

2. Describe your environment:

  • OS Information:
    Ubuntu
  • Package Version:
Jammy
  • Service logs, configurations, and environment variables:
    GL CONFIG:
# If you are running more than one instance of Graylog server, you have to select one of these
# instances as the leader. The leader will perform some periodical tasks that non-leaders won't perform.
is_leader = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting Graylog server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
# ATTENTION: This value must be the same on all Graylog nodes in the cluster.
# Changing this value after installation will render all user sessions and encrypted values in the database invalid. (e.g. encrypted access tokens)
password_secret = MY.SECRET.HERE

# The default root user is named 'admin'
root_username = admin

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = MY.SECRET.HERE

# The email address of the root user.
# Default is empty
#root_email = ""

# The time zone setting of the root user. See http://www.joda.org/joda-time/timezones.html for a list of valid time zones.
# Default is UTC
#root_timezone = UTC

# Set the bin directory here (relative or absolute)
# This directory contains binaries that are used by the Graylog server.
# Default: bin
bin_dir = /usr/share/graylog-server/bin

# Set the data directory here (relative or absolute)
# This directory is used to store Graylog server state.
data_dir = /var/lib/graylog-server

# Set plugin directory here (relative or absolute)
plugin_dir = /usr/share/graylog-server/plugin

###############
# HTTP settings
###############

#### HTTP bind address
#
# The network interface used by the Graylog HTTP interface.
#
# This network interface must be accessible by all Graylog nodes in the cluster and by all clients
# using the Graylog web interface.
#
# If the port is omitted, Graylog will use port 9000 by default.
#
# Default: 127.0.0.1:9000
#http_bind_address = 127.0.0.1:9000
http_bind_address = MY.IP.ADDR.HERE:9000

#### HTTP publish URI
#
# The HTTP URI of this Graylog node which is used to communicate with the other Graylog nodes in the cluster and by all
# clients using the Graylog web interface.
#
# The URI will be published in the cluster discovery APIs, so that other Graylog nodes will be able to find and connect to this Graylog node.
#
# This configuration setting has to be used if this Graylog node is available on another network interface than $http_bind_address,
# for example if the machine has multiple network interfaces or is behind a NAT gateway.
#
# If $http_bind_address contains a wildcard IPv4 address (0.0.0.0), the first non-loopback IPv4 address of this machine will be used.
# This configuration setting *must not* contain a wildcard address!
#
# Default: http://$http_bind_address/
#http_publish_uri = http://192.168.1.1:9000/

################
# HTTPS settings
################

#### Enable HTTPS support for the HTTP interface
#
# This secures the communication with the HTTP interface with TLS to prevent request forgery and eavesdropping.
#
# Default: false
#http_enable_tls = true

# The X.509 certificate chain file in PEM format to use for securing the HTTP interface.
#http_tls_cert_file = /path/to/graylog.crt

# The PKCS#8 private key file in PEM format to use for securing the HTTP interface.
#http_tls_key_file = /path/to/graylog.key

# The password to unlock the private key used for securing the HTTP interface.
#http_tls_key_password = secret

# If set to "true", Graylog will periodically investigate indices to figure out which fields are used in which streams.
# This makes the field list in the Graylog interface show only fields used in the selected streams, but it can decrease
# system performance, especially on systems with a great number of streams and fields.
stream_aware_field_types=false

# Comma separated list of trusted proxies that are allowed to set the client address with X-Forwarded-For
# header. May be subnets, or hosts.
#trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128

# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
#
# Default: http://127.0.0.1:9200
elasticsearch_hosts = https://127.0.0.1:9200
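
# (Hedged example added for illustration, not from the original post; hosts and
#  credentials below are placeholders.) With the security plugin enabled, each
#  node URI that requires authentication carries its own credentials:
#
#elasticsearch_hosts = https://admin:password@opensearch1:9200,https://admin:password@opensearch2:9200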

# Frequency of the Elasticsearch node discovery.
#
# Default: 30s
# elasticsearch_discovery_frequency = 30s

# Set the default scheme when connecting to Elasticsearch discovered nodes
#
# Default: http (available options: http, https)
#elasticsearch_discovery_default_scheme = http

# Enable payload compression for Elasticsearch requests.
#
# Default: false
#elasticsearch_compression_enabled = true

# Enable use of "Expect: 100-continue" Header for Elasticsearch index requests.
# If this is disabled, Graylog cannot properly handle HTTP 413 Request Entity Too Large errors.
#
# Default: true
#elasticsearch_use_expect_continue = true

# Graylog uses Index Sets to manage settings for groups of indices. The default options for index sets are configurable
# for each index set in Graylog under System > Configuration > Index Set Defaults.
# The following settings are used to initialize in-database defaults on the first Graylog server startup.
# Specify these values if you want the Graylog server and indices to start with specific settings.

# The prefix for the Default Graylog index set.
#
#elasticsearch_index_prefix = graylog

# The name of the index template for the Default Graylog index set.
#
#elasticsearch_template_name = graylog-internal

# The prefix for Graylog event indices.
#
#default_events_index_prefix = gl-events

# The prefix for graylog system event indices.
#
#default_system_events_index_prefix = gl-system-events

# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is a good idea.
# All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern, language, snowball, custom
# Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/2.3/analysis.html
# Note that this setting only takes effect on newly created indices.
#
#elasticsearch_analyzer = standard

# How many Elasticsearch shards and replicas should be used per index?
#
#elasticsearch_shards = 1
#elasticsearch_replicas = 0

# Maximum number of attempts to connect to datanode on boot.
# Default: 0, retry indefinitely with the given delay until a connection could be established
#datanode_startup_connection_attempts = 5

# Waiting time in between connection attempts for datanode_startup_connection_attempts
#
# Default: 5s
# datanode_startup_connection_delay = 5s

# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize
# cycled indices.
#
#disable_index_optimization = true

# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is 1.
#
#index_optimization_max_num_segments = 1

# Time interval to trigger a full refresh of the index field types for all indexes. This will query ES for all indexes
# and populate any missing field type information to the database.
#
#index_field_type_periodical_full_refresh_interval = 5m

# You can configure the default strategy used to determine when to rotate the currently active write index.
# Multiple rotation strategies are supported, the default being "time-size-optimizing":
#   - "time-size-optimizing" tries to rotate daily, while focussing on optimal sized shards.
#      The global default values can be configured with
#       "time_size_optimizing_retention_min_lifetime" and "time_size_optimizing_retention_max_lifetime".
#   - "count" of messages per index, use elasticsearch_max_docs_per_index below to configure
#   - "size" per index, use elasticsearch_max_size_per_index below to configure
#   - "time" interval between index rotations, use elasticsearch_max_time_per_index to configure
# A strategy may be disabled by specifying the optional enabled_index_rotation_strategies list and excluding that strategy.
#
#enabled_index_rotation_strategies = count,size,time,time-size-optimizing

# The default index rotation strategy to use.
#rotation_strategy = time-size-optimizing

# (Approximate) maximum number of documents in an Elasticsearch index before a new index
# is being created, also see no_retention and elasticsearch_max_number_of_indices.
# Configure this if you used 'rotation_strategy = count' above.
#
#elasticsearch_max_docs_per_index = 20000000

# (Approximate) maximum size in bytes per Elasticsearch index on disk before a new index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 30GB.
# Configure this if you used 'rotation_strategy = size' above.
#
#elasticsearch_max_size_per_index = 32212254720

# (Approximate) maximum time before a new Elasticsearch index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1 day.
# Configure this if you used 'rotation_strategy = time' above.
# Please note that this rotation period does not look at the time specified in the received messages, but is
# using the real clock value to decide when to rotate the index!
# Specify the time using a duration and a suffix indicating which unit you want:
#  1w  = 1 week
#  1d  = 1 day
#  12h = 12 hours
# Permitted suffixes are: d for day, h for hour, m for minute, s for second.
#
#elasticsearch_max_time_per_index = 1d

# Controls whether empty indices are rotated. Only applies to the "time" rotation_strategy.
#
#elasticsearch_rotate_empty_index_set=false

# Provides a hard upper limit for the retention period of any index set at configuration time.
#
# This setting is used to validate the value a user chooses for the maximum number of retained indexes, when configuring
# an index set. However, it is only in effect, when a time-based rotation strategy is chosen.
#
# If a rotation strategy other than time-based is selected and/or no value is provided for this setting, no upper limit
# for index retention will be enforced. This is also the default.

# Default: none
#max_index_retention_period = P90d

# Optional upper bound on elasticsearch_max_time_per_index
#
#elasticsearch_max_write_index_age = 1d

# Disable message retention on this node, i. e. disable Elasticsearch index rotation.
#no_retention = false

# Decide what happens with the oldest indices when the maximum number of indices is reached.
# The following strategies are available:
#   - delete # Deletes the index completely (Default)
#   - close # Closes the index and hides it from the system. Can be re-opened later.
#
#retention_strategy = delete

# This configuration list limits the retention strategies available for user configuration via the UI
# The following strategies can be disabled:
#   - delete # Deletes the index completely (Default)
#   - close # Closes the index and hides it from the system. Can be re-opened later.
#   - none #  No operation is performed. The index stays open. (Not recommended)
# WARNING: At least one strategy must be enabled. Be careful when extending this list on existing installations!
disabled_retention_strategies = none,close

# How many indices do you want to keep for the delete and close retention types?
#
#elasticsearch_max_number_of_indices = 20

# Disable checking the version of Elasticsearch for being compatible with this Graylog release.
# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!
#
#elasticsearch_disable_version_check = true

# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
# be enabled with care. See also: https://docs.graylog.org/docs/query-language
allow_leading_wildcard_searches = false

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = false

# Sets field value suggestion mode. The possible values are:
#  1. "off" - field value suggestions are turned off
#  2. "textual_only" - field values are suggested only for textual fields
#  3. "on" (default) - field values are suggested for all field types, even the types where suggestions are inefficient performance-wise
field_value_suggestion_mode = on

# Global timeout for index optimization (force merge) requests.
# Default: 1h
#elasticsearch_index_optimization_timeout = 1h

# Maximum number of concurrently running index optimization (force merge) jobs.
# If you are using lots of different index sets, you might want to increase that number.
# This value should be set lower than elasticsearch_max_total_connections_per_route, otherwise index optimization
# could deplete all the client connections to the search server and block new messages ingestion for prolonged
# periods of time.
# Default: 10
#elasticsearch_index_optimization_jobs = 10

# Mute the logging-output of ES deprecation warnings during REST calls in the ES RestClient
#elasticsearch_mute_deprecation_warnings = true

# Time interval for index range information cleanups. This setting defines how often stale index range information
# is being purged from the database.
# Default: 1h
#index_ranges_cleanup_interval = 1h

# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the Elasticsearch output
# module will get at once and write to Elasticsearch in a batch call. If the configured batch size has not been
# reached within output_flush_interval seconds, everything that is available will be flushed at once. Remember
# that every outputbuffer processor manages its own batch and performs its own batch write calls.
# ("outputbuffer_processors" variable)
output_batch_size = 500

# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two
# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages
# for this time period is less than output_batch_size * outputbuffer_processors.
output_flush_interval = 1

# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and
# over again. To prevent this, the following configuration options define after how many faults an output will
# not be tried again for an also configurable amount of seconds.
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

# Number of process buffer processors running in parallel.
# By default, the value will be determined automatically based on the number of CPU cores available to the JVM, using
# the formula (<#cores> * 0.36 + 0.625) rounded to the nearest integer.
# Set this value explicitly to override the dynamically calculated value. Try raising the number if your buffers are
# filling up.
#processbuffer_processors = 5

# Number of output buffer processors running in parallel.
# By default, the value will be determined automatically based on the number of CPU cores available to the JVM, using
# the formula (<#cores> * 0.162 + 0.625) rounded to the nearest integer.
# Set this value explicitly to override the dynamically calculated value. Try raising the number if your buffers are
# filling up.
#outputbuffer_processors = 3
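
# (Worked example added for illustration: on a JVM that sees 8 CPU cores, the
#  defaults above come out to 8 * 0.36 + 0.625 = 3.505, rounded to 4
#  processbuffer_processors, and 8 * 0.162 + 0.625 = 1.921, rounded to 2
#  outputbuffer_processors.)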

# The size of the thread pool in the output buffer processor.
# Default: 3
#outputbuffer_processor_threads_core_pool_size = 3

# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
#  - yielding
#     Compromise between performance and CPU usage.
#  - sleeping
#     Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
#  - blocking
#     High throughput, low latency, higher CPU usage.
#  - busy_spinning
#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = blocking

# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 65536

inputbuffer_ring_size = 65536
inputbuffer_wait_strategy = blocking

# Number of input buffer processors running in parallel.
#inputbuffer_processors = 2

# Manually stopped inputs are no longer auto-restarted. To re-enable the previous behavior, set auto_restart_inputs to true.
#auto_restart_inputs = true

# Enable the message journal.
message_journal_enabled = true

# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and
# must not contain any other files than the ones created by Graylog itself.
#

OPENSEARCH CONFIG:

# ======================== OpenSearch Configuration =========================
#
# NOTE: OpenSearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.opensearch.org
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: my-application
cluster.name: my-vm
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
# node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/opensearch
#
# Path to log files:
#
path.logs: /var/log/opensearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# OpenSearch performs poorly when the system is swapping the memory.
#
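# (Editor's hedged illustration of the half-of-RAM guidance above: on a host
#  with 8 GB of RAM you would pin both heap values in
#  /etc/opensearch/jvm.options, e.g.
#    -Xms4g
#    -Xmx4g
#  The jvm.options path is an assumption and depends on the packaging.)
#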
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 192.168.0.1
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of cluster-manager-eligible nodes:
#
# cluster.initial_cluster_manager_nodes: ["node-1", "node-2"]

discovery.type: single-node

#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

######## Start OpenSearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
# plugins.security.ssl.transport.pemcert_filepath: mycert.pem
# plugins.security.ssl.transport.pemkey_filepath: mycert.key
# plugins.security.ssl.transport.pemtrustedcas_filepath: myca-cert.pem
# plugins.security.ssl.transport.enforce_hostname_verification: false
# plugins.security.ssl.http.enabled: true
# plugins.security.ssl.http.pemcert_filepath: mycert.pem
# plugins.security.ssl.http.pemkey_filepath: mycert.key
# plugins.security.ssl.http.pemtrustedcas_filepath: myca-cert.pem
# plugins.security.allow_unsafe_democertificates: true
# plugins.security.allow_default_init_securityindex: true
# plugins.security.authcz.admin_dn:
#   - CN=kirk,OU=client,O=client,L=test, C=de

# plugins.security.audit.type: internal_opensearch
# plugins.security.enable_snapshot_restore_privilege: true
# plugins.security.check_snapshot_restore_write_privileges: true
# plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
# plugins.security.system_indices.enabled: true
# plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
# node.max_local_storage_nodes: 3

### Settings changed by Cleber for production

plugins.security.ssl.transport.pemcert_filepath: mycert.pem
plugins.security.ssl.transport.pemkey_filepath: mycert.key
plugins.security.ssl.transport.pemtrustedcas_filepath: myca-cert.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: mycert.pem
plugins.security.ssl.http.pemkey_filepath: mycert.key
plugins.security.ssl.http.pemtrustedcas_filepath: myca-cert.pem
plugins.security.allow_unsafe_democertificates: false
plugins.security.allow_default_init_securityindex: true
# plugins.security.authcz.admin_dn:
#  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3

######## End OpenSearch Security Demo Configuration ########
...

OPENSEARCH LOGS (the trace below is from the OpenSearch HTTP layer rejecting Graylog's connection attempt)

[2024-05-03T18:07:38,530][WARN ][o.o.h.AbstractHttpServerTransport] [sauk-sk12] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:38792}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[netty-codec-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.86.Final.jar:4.1.86.Final]
	at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
	at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
	at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
	at sun.security.ssl.TransportContext.fatal(TransportContext.java:358) ~[?:?]
	at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293) ~[?:?]
	at sun.security.ssl.TransportContext.dispatch(TransportContext.java:204) ~[?:?]
	at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
	at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
	at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
	at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
	at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
	at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
	at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296) ~[netty-handler-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343) ~[netty-handler-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236) ~[netty-handler-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285) ~[netty-handler-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.86.Final.jar:4.1.86.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.86.Final.jar:4.1.86.Final]
	... 16 more

I think the Elasticsearch message is a red herring: there are still messages in the code that haven't been updated since the days when we only supported Elasticsearch.
Based on the server log it looks like a connectivity issue: the TLS handshake is failing with certificate_unknown.
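
One way to confirm that reading from the OpenSearch side (editor's sketch; the CA file is the one named in the opensearch.yml above, the path is a placeholder):

# Inspect the certificate chain the node presents and whether it verifies:
openssl s_client -connect 127.0.0.1:9200 -CAfile /path/to/myca-cert.pem </dev/null

# If this verifies but Graylog still fails, the JVM running Graylog is the
# side that does not trust the CA:
curl -v --cacert /path/to/myca-cert.pem -u admin https://127.0.0.1:9200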

Yes, Graylog still uses the word "elasticsearch" in most places. In your config file it looks like you set your OpenSearch URL to https://. Is it actually set up to serve TLS?
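
If OpenSearch is in fact serving TLS with the custom CA above, the JVM that runs Graylog must also trust that CA, otherwise the handshake ends exactly as in the log (certificate_unknown). A minimal sketch using keytool; the cacerts path is an assumption and depends on which JVM the Graylog package bundles:

# Import the OpenSearch CA into the truststore of the JVM running Graylog
# (locate cacerts for your JVM package first; the path below is an assumption):
sudo keytool -importcert -trustcacerts \
  -keystore /usr/share/graylog-server/jvm/lib/security/cacerts \
  -storepass changeit \
  -alias opensearch-ca \
  -file /path/to/myca-cert.pem

# Restart Graylog so the version probe retries against the now-trusted node:
sudo systemctl restart graylog-server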
