Elasticsearch is not available

I have started a new install of Graylog with OpenSearch. I have gone through all the documentation and procedures, but it seems like Graylog is looking for Elasticsearch, which I did not install; I installed OpenSearch instead.

Here is the error I get in server.log:
2022-06-21T12:52:04.121-05:00 INFO [VersionProbe] Elasticsearch is not available. Retry #2
2022-06-21T12:52:09.124-05:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: unexpected end of stream on http://127.0.0.1:9200/… - \n not found: limit=0 content=….

And the server is not listening on port 9000:
netstat -an | more
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN
tcp6 0 0 127.0.0.1:9200 :::* LISTEN
tcp6 0 0 127.0.0.1:9300 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 127.0.0.1:9200 127.0.0.1:51144 TIME_WAIT

Can someone please help? Do I need to install Elasticsearch?

Thank you

You shouldn’t need Elasticsearch if you have OpenSearch. Can you post your Graylog server.conf (obfuscated, and using the </> forum tool to make it readable)? You can use this command from the tips-on-asking-questions page to get just the relevant data from the file:

cat /etc/graylog/server/server.conf | egrep -v "^\s*(#|$)"
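
You can also quickly check what is answering on 9200; if OpenSearch is up, its root URL returns a JSON document that includes the version (this assumes no TLS or auth on the node, adjust if you have the security plugin enabled):

curl -s http://127.0.0.1:9200/

If that hangs or errors out, Graylog’s version probe will fail the same way.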

Hi all!

Is there any solution for this? I have the same issue with a 3-node cluster running GL 4.3.3 and OS 2.0.1.

Thanks!

OpenSearch 2.0.1 is not on the list of supported versions (Installing OpenSearch - Installing Graylog).

Hard to help diagnose if you don’t post your settings as requested in the previous post… :stuck_out_tongue:

Hi @tmacgbay

Yep, I only saw that OS v2.x is not supported after I had built the cluster… :frowning:

Here it says that some clients should still work with newer versions of OS… isn’t that the case for Graylog?

Thanks!

Hello again,

the requested settings:

$ egrep -v "^\s*(#|$)" /etc/graylog/server/server.conf  
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root_password_sha2 = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
elasticsearch_hosts = http://admin:admin@node-1:9200,http://admin:admin@node-2:9200,http://admin:admin@node-3:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
proxied_requests_thread_pool_size = 32

TIA!

It’s not clear - this paragraph is on the page you noted:

OpenSearch 2.0.0 no longer supports compatibility with legacy clients. Due to breaking changes with REST APIs, some features are not supported when using OpenSearch 1.x clients to connect to OpenSearch 2.0.

I noticed your Elasticsearch version is not defined in your server.conf; maybe defining it, rather than having Graylog try to query it, will get you past that error?

...
elasticsearch_version = 7
...

You could also post your opensearch conf file…

I defined:

elasticsearch_version = 7

as suggested; no changes, same error message.

ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: unexpected end of stream on http://node-n:9200/... - \n not found

I tracked it down to these classes:

./org/graylog2/storage/versionprobe/VersionProbe.class
./okhttp3/internal/http1/Http1ExchangeCodec.class

inside the graylog.jar file, but Java is all Greek to me :slight_smile: so I don’t know what URL the process is looking for… do you know?

This is my opensearch.yml file, taken from the master node:

$ egrep -v "^\s*(#|$)" /etc/opensearch/opensearch.yml 
path.data: /var/lib/opensearch
path.logs: /var/log/opensearch
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3
cluster.name: "test-cluster"
node.name: "node-1"
network.host: "0.0.0.0"
http.port: 9200
bootstrap.memory_lock: true
discovery.seed_hosts: ["node-1","node-2","node-3"]
cluster.initial_master_nodes: ["node-1"]
node.roles: ["data","master"]

Thanks a lot in advance!

Hello,

have you tried setting network.host: “0.0.0.0” to an IP address?

Example:

[root@graylog mongodb]# cat /etc/elasticsearch/elasticsearch.yml  | egrep -v "^\s*(#|$)"
cluster.name: graylog
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.10.10.101
http.port: 9200
action.auto_create_index: false

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.101 node-1
10.10.10.102 node-2
10.10.10.103 node-3
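
Once network.host points at a real IP, you can confirm the bind took effect with netstat:

netstat -an | grep 9200

It should show the node’s address (e.g. 10.10.10.101:9200) in a LISTEN line rather than 127.0.0.1.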

The reason I suggested this: normally you’ll see the “Unable to get version information from Elasticsearch nodes” message when Graylog can’t connect to the Elasticsearch hosts for some reason. Not only could it not retrieve the specific information (the version), it couldn’t establish a connection with the node at all.
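
As far as I can tell from that VersionProbe class, it just does a GET against the root URL of each entry in elasticsearch_hosts and reads the version number out of the JSON reply, so you can reproduce what Graylog sees with curl (credentials as in your elasticsearch_hosts setting):

curl -u admin:admin http://node-1:9200/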

Here are a couple of commands to troubleshoot and double-check the configuration in the GL config file.

Cluster Health

curl -XGET http://node_1:9200/_cluster/health?pretty=true

All Node/s Info

curl -XGET http://node_1:9200/_nodes?pretty=true

Check firewalls, SELinux, permissions, etc…
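
One caveat: since your opensearch.yml has plugins.security.ssl.http.enabled: true, those checks probably need HTTPS and the demo credentials, along the lines of:

curl -k -u admin:admin -XGET https://node-1:9200/_cluster/health?pretty=true

The -k skips certificate verification, which is fine for testing with the demo certs but not something to rely on in production.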

hi @gsmith

Thanks for your reply.

Setting “network.host” to use an IP address doesn’t change anything.

SELinux is disabled on all OpenSearch VMs.
/etc/hosts is populated with all needed IP addresses and hostnames.

Oddly enough, this:

curl https://node-1:9200/_cat/nodes?pretty -u admin:admin -k

shows me all 3 nodes.

x.x.x.1 14 25 1 0.18 0.14 0.09 dm * node-1
x.x.x.2 45 25 1 0.14 0.11 0.08 di - node-2
x.x.x.3 41 26 0 0.04 0.04 0.05 di - node-3

Other curl-related queries work as well.

curl -XGET http://node-1:9200/_nodes?pretty=true

This shows a lot of stuff, as expected; e.g.:

[... trimmed ... ]
      "plugins" : [
        {
          "name" : "opensearch-alerting",
          "version" : "2.0.1.0",
          "opensearch_version" : "2.0.1",  
          "java_version" : "11",
          "description" : "Amazon OpenSearch alerting plugin",
          "classname" : "org.opensearch.alerting.AlertingPlugin",
          "custom_foldername" : "",
          "extended_plugins" : [
            "lang-painless"
          ],
          "has_native_controller" : false
        },
[... trimmed ...]

As you can see, the OpenSearch version is exposed; perhaps there is a conditional check that only accepts versions below 2.x… :-/ just guessing.

Cheers

Answering my own question: downgrading OpenSearch to 1.3.3 solved the connectivity issue.

Now I need to deal with some SSL-related problems:

[2022-07-04T08:58:00,378][WARN ][o.o.h.AbstractHttpServerTransport] [osnlogssearch11ivm] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=0.0.0.0/0.0.0.0:9200, remoteAddress=null}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:480) ~[netty-codec-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279) ~[netty-codec-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:623) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:586) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.73.Final.jar:4.1.73.Final]
        at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
        at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:340) ~[?:?]
        at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293) ~[?:?]
        at sun.security.ssl.TransportContext.dispatch(TransportContext.java:186) ~[?:?]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433) ~[?:?]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637) ~[?:?]
        at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:295) ~[netty-handler-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1341) ~[netty-handler-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1234) ~[netty-handler-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1283) ~[netty-handler-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510) ~[netty-codec-4.1.73.Final.jar:4.1.73.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449) ~[netty-codec-4.1.73.Final.jar:4.1.73.Final]

I had just seen that the version was too high for Graylog, hence why I showed those commands above for troubleshooting. Glad you resolved that issue :+1:

I’m guessing that you have OpenSearch set up with certs?
Are you trying to use them with Graylog?

If so, ensure the certificate is in a format Graylog can use, as documented here.

Make sure the correct certs are in the keystore and Graylog can access these certificates.
The reason I suggested this is this line from the log file:

javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
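
If you’re on the self-signed demo certs (the root-ca.pem from your opensearch.yml), one common fix is to import that root CA into the truststore of the JVM Graylog runs on; the paths and alias below are just placeholders, adjust them for your install:

keytool -importcert -keystore /usr/lib/jvm/jre/lib/security/cacerts -storepass changeit -alias opensearch-root-ca -file /etc/opensearch/root-ca.pem

Once the CA is trusted, you’ll likely also want elasticsearch_hosts in server.conf to use https:// instead of http://.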

Hope that helps
