No data nodes have been found (using opensearch)

I installed MongoDB, OpenSearch and Graylog on Ubuntu, in a VM intended as a single "all-in-one" Graylog server.

All applications seem to run; however, when I try the initial Graylog setup, the first setup screen shows that Graylog does not see the data node ("No data nodes have been found"). A fatal problem, of course.

I have no idea why; I can only remark that most of the documentation still refers to Elasticsearch.

I sincerely hope someone understands why the data node is not recognised and what to do to fix that.

Louis

Configuration info (relevant / changed settings)

>> mongod.conf <<

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
#  engine:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo


>> opensearch.yml <<

cluster.name: graylog
node.name: graylog-data-node
path.logs: /var/log/opensearch
network.host: 0.0.0.0

discovery.type: single_node   <<<< OpenSearch does not start with this setting !!

cluster.initial_cluster_manager_nodes: ["graylog-data-node"]
action.auto_create_index: false
plugins.security.disabled: true


>> graylog server.conf <<

is_leader = true
node_id_file = /etc/graylog/server/node-id
password_secret =
root_username = admin
root_password_sha2 =
root_timezone = Europe/Amsterdam
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 192.168.x.y:9000
stream_aware_field_types=false
disabled_retention_strategies = none
allow_leading_wildcard_searches = false
allow_highlighting = false
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5 (too high for my machine, but unchanged for now)
outputbuffer_processors = 3 (too high for my machine, but unchanged for now)
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 250


>> Installed Software <<

louisb@graylog:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
louisb@graylog:~$

louisb@graylog:~$ dpkg -l | grep -E ".*(opensearch|graylog|mongo).*"
ii graylog-5.2-repository 1-2 all Package to install Graylog 5.2 GPG key and repository
ii graylog-server 5.2.0-7 amd64 Graylog server
ii mongodb-database-tools 100.9.0 amd64 mongodb-database-tools package provides tools for working with the MongoDB server:
ii mongodb-mongosh 2.0.2 amd64 MongoDB Shell CLI REPL Package
ii mongodb-org 6.0.11 amd64 MongoDB open source document-oriented database system (metapackage)
ii mongodb-org-database 6.0.11 amd64 MongoDB open source document-oriented database system (metapackage)
ii mongodb-org-database-tools-extra 6.0.11 amd64 Extra MongoDB database tools
ii mongodb-org-mongos 6.0.11 amd64 MongoDB sharded cluster query router
ii mongodb-org-server 6.0.11 amd64 MongoDB database server
ii mongodb-org-shell 6.0.11 amd64 MongoDB shell client
ii mongodb-org-tools 6.0.11 amd64 MongoDB tools
ii opensearch 2.11.0 amd64 An open source distributed and RESTful search engine

I'm assuming you are installing 5.2, and since you said you installed OpenSearch, that you did not install the beta Graylog Data Node service.

Based on those assumptions, you need to add
elasticsearch_hosts = http://127.0.0.1:9200
to your Graylog server.conf file.

That was a default value before 5.2, and the docs haven't been updated yet to tell you that you need to add it explicitly in 5.2 and later.
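For reference, the relevant part of /etc/graylog/server/server.conf would then look roughly like this (a sketch; the commented line is the example shipped in the file, the uncommented one is the value to add for a local single-node OpenSearch):

#elasticsearch_hosts = http://node1:9200,http://user:password@node2:19200
elasticsearch_hosts = http://127.0.0.1:9200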


Well, you are probably right; however, note:

  • that this is already the default (# Default: http://127.0.0.1:9200),
  • that everything is still called "elasticsearch", and
  • that there is something with a password, but that example also says "password@node2", whereas I have only one node: the local machine.

I tried what happens when adding opensearch_hosts = http://127.0.0.1:9200

The answer is: graylog-server starts … however, the data node is still not found.
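To see what Graylog itself reports about the search cluster connection, the server log is the place to look (a sketch; the path is the standard one for the deb package):

sudo tail -n 100 /var/log/graylog-server/server.log
# look for lines about the Elasticsearch/OpenSearch cluster being unreachable or version detection failing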

PS. I did install the latest versions and not the Graylog beta; however, if that beta is relatively stable and upgradable, and it is clear how to install it, I could try.

I made some small changes, which did NOT change the result:

  • changed all addresses to 192.168.x.y
  • lowered a couple of the processor counts and size limits to be a bit more realistic for a setup like mine

Thinking about Graylog <> MongoDB <> Elastic/OpenSearch

Those are three "standalone" applications, and I think that only Graylog communicates with the other two.

The communication seems to happen exclusively via IP connections, which of course allows the three applications to run on separate machines (in my case they all run on the same host).

So what I would expect is:

  • that the Graylog server.conf defines connection strings to MongoDB and to OpenSearch,
  • that settings related to MongoDB live in the MongoDB config, and
  • that settings related to OpenSearch live in the OpenSearch config,

apart from DB/table definitions etc., of course.

I do not recognize this, especially in the Graylog server.conf. I am particularly lost regarding the expected connection-string definitions / local connection definitions.
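In other words, the only cross-application wiring I would expect in server.conf are these two connection settings (a sketch for an all-in-one host; the loopback addresses and the database name "graylog" are just my local choices):

# connection to MongoDB, where Graylog stores its configuration
mongodb_uri = mongodb://localhost/graylog
# connection to OpenSearch, where Graylog stores the messages
# (the setting keeps its historical "elasticsearch" name)
elasticsearch_hosts = http://127.0.0.1:9200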

Changing all IP addresses (apart from the Graylog http_bind_address) to 127.0.0.1 also results in "no data nodes".

In opensearch.log I found what seem to be the telling messages.

It could be that this points to the problem …

"ISM config index not exist" — the exact log is below:

louisb@graylog:/var/log$ sudo tail -f opensearch/opensearch.log
[2023-11-02T18:33:09,778][INFO ][o.o.i.i.ManagedIndexCoordinator] [graylog] Performing move cluster state metadata.
[2023-11-02T18:33:09,778][INFO ][o.o.i.i.MetadataService ] [graylog] ISM config index not exist, so we cancel the metadata migration job.
[2023-11-02T18:33:26,022][INFO ][o.o.s.a.r.AuditMessageRouter] [graylog] Closing AuditMessageRouter
[2023-11-02T18:33:26,026][INFO ][o.o.s.a.s.SinkProvider ] [graylog] Closing InternalOpenSearchSink
[2023-11-02T18:33:26,026][INFO ][o.o.s.a.s.SinkProvider ] [graylog] Closing DebugSink
[2023-11-02T18:33:26,026][INFO ][o.o.n.Node ] [graylog] stopping …
[2023-11-02T18:33:26,106][INFO ][o.o.n.Node ] [graylog] stopped
[2023-11-02T18:33:26,106][INFO ][o.o.n.Node ] [graylog] closing …
[2023-11-02T18:33:26,110][INFO ][o.o.s.a.i.AuditLogImpl ] [graylog] Closing AuditLogImpl
[2023-11-02T18:33:26,115][INFO ][o.o.n.Node ] [graylog] closed
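The ISM lines are probably harmless; what stands out is that the node goes through "stopping … stopped … closing … closed", i.e. the OpenSearch process itself shuts down. Two quick checks (a sketch, assuming OpenSearch runs as the systemd unit "opensearch" and listens on the default port 9200):

# why did the service stop? systemd status and journal show fatal startup/config errors
sudo systemctl status opensearch
sudo journalctl -u opensearch --no-pager -n 200

# if it is running, it should answer on the HTTP port Graylog expects
curl -s http://127.0.0.1:9200
curl -s http://127.0.0.1:9200/_cluster/health?pretty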

@louis did you add the elasticsearch_hosts setting as suggested?
Graylog historically used only Elastic. We now support OpenSearch, but the configuration settings still say "elastic". Don't be confused by that.

Yep, I am treating it that way.

However, there is still no recognized data node, and I am very confused about the authorization process.

We have the user that Graylog names "root"; I really, really hope that this is not the system root account but the Graylog administrator account. But I am not even sure about that!

I hope that the user called "admin" in the server.conf is the name of the Graylog administrator.

Then there is the strange thing that the server.log contains the initial Graylog login credentials … huh …
Whereas I would of course expect that you have to use the admin password that was used when creating the sha2 for the config …

Then we have MongoDB, and I would assume that I first have to define a database and a related account for Graylog, and that I should store that password somewhere in the Graylog config.

The same goes for OpenSearch.

But it could be that all those logical security measures are not used, that "admin" in the Graylog server.conf is really the system root, and that MongoDB and OpenSearch are accessed using the system root account. I do not know. Whatever the case, that would be a terribly bad idea IMHO.
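If MongoDB authentication were enabled, my understanding is that the credentials would simply go into the connection string in server.conf, along these lines (user name and password are placeholders):

# MongoDB connection with authentication
mongodb_uri = mongodb://graylog_user:secret@localhost:27017/graylog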

In the end I managed to get Graylog up and running (without any configuration at this point).
I had to figure out a lot and still have to fix things (e.g. a working OpenSearch authorization setup).

A couple of things to mention:

  • "admin" in the server.conf is the Graylog administrator and not the system root, which is what the name "root" suggests
  • the described method to generate the admin root_password_sha2 is NOT OK !!
    use echo -n <password> | sha256sum | cut -d" " -f1
  • you should read "opensearch" wherever "elasticsearch" is written in the server.conf
  • the 19200 in "http://user:password@node2:19200" should be 9200
  • in general I would always advise using authentication for OpenSearch and MongoDB
  • discovery.type: single_node DOES NOT WORK; OpenSearch does not start (current version)
  • I am halfway there on that front. Note that:
  1. the OpenSearch security scripts are in /usr/share/opensearch/plugins/opensearch-security/tools (and not in the otherwise indicated directories)
  2. those tools need Java, so you have to install it first (sudo apt install default-jre)
  3. security-related settings are in /etc/opensearch/opensearch-security, where you have to edit at least internal_users.yml and config.yml
  4. from the tools you need at least hash.sh and securityadmin.sh (see the sketch after this list)
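A rough outline of how those tools are used (a sketch; the certificate paths assume the demo security configuration that the package can install, so adjust them to your own certificates):

cd /usr/share/opensearch/plugins/opensearch-security/tools

# 1) generate a bcrypt hash for internal_users.yml (the password is a placeholder)
./hash.sh -p 'MyGraylogOsPassword'

# 2) after editing the files in /etc/opensearch/opensearch-security,
#    load them into the security index using the admin certificate
./securityadmin.sh \
  -cd /etc/opensearch/opensearch-security \
  -icl -nhnv \
  -cacert /etc/opensearch/root-ca.pem \
  -cert /etc/opensearch/kirk.pem \
  -key /etc/opensearch/kirk-key.pem

Both scripts need Java on the PATH, hence the default-jre install mentioned above.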

My setup is far from ready. To name a few things still to do:

  • implement authorization for MongoDB (see the mongosh sketch after this list)
  • I defined a graylog user in OpenSearch, but its authorization level is "admin" (which is too much), and despite the defined user + password, OpenSearch does not yet check the password
  • I have to do the whole Graylog configuration itself. I hope I can import a settings file from an earlier set-up
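For the MongoDB part, the plan is roughly this (a sketch; "graylog_user" and "secret" are placeholders, and it assumes authorization is then enabled via security.authorization: enabled in mongod.conf):

# create a dedicated MongoDB user for Graylog, limited to the graylog database
mongosh graylog --eval 'db.createUser({
  user: "graylog_user",
  pwd:  "secret",
  roles: [ { role: "readWrite", db: "graylog" } ]
})'

and then point mongodb_uri in server.conf at that user, as shown earlier.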

I wish that the Graylog-provided setup video, which is not bad, covered a more realistic setup (based on the latest version and including the security aspects).

Furthermore, the download speed from the Graylog website (5.2) is … dramatic.

