Two clusters pointing to the same indices result in an empty dashboard


We are migrating from one Graylog cluster to another. The setups are similar: each cluster has its own nodes and its own MongoDB cluster. They share the same Elasticsearch cluster, and since the new cluster got a MongoDB sync from the old cluster, all the settings are the same… including the indices. Since we are migrating via DNS and the clients cache for more than 24 hours, the connections are slowly shifting to the new cluster.

The problem, however, is that we can only see the "data" in the old cluster. Only the old cluster is able to search, show populated dashboards, etc. It's like the new cluster can write to the Elasticsearch indices but is not allowed to read.

Logs on both clusters are clean.


I take it this is something like what you have?

Could you show your configuration(s) for this environment?
I wasn't aware this could be done. Where did you get your information on how to connect two Graylog/MongoDB clusters to one Elasticsearch cluster?

Hi @gsmith,

You are right, and the picture is correct (give or take). I never read whether it is possible or not; I just assumed that the cluster configuration lives mainly in the config file and MongoDB, and that Elasticsearch is just a data lake. I am wondering if the MongoDB dump and restore also carried over some parameters that are now blocking this. Concerning my config:

is_master = false
node_id_file = /etc/graylog/server/node-id
password_secret = $password_secret
root_password_sha2 = $root_password_sha2
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address =
http_external_uri = $extern_uri
elasticsearch_hosts = $elastic_host
allow_leading_wildcard_searches = false
allow_highlighting = false
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = $mongodb_uri
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
proxied_requests_thread_pool_size = 32

As far as I know, the power of Graylog is that it is aware of the time range of the data stored in Elasticsearch. In the older version we use, we can reindex the Elasticsearch cluster; maybe such an option in your Graylog setup can do the trick.
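Since Graylog tracks which time range each index covers in MongoDB ("index ranges"), a stale copy of that metadata on the new cluster could explain searches coming back empty while writes still succeed. A minimal sketch of forcing a rebuild via the Graylog REST API; the host, port, and credentials below are placeholders for your environment:

```shell
# Placeholder URL and credentials -- substitute your own.
GRAYLOG_URL="http://new-graylog.example.com:9000"

# Ask Graylog to recalculate the time ranges for all indices.
# The X-Requested-By header is required by Graylog's CSRF protection.
curl -u admin:yourpassword \
     -H "X-Requested-By: cli" \
     -X POST "$GRAYLOG_URL/api/system/indices/ranges/rebuild"
```

The rebuild runs as a background system job; you can watch its progress under System → Overview in the web interface.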

To be honest, I never did that setup before. I am curious whether it will work.
I could be wrong, but I don't think your two Graylog/MongoDB clusters can use the same index set. You may have to configure one of the clusters to use a different index prefix, or you may have to shut down the old Graylog/MongoDB cluster. I have seen people here in the past week migrating Elasticsearch indices and MongoDB databases, which I have tried with success. It showed all my old configuration and my index sets, but I had to point all my clients to the new Graylog server since it had a different IP address/FQDN.
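To check whether both clusters are indeed writing into the same index set, you can list the index sets and their prefixes on each cluster and compare. A sketch against a placeholder host and credentials:

```shell
# Placeholder host and credentials -- run this against each Graylog cluster
# and compare the "index_prefix" values (e.g. "graylog_") in the output.
curl -s -u admin:yourpassword \
     "http://graylog.example.com:9000/api/system/indices/index_sets" \
     | python3 -m json.tool
```

If both clusters report the same prefix, they are rotating and range-tracking the same physical indices, which is where the conflict would come from.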

Sorry I can't be of more help.

Fun fact: after some days the views have now shifted. I still have very clingy clients using the old endpoint/cluster, so data still comes in, but the events in search, dashboards, etc. are now only visible in the new cluster.

