Testing Elasticsearch Cluster


(Greg Smith) #1

Hello All,
I have a problem with my Elasticsearch cluster.
When I shut down ES node #1 (testing failover), the Graylog + MongoDB cluster is unable to use the index.
NOTE: At this point I do not have a load balancer in place for the Graylog + MongoDB servers. I was testing redundancy on the Elasticsearch servers.

Environment:
Total of 6 CentOS 7.3 servers, minimal install.
3 servers with Graylog 2.3 and MongoDB 3.4, clustered
3 servers with Elasticsearch 5.6.4, clustered

Graylog installed/configured as follows:
http://docs.graylog.org/en/2.3/pages/configuration/multinode_setup.html

MongoDB replica set installed/configured as follows:
https://docs.mongodb.com/manual/tutorial/deploy-replica-set/

Elasticsearch cluster installed/configured as follows:
https://www.elastic.co/guide/en/elasticsearch/reference/5.4/setup.html

[Elasticsearch Node #1 configuration]
cluster.name: graylog
node.name: lab-elastic-001.enseva-labs.net
node.master: true
node.data: false
network.host: 10.200.6.95
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.200.6.95", "10.200.6.96", "10.200.6.97"]
discovery.zen.minimum_master_nodes: 2

[Elasticsearch Node #2 configuration]
cluster.name: graylog
node.name: lab-elastic-002.enseva-labs.net
node.master: true
node.data: true
network.host: 10.200.6.96
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.200.6.95", "10.200.6.96", "10.200.6.97"]
discovery.zen.minimum_master_nodes: 2

[Elasticsearch Node #3 configuration]
cluster.name: graylog
node.name: lab-elastic-003.enseva-labs.net
node.master: false
node.data: true
network.host: 10.200.6.97
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.200.6.95", "10.200.6.96", "10.200.6.97"]
discovery.zen.minimum_master_nodes: 2
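
The failure mode in the configuration above can be worked through with a short sketch (my own illustration, not part of the thread): Elasticsearch's recommended quorum is floor(master_eligible / 2) + 1, and here only nodes #1 and #2 are master-eligible.

```python
# Sketch: why this layout cannot survive the loss of a master-eligible node.

def quorum(master_eligible: int) -> int:
    """Recommended discovery.zen.minimum_master_nodes for a given
    number of master-eligible nodes: floor(n / 2) + 1."""
    return master_eligible // 2 + 1

# In the configs above, only nodes #1 and #2 are master-eligible.
master_eligible = 2
print(quorum(master_eligible))  # 2 -- matches minimum_master_nodes: 2

# Shut down either master-eligible node and only one remains,
# which is below the configured quorum of 2: no master can be
# elected, so the cluster stops accepting the index operations
# Graylog needs.
survivors = master_eligible - 1
print(survivors >= quorum(master_eligible))  # False
```

With three master-eligible nodes instead, quorum is still 2, so any single node can fail and an election still completes.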

[Graylog+MongoDb #1 Configuration]
is_master = true
rest_listen_uri = http://10.200.6.92:9000/api/
web_listen_uri = http://10.200.6.92:9000/
elasticsearch_hosts = http://10.200.6.95:9200,http://10.200.6.96:9200,http://10.200.6.97:9200
elasticsearch_index_prefix = graylog
elasticsearch_template_name = graylog-internal
mongodb_uri = mongodb://10.200.6.92:27017,10.200.6.93:27017,10.200.6.94:27017/graylog?replicaSet=replica01

[Graylog+MongoDb #2 Configuration]
is_master = false
rest_listen_uri = http://10.200.6.93:9000/api/
web_listen_uri = http://10.200.6.93:9000/
elasticsearch_hosts = http://10.200.6.95:9200,http://10.200.6.96:9200,http://10.200.6.97:9200
elasticsearch_index_prefix = graylog
elasticsearch_template_name = graylog-internal
mongodb_uri = mongodb://10.200.6.92:27017,10.200.6.93:27017,10.200.6.94:27017/graylog?replicaSet=replica01

[Graylog+MongoDb #3 Configuration]
is_master = false
rest_listen_uri = http://10.200.6.94:9000/api/
web_listen_uri = http://10.200.6.94:9000/
elasticsearch_hosts = http://10.200.6.95:9200,http://10.200.6.96:9200,http://10.200.6.97:9200
elasticsearch_index_prefix = graylog
elasticsearch_template_name = graylog-internal
mongodb_uri = mongodb://10.200.6.92:27017,10.200.6.93:27017,10.200.6.94:27017/graylog?replicaSet=replica01
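
For a quick failover test, each entry in `elasticsearch_hosts` can be probed directly. A minimal sketch (my own helper, not part of Graylog; it assumes the nodes answer plain HTTP on 9200 as configured above):

```python
# Sketch: probe each node listed in Graylog's elasticsearch_hosts setting.
from urllib.request import urlopen


def parse_hosts(elasticsearch_hosts: str) -> list[str]:
    """Split Graylog's comma-separated elasticsearch_hosts value."""
    return [h.strip() for h in elasticsearch_hosts.split(",") if h.strip()]


def cluster_health(host: str, timeout: float = 2.0) -> str:
    """Fetch the _cluster/health JSON from one node; raises on failure."""
    with urlopen(f"{host}/_cluster/health", timeout=timeout) as resp:
        return resp.read().decode()


# Usage against the lab nodes would look like:
#   hosts = parse_hosts("http://10.200.6.95:9200,http://10.200.6.96:9200,http://10.200.6.97:9200")
#   for host in hosts:
#       try:
#           print(host, cluster_health(host))
#       except OSError as err:
#           print(host, "unreachable:", err)
```

Running this while a node is shut down shows immediately whether the surviving nodes still report a usable cluster status.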

I thought that when the first ES master node goes down, it would fail over to the second master-eligible node?
Or do I have a configuration error preventing this? Any help would be appreciated.
Thank you


(Jan Doberstein) #2

You should re-read: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-zen.html

The discovery.zen.minimum_master_nodes sets the minimum number of master eligible nodes that need to join a newly elected master in order for an election to complete and for the elected node to accept its mastership. The same setting controls the minimum number of active master eligible nodes that should be a part of any active cluster. If this requirement is not met the active master node will step down and a new master election will begin.

If all three Elasticsearch servers have the same resources, configure them the same way and let Elasticsearch do the magic, unless you know exactly what you are doing.
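
A symmetric layout along those lines (cluster name and IPs taken from the thread; this is a sketch, not a tested config) would make all three nodes master-eligible data nodes:

```yaml
# Identical on all three nodes except node.name and network.host
cluster.name: graylog
node.master: true
node.data: true
network.host: 10.200.6.95   # .95 / .96 / .97 per node
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.200.6.95", "10.200.6.96", "10.200.6.97"]
discovery.zen.minimum_master_nodes: 2   # quorum of 3 master-eligible nodes
```

With three master-eligible nodes and minimum_master_nodes: 2, any single node can fail and the remaining two still form a quorum.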

Jan


(Greg Smith) #3

@jan
Thank you, your advice worked.

The ES nodes #1 and #2 configured as;
node.master: true
node.data: false

The ES node #3 configured as;
node.master: false
node.data: true

I changed the following line from
discovery.zen.minimum_master_nodes: 2
to
discovery.zen.minimum_master_nodes: 1

I shut down ES node #1 (master).
The second master took over immediately; I checked the web interface and it was still connected.
I must have missed that simple setting. Thanks again for pointing it out,
much appreciated.


(Jan Doberstein) #4

@gsmith

Just to have it said: you now have only one data node, which holds all the data, while the two master nodes will not receive any data and only manage the cluster.

If your Elasticsearch node #3 goes down, you will lose all your data.
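
The same point can be sketched numerically (my own illustration): Elasticsearch never allocates a replica shard on the same node as its primary, so replica copies are capped by the number of data nodes minus one.

```python
# Sketch: replica copies that can actually be allocated, assuming the
# default rule that a replica never shares a node with its primary.

def allocatable_replicas(data_nodes: int, configured_replicas: int) -> int:
    """Replica copies Elasticsearch can assign given the data node count."""
    return min(configured_replicas, max(data_nodes - 1, 0))

# One data node (the setup above): replicas stay unassigned, so losing
# that node loses every copy of the data.
print(allocatable_replicas(data_nodes=1, configured_replicas=1))  # 0

# Three data nodes: one replica per shard fits, and a single node
# failure no longer destroys data.
print(allocatable_replicas(data_nodes=3, configured_replicas=1))  # 1
```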


(Greg Smith) #5

@jan
Yes, I had to re-read some more of the link you provided.
I'm going to increase my ES nodes from 3 to 5 (two masters and three data nodes). I'm understanding this better now. Thank you for your help along the way.


(system) #6

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.