We have deployed a Graylog cluster with two graylog-server nodes (one master node and one slave node), one MongoDB instance, and an Elasticsearch cluster.
When we shut the master node down, the other Graylog node did not become the master automatically, and our alerts stopped working.
To remove this single point of failure, we tried starting two master nodes at the same time, but according to the documentation and the log output of the second master node, that is not supported.
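For reference, this is roughly how we assign the roles in each node's `server.conf` (a minimal sketch; everything except the `is_master` key is omitted):

```ini
# server.conf on node 1 (the designated master)
is_master = true

# server.conf on node 2 (the slave)
is_master = false
```

As far as we can tell, the only failover available to us right now is to manually set `is_master = true` on the surviving node and restart graylog-server, which is not automatic.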
Is this normal? What should we do when the master node is down? Do alerts stop working while the master is down? It seems that the master node is a single point of failure.
The second question is: what happens to alerting when we deploy more than one master node? Will it still run correctly?
Are there any plans to add a mechanism for electing the Graylog master node automatically? Obviously it would have to avoid the split-brain problems that can happen in Elasticsearch. I suppose that would mean keeping the number of nodes odd, with a minimum of three, so that at most one side of a network partition can ever hold a majority?
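Just to illustrate the majority arithmetic I have in mind (a sketch of the general quorum rule, not Graylog code; the function names are made up):

```python
def quorum_size(cluster_size: int) -> int:
    """Smallest strict majority of the cluster."""
    return cluster_size // 2 + 1

def may_act_as_master(visible_nodes: int, cluster_size: int) -> bool:
    """A node may only claim mastership if it can see a strict majority,
    which guarantees at most one partition can elect a master."""
    return visible_nodes >= quorum_size(cluster_size)

# With 3 nodes, a partition splits 2/1: only the 2-node side has quorum.
assert may_act_as_master(2, 3) and not may_act_as_master(1, 3)

# With 2 nodes (our current setup), a 1/1 split leaves neither side
# with a majority, so no node could ever be elected master.
assert not may_act_as_master(1, 2)
```

With an even node count, a symmetric split can leave the cluster with no electable master at all, which is why I assume an odd count of at least three would be required.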