Shard election issue

Hello everyone,

I have a cluster with 3 Graylog servers (Elasticsearch and MongoDB run on each server). I configured 3 shards and 1 replica, which is a minimal setup for now. In the Graylog index view I see all shards are green, and the idea is 1 shard per node.
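For reference, the same settings can be checked directly at the Elasticsearch level. This is just a minimal sketch with Python and requests; the node address and index name are examples from my setup, adjust as needed:

```python
import requests

ES = "http://node1:9200"   # any Elasticsearch node in the cluster
INDEX = "graylog_0"        # example: the currently active Graylog index

# Overall cluster status (should be "green" when all replicas are assigned)
health = requests.get(f"{ES}/_cluster/health").json()
print(health["status"], "-", health["active_shards"], "active shards")

# Shard/replica count configured on the index
settings = requests.get(f"{ES}/{INDEX}/_settings").json()
idx = settings[INDEX]["settings"]["index"]
print("shards:", idx["number_of_shards"], "replicas:", idx["number_of_replicas"])
```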

When 1 node goes down, the replica shard of the offline server automatically becomes the primary shard, which is good. But when the server comes back, the promoted replica stays primary, while the previous primary shard is now a replica. In that case I have, for example, 2 primary shards on node1 and 1 on node2, while both shards on node3 are replicas. Is this normal, or shouldn't it be like that?

To illustrate:
BEFORE NODE3 WENT DOWN:
Node1 - S0 (primary - node1), S0 (replica - node2)
Node2 - S0 (primary - node2), S0 (replica - node3)
Node3 - S0 (primary - node3), S0 (replica - node1)

AFTER NODE3 COMES BACK UP:
Node1 - S0 (primary - node1), S0 (replica - node2)
Node2 - S0 (primary - node2), S0 (replica - node3)
Node3 - S0 (primary - node1), S0 (replica - node3)

By this logic, Node3 will never get any data.
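A quick way to see where the primaries and replicas actually live is the _cat/shards API. A small sketch with Python's requests (the node address is an assumption, same as above):

```python
import requests

ES = "http://node1:9200"   # any node answers for the whole cluster

# Each row is one shard copy: "p" = primary, "r" = replica
for row in requests.get(f"{ES}/_cat/shards/graylog_0", params={"format": "json"}).json():
    kind = "primary" if row["prirep"] == "p" else "replica"
    print(f"shard {row['shard']}: {kind} on {row['node']} ({row['state']})")
```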

Thanks in advance

nope - sharding does not work like you think.

The data is read and written to all nodes. Always.
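Roughly: every write goes to the primary copy of one shard and is copied to its replica, and reads can be served from either copy, so any node can answer queries for the whole data set. A small sketch to convince yourself (hypothetical test index, node addresses are assumptions):

```python
import requests

NODE1 = "http://node1:9200"
NODE3 = "http://node3:9200"

# Write a test document through node1 (refresh=true makes it searchable immediately)
requests.put(f"{NODE1}/shard_test/_doc/1",
             params={"refresh": "true"},
             json={"message": "hello from node1"})

# Read it back through node3 - the cluster routes the search to whichever
# copy (primary or replica) of the shard is available
hits = requests.get(f"{NODE3}/shard_test/_search",
                    params={"q": "message:hello"}).json()["hits"]["hits"]
print(hits[0]["_source"])   # {'message': 'hello from node1'}
```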

So the current behavior of my setup is normal?

So if Node1 goes down later and the replica shard on Node3 is promoted, will I still have all the data?

Please read:

https://www.elastic.co/guide/en/elasticsearch/reference/6.2/_basic_concepts.html#getting-started-shards-and-replicas

EDIT:

Thank you for the articles, they are very good.

I misunderstood the Graylog web UI and now everything is clear. Thanks.
