Primary shard allocation on a new node

I’m using Graylog 3.2.6 and Elasticsearch 6.8.
The size per shard is around 50 GB and there are 5 primary shards per index.
There are 4 indices.

I have added a new node to the Elasticsearch cluster.
If a new index is subsequently created in Graylog, its primary shards are created only on the new node.
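
For reference, allocation can be checked with the _cat shards API; the index pattern below is just a placeholder for the Graylog indices:

```
# List primaries (p) and replicas (r) per index and the node each one sits on
curl -s 'http://localhost:9200/_cat/shards/graylog_*?v&h=index,shard,prirep,state,node'
```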

I am experiencing an issue where Graylog’s output buffer fills up. Is there any way to spread newly created primary shards across all nodes, regardless of how many shards each node in the existing Elasticsearch cluster already holds?

Hello @junshoong

A primary shard does not really move to another node, per se. If Node 1 has all the primaries, then Node 2 holds replica copies of those primaries; in case Node 1 goes down, Node 2’s copies would take over as the primaries.

Another thing: try to keep shard sizes between 20 and 40 GB; anything much larger uses a lot of memory.

You could also try splitting the index, so you end up with, say, two shards on node 1, etc.

The split index API allows you to split an existing index into a new index, where each original primary shard is split into two or more primary shards in the new index.
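
A rough sketch of what that looks like on ES 6.8 follows; the index names are placeholders, the source index has to be made read-only first, and the new shard count must be a multiple of the original (5 → 10 in this example):

```
# 1. Block writes on the source index (required before splitting)
curl -XPUT 'http://localhost:9200/graylog_3/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.write": true}'

# 2. Split into a new index with twice as many primary shards
curl -XPOST 'http://localhost:9200/graylog_3/_split/graylog_3_split' \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"index.number_of_shards": 10}}'
```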

I haven’t done that myself, but I have shut down one of 3 Elasticsearch nodes and noticed my primaries landed on node 2, using this setting: discovery.zen.minimum_master_nodes. It’s been a while.
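
For what it’s worth, on a 3-node 6.x cluster that setting is quorum-based (it was removed in 7.x), so it would look something like this in elasticsearch.yml:

```
# elasticsearch.yml on each node: majority of master-eligible nodes (3 / 2 + 1 = 2)
discovery.zen.minimum_master_nodes: 2
```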

If you have the resources, you could increase the output buffer processor setting in the Graylog configuration file. Be sure you don’t overextend the thread count: adding up your input buffer, process buffer, and output buffer processor settings, the rule of thumb is that the total should equal the number of CPU cores you have.
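
As a sketch only, the relevant lines in Graylog’s server.conf look like this; the numbers below assume roughly a 12-core box and are examples, not recommendations:

```
# server.conf -- example values; the sum should roughly match your CPU core count
inputbuffer_processors = 2
processbuffer_processors = 6
outputbuffer_processors = 4

# messages sent to Elasticsearch per batch
output_batch_size = 1000
```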

Hey @junshoong

Just adding some more info for you. To move shards around I use an awesome open-source tool; I’ve been using it for a while now, since ES 6.8, and now I’m using it with OpenSearch 1.3 and 2.5. If you’re interested, it’s called Cerebro.

All I have to do is click a button.
Example: (Cerebro screenshot)

It’s kind of the best of both worlds.
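
Under the hood that button is essentially the cluster reroute API, so if you ever need to do it by hand, the equivalent call looks roughly like this (index, shard number, and node names are placeholders):

```
# Move shard 0 of graylog_3 from node-1 to node-2
curl -XPOST 'http://localhost:9200/_cluster/reroute' \
  -H 'Content-Type: application/json' \
  -d '{
    "commands": [
      { "move": { "index": "graylog_3", "shard": 0,
                  "from_node": "node-1", "to_node": "node-2" } }
    ]
  }'
```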

Thank you for your answer. I improved performance by increasing output_batch_size in the Graylog configuration.
I haven’t found a solution on the Elasticsearch side yet, but the following setting should help:

index.routing.allocation.total_shards_per_node
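
One way I’m considering to get that onto the indices Graylog creates is an extra index template layered over Graylog’s own (the template name, pattern, and the limit of 2 below are just placeholders for illustration), or setting it directly on existing indices:

```
# ES 6.x legacy template API -- applies to indices created from now on
curl -XPUT 'http://localhost:9200/_template/graylog-custom-allocation' \
  -H 'Content-Type: application/json' \
  -d '{
    "index_patterns": ["graylog_*"],
    "order": 10,
    "settings": { "index.routing.allocation.total_shards_per_node": 2 }
  }'

# Or apply it to existing indices directly
curl -XPUT 'http://localhost:9200/graylog_*/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.routing.allocation.total_shards_per_node": 2}'
```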

However, I’m not sure how it will work in practice, because I’m applying ILM, and when an index moves to the warm phase all 5 shards are supposed to end up on one node.

To reduce the load for now, I manually adjusted the allocation through Cerebro, which you mentioned, and things have stabilized a little.

