Is there a limit on how many shards Graylog can see or use?


1. Describe your incident:
Right now I have 3 Wazuh indexers running, for a total of 3k shards, and I'm running out of shards across multiple indices.

2. Describe your environment:

  • OS Information: Ubuntu 22.04

  • Package Version:

3. What steps have you already taken to try and solve the problem?
I've tried increasing the shard limit on my indexers to 1,400 each, for a total of 4.2k, but Graylog is still showing 3k shards.

4. How can the community help?
Is there a way I can increase the number of shards somehow?


Graylog reflects the total number of shards in the cluster: if each server has 1,000 shards, Graylog will report 3,000 shards for a three-server back end. The limit lives in the back end (the OpenSearch/Elasticsearch cluster), not in Graylog.
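Since the limit is enforced by the back end, you would check and raise it there, not in Graylog. A minimal sketch using the standard cluster settings API (the Wazuh indexer is OpenSearch-based; host, port, and auth are assumptions — adjust for your setup):

```shell
# Check the current per-node shard limit (defaults included):
curl -s "http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node"

# Raise the limit persistently, e.g. to 1400 shards per node:
curl -s -X PUT "http://localhost:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{"persistent": {"cluster.max_shards_per_node": 1400}}'
```

Note that `persistent` settings survive restarts, whereas anything set in `opensearch.yml` on only some nodes may not take effect cluster-wide.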

Some configuration guidelines that may help with three servers:

• 3, 6, or 9 primary shards per index (plus one replica shard for speed and redundancy)
• Shard size of 10 to 25 GB if fast search is required, otherwise up to 50 GB per shard
• Roughly 20 shards per GB of heap space (e.g. prod with 12 GB heap = 240 shards per node = 720 shards for the cluster; the default limit is 1,000 per node)
• Memory-to-disk-size ratio of 1:16
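The heap-based budget above is simple arithmetic; a quick sketch (heap size and node count are the example values from the list, not a recommendation for your cluster):

```python
# Rough shard budget following the ~20-shards-per-GB-heap rule of thumb.
heap_gb_per_node = 12     # example: 12 GB heap per node
shards_per_gb_heap = 20   # rule-of-thumb upper bound
node_count = 3            # three-server cluster

shards_per_node = heap_gb_per_node * shards_per_gb_heap
cluster_budget = shards_per_node * node_count

print(shards_per_node)   # 240
print(cluster_budget)    # 720
```

If your cluster already holds 3,000 shards on three such nodes, you are well past this budget, which is usually a sign to consolidate indices or shrink the number of primaries per index rather than raise the limit further.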

Sources: opster.com/number-of-shards


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.