Elasticsearch shards exhausted

1. Describe your incident:
ElasticsearchException[Elasticsearch exception [type=validation_exception, reason=Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;]]
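For context, the cluster's current shard usage and the configured ceiling can be read directly from the Elasticsearch API. A sketch, assuming Elasticsearch is reachable on localhost:9200 (adjust host/port for your environment):

```shell
# Show how many shards are currently open in the cluster:
curl -s 'http://localhost:9200/_cluster/health?pretty' | grep shards

# Show the configured per-node ceiling (defaults to 1000):
curl -s 'http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty' \
  | grep cluster.max_shards_per_node
```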

2. Describe your environment:

  • OS Information:
    RHEL 8.6

  • Package Version:
    Graylog 4.2.10+37fbc90

  • Service logs, configurations, and environment variables:

3. What steps have you already taken to try and solve the problem?
Trying to increase the shard limit.

4. How can the community help?
Hi, I understand that I am out of shards, but I am not sure what to do about it. I was looking into increasing the shard limit, but I am now thinking I should change my indices instead.
Currently I have about 12 indices, all set to rotate daily and retain for 90 days.

My first question is: what does the community recommend for rotation strategies? Should I be rotating less often? That would result in fewer shards… correct?

thanks for your input


There was a similar post here not too long ago.

By default Graylog creates four shards per index, which gives it immediate compatibility with a clustered environment. If you aren’t clustered, you don’t need 4. The info in the post will help more; ask away if you need more clarification! :smiley:
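For reference, that default of 4 can be changed per index set in the GUI (System > Indices > edit the index set), or globally for newly created index sets via the server config. A sketch, assuming the usual package install path on RHEL:

```
# /etc/graylog/server/server.conf
# Default shard count applied to newly created index sets
# (per-index-set values set in the GUI override this):
elasticsearch_shards = 1
```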

will do, thanks for pointing me in the right direction

Ah, so I read everything over and I think I understand. I see that by default an index is created with 4 shards. I am not quite clear on how I get to 1000, though; if I have 12 indices × 4 shards/index… is that per rotation period?

In any event, how do I change this? If I edit an index set and change it from 4 shards to 1, will the system do whatever is necessary to reduce the shards? Do I have to do any cleanup after?
thanks again

Yup! So it’s 12 × (number of indices currently saved per index set) × 4
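To make that arithmetic concrete, a sketch using the numbers from this thread (daily rotation with 90-day retention keeps roughly 90 indices per set; in practice Elasticsearch refuses new indices once the 1000 cap is hit, which is exactly the error above):

```shell
# Shard math for 12 index sets, daily rotation, 90-day retention,
# and Graylog's default of 4 shards per index:
index_sets=12
indices_per_set=90      # daily rotation x 90 days retained
shards_per_index=4
echo $((index_sets * indices_per_set * shards_per_index))   # 4320 -- far over the 1000 limit

# Same retention with weekly rotation (~13 indices per set) and 1 shard each:
echo $((12 * 13 * 1))                                       # 156 -- comfortably under
```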

Editing the index set and reducing the shards will only affect future indices on rotation; changing your index set retention to keep fewer indices will eliminate some of the ones you have currently. You COULD create an Elasticsearch cluster to take on the shards, or for that matter raise the maximum number of shards allowed… though that last one is not recommended per Elasticsearch…

So, if I am understanding you correctly: if I reduce the shards from 4 to 1, new indices will use fewer shards, but existing indices will keep the 4 shards they were created with?

So, how do I free up shards for new indices if I am already maxed at 1000?


I answered that with my thoughts in the previous post. Possible options:

  1. Reduce your index retention in one or more index sets, and Graylog will automatically clean up the older indices so that you fit within the new, reduced retention.

  2. Move from a single instance to an Elasticsearch cluster and redistribute the shards.

  3. You can increase the shard limit (here is a post on that, with more detail on how to do it at the end). Increasing shard limits is not recommended.
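For completeness on option 3: the limit is a dynamic cluster setting, so it can be raised through the settings API without a restart. A sketch (again, not recommended per Elastic, since it treats the symptom rather than the cause; assumes Elasticsearch on localhost:9200 and an example value of 2000):

```shell
# Raise the per-node shard ceiling -- NOT recommended; prefer reducing
# shards per index or retention instead:
curl -X PUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node": 2000}}'
```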

Is it also the case that if I change the rotation from daily to weekly, I would reduce the shards used as well (on new indices)?

Hello @tonyg
chiming in,

Large indices can cause issues down the line, depending on how much resource you have.
Also, by chance, do you use index replicas?

Hi @gsmith, I think I saw that index sizes between 20 and 50 GB are usually OK. Some of my indices have very little activity, so I changed those to rotate weekly; the big ones I left daily.
As for replicas, that is set to 0.

BTW, is there a way to completely empty an index in the GUI? Some of my indices I can afford to dump to free up shards. Do I have to rm them in the OS, or can I use the GUI to clear them out?

thanks again


You can delete each index manually in the GUI by viewing each index you want to delete and clicking the red “Delete Index” button. It is preferable to do it in the GUI so Graylog registers it, but you can do it via Elasticsearch if it is a large number (you would then “Recalculate index ranges”).
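If you do go the Elasticsearch route for bulk deletes, a sketch (index names below are hypothetical examples; assumes Elasticsearch on localhost:9200):

```shell
# List the indices Graylog has created, to pick which ones to drop:
curl -s 'http://localhost:9200/_cat/indices'

# Delete specific indices directly in Elasticsearch:
curl -X DELETE 'http://localhost:9200/graylog_42'
curl -X DELETE 'http://localhost:9200/graylog_43'
# Afterwards, in the Graylog GUI: System > Indices > (index set) >
# Maintenance > "Recalculate index ranges" so Graylog notices they are gone.
```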


@tmacgbay you have been a big help on this…thanks!

