I tried that. Creating the new index is easy, but the reindex takes a long time, and I guess after that it would be time for a forcemerge, blocking writes, and recalculating the index ranges. Scripting that would be pretty easy, but the load it puts on the ES cluster seems like overkill, so I’ll just keep a separate index set and delete the indices there when the time comes.
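For anyone who does want to script it, a rough sketch of the steps above with curl could look like this. The index names (`old-index`, `new-index`), the shard count, and the `localhost:9200` address are placeholders for illustration, and the Graylog range-rebuild endpoint may differ between versions, so treat this as an outline rather than a tested script:

```shell
# 1. Create the new index with the reduced shard count (placeholder settings)
curl -XPUT 'localhost:9200/new-index' -H 'Content-Type: application/json' -d '{
  "settings": { "index.number_of_shards": 1 }
}'

# 2. Reindex the old data into it -- this is the slow part
curl -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d '{
  "source": { "index": "old-index" },
  "dest":   { "index": "new-index" }
}'

# 3. Force-merge the new index down to a single segment
curl -XPOST 'localhost:9200/new-index/_forcemerge?max_num_segments=1'

# 4. Block further writes on the new index
curl -XPUT 'localhost:9200/new-index/_settings' -H 'Content-Type: application/json' -d '{
  "index.blocks.write": true
}'

# 5. Recalculate the index ranges in Graylog (System -> Indices in the UI,
#    or via the REST API; check the endpoint path for your Graylog version):
# curl -XPOST 'http://graylog:9000/api/system/indices/ranges/new-index/rebuild'
```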
Thanks for your help, anyway. And just to let you all know: having a couple of thousand fewer shards can easily be felt in the responsiveness of the system, so shrinking was worth it. The blog post about sizing an ES cluster, https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster, was really informative.
Especially useful were these two tips:
TIP: Small shards result in small segments, which increases overhead. Aim to keep the average shard size between a few GB and a few tens of GB. For use-cases with time-based data, it is common to see shards between 20GB and 40GB in size.
TIP: The number of shards you can hold on a node will be proportional to the amount of heap you have available, but there is no fixed limit enforced by Elasticsearch. A good rule-of-thumb is to ensure you keep the number of shards per node below 20 to 25 per GB heap it has configured. A node with a 30GB heap should therefore have a maximum of 600-750 shards, but the further below this limit you can keep it the better. This will generally help the cluster stay in good health.
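The second tip is just arithmetic, so a quick sanity check on your own nodes is easy. A minimal sketch (the function name is mine, not from the blog post):

```python
def shard_budget(heap_gb):
    """Return the (lower, upper) recommended shard count for one node,
    using the 20-25 shards per GB of heap rule of thumb."""
    return 20 * heap_gb, 25 * heap_gb

# The example from the blog post: a node with a 30 GB heap
low, high = shard_budget(30)
print(low, high)  # 600 750
```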