The alias points to multiple indices without one being designated as a write index


We’re getting indexing failures every week on an index set that has high traffic.

ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=no write index is defined for alias [xxx_deflector]. The write index may be explicitly disabled using is_write_index=false or the alias points to multiple indices without one being designated as a write index]]

Every time, there are two indices marked as the deflector. I can resolve the issue by removing the alias from one of the indices, but this keeps happening weekly.
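For context, the manual fix can be sketched as Elasticsearch API calls (assuming Elasticsearch on localhost:9200 and the alias name xxx_deflector from the error above; the index name xxx_123 is hypothetical):

```shell
# See which indices currently hold the alias (and their is_write_index flags)
curl -s 'localhost:9200/_cat/aliases/xxx_deflector?v'

# Remove the alias from the stale index so only the current one keeps it.
# "xxx_123" is a placeholder -- use the older index name from the output above.
curl -s -X POST 'localhost:9200/_aliases' \
  -H 'Content-Type: application/json' \
  -d '{"actions":[{"remove":{"index":"xxx_123","alias":"xxx_deflector"}}]}'
```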

Environmental information

  • CentOS 7
  • Graylog 4.0.8+6b8c55d
  • MongoDB 4.2.14
  • Elasticsearch 7.10.2 oss

Hello && Welcome

Have you seen this post?

If that post does not resolve your issue, could you show us the results of what happened?

Also, to help you further, greater detail about your environment would be appreciated, like extractors, pipelines, types of inputs used, index mapping, etc.


@gsmith Thank you for the quick response! I’ve seen the article above, and the steps in it did resolve my issue. However, the issue has been coming back weekly for the last 3 weeks.

Here are my current environment settings:

  1. 1 GELF TCP input and 1 Syslog TCP input
  2. 3 index sets
    Index set configuration:
    • 4 shards
    • 1 replica
    • 20000000 docs per index
    • Max # of Indices: 200
    • retention strategy: deletion

So far, we only have this issue with one of the index sets: the high-traffic one that is going through the index retention strategy (deletion).


By chance, are you monitoring metrics on your Graylog server for CPU usage, disk I/O, and memory?
Do you have a custom Index template?

For the high-traffic index set, have you tried adjusting the rotation from 20000000 docs per index to something time-based, like 1 day?
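To gauge how fast that index set fills up before changing the rotation, you can check per-index doc counts with the cat indices API (a sketch, assuming Elasticsearch on localhost:9200 and an index prefix of xxx_, which is hypothetical):

```shell
# Doc count and on-disk size per index, sorted by creation date
curl -s 'localhost:9200/_cat/indices/xxx_*?v&h=index,docs.count,store.size&s=creation.date'
```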


Could you post your whole Graylog log file or any configuration you have?
Have you tried to upgrade Graylog to version 4.1?

I haven’t tried changing the rotation to 1 day yet. Surprisingly, it has been working fine for at least a week now, which was not the case before. I will change the policy if it happens again. I would love to upgrade to 4.1 since it has native Prometheus support. Do you know whether going from 4.0.8 to 4.1 also requires upgrading MongoDB and Elasticsearch? Below are the CPU/memory/disk I/O graphs.



You do not have to upgrade MongoDB or Elasticsearch for Graylog 4.1.

Steps for CentOS 7

Stop the Graylog service

sudo systemctl stop graylog-server

Clean your repo

sudo yum clean all

Download the package

sudo rpm -Uvh

Perform the upgrade

sudo yum update graylog-server

Start Graylog

sudo systemctl start graylog-server

I would highly recommend you check out Graylog Changelog & Documentation first to make sure nothing will break your setup.

hope that helps

Thanks @gsmith. I’ve upgraded all my nodes to 4.1.7. I will let you know if the issue ever comes back, but so far so good (fingers crossed).

BTW, I’m also having another issue where all my nodes think they’re the master node. I have a 3-node cluster: graylog-01, graylog-02, and graylog-03. I only specified is_master in server.conf for graylog-01, yet the Graylog web interface shows all the nodes as master, and I see a notification about multiple master servers every time I restart any of the nodes.


That sounds like a configuration issue. By chance, did you set http_bind_address?
Also, only one GL node needs to be set as master:

graylog-01: is_master = true
graylog-02: is_master = false
graylog-03: is_master = false

EDIT: Sharing your GL configuration file will help us help you.
Hope that helps

Thanks @gsmith! That did it. I didn’t know that leaving is_master = false out of the configuration file would automatically make the node a master (it defaults to true).

Yes, I do have http_bind_address set, one per node:

http_bind_address = graylog-01:9000
http_bind_address = graylog-02:9000
http_bind_address = graylog-03:9000


Nice :slight_smile:. If the original issue comes back, it would probably help to show your ES config files and GL config files.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.