Indexes becoming read-only before deflector is moved

Hello,

After our upgrade from Elasticsearch 5.x to 6.x and Graylog 2.x to 2.5.1 (I can’t remember at the moment what version of Graylog we came from), we encountered a problem where indexes become read-only before the deflector is moved when a new index is created.

What seems to happen:
Graylog has been using index eku_fw_1009, and needs to create a new index called eku_fw_1010.
The new index gets created successfully.
Graylog attempts to repoint the deflector from eku_fw_1009 to eku_fw_1010, and gets an Elasticsearch exception. The index is apparently in a read-only state at this point; I'm not sure whether Graylog or Elasticsearch put it there.

2019-01-18T11:14:47.150-05:00 WARN [IndexRotationThread] Deflector is pointing to [eku_fw_1009], not the newest one: [eku_fw_1010]. Re-pointing. org.graylog2.indexer.ElasticsearchException: Couldn't switch alias eku_fw_deflector from index eku_fw_1009 to index eku_fw_1010

After this, we see messages showing that the index in question is currently in a read-only state and that Graylog can’t index messages. Thousands of these errors appear as it attempts to index messages.

2019-01-18T11:14:55.120-05:00 WARN [Messages] Failed to index message: index=<eku_fw_1009> id=<313dc441-1b3c-11e9-bdea-d89ef3264d11> error=<{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}>

At this point the new index does exist, but the deflector can’t be repointed.
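
For reference, a quick way to see exactly which block Elasticsearch has placed on an index is the standard index settings API (same host and index names as above; this is just a sketch, not something from our actual logs):

curl "localhost:9200/eku_fw_1009/_settings?pretty"

If a block has been applied, it shows up in the output under index.blocks (e.g. read_only or read_only_allow_delete).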

If we fix the read-only state of the index in question, Graylog can successfully move the pointer and starts writing log messages again.

Fixing the index:

curl -X PUT "localhost:9200/eku_fw_1009/_settings" -H 'Content-Type: application/json' -d '{ "index": { "blocks": { "read_only": "false" } } }'

After applying the fix, I see another message in the log showing it is going to re-point.

2019-01-18T11:14:57.150-05:00 WARN [IndexRotationThread] Deflector is pointing to [eku_fw_1009], not the newest one: [eku_fw_1010]. Re-pointing.

The Elasticsearch cluster is not out of disk space, and I didn’t see any error messages related to either of these indexes (eku_fw_1009 or eku_fw_1010). This occurs only on index sets we have created in addition to the "Default index set" Graylog uses. I couldn’t find posts from others having this kind of issue, so my belief is that the problem is "our problem" in some way.

While this is occurring, the web interface doesn’t indicate any issues with indexing even though logs are not being successfully written.

Anyone else experiencing this kind of thing?

This GitHub issue seems related, but describes the reverse of my issue.

Thanks!!
Dustin @ EKU

Did you monitor your Elasticsearch closely? Can you tell whether the cluster is slow in creating the index? If you have an issue similar to the one described in that GitHub issue, it would be helpful if you add how we can reproduce it.

Let me do some digging - I don’t know how slow would be a problem for Graylog, or how to measure how quickly ES creates new indexes.

Goals:
Time how long it takes Graylog to create indexes over a period of time.
Create/delete indexes outside of Graylog throughout the day, and time those creations (rough sketch below).
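
For the second goal, the plan is something along these lines (timing_test_1 is just a throwaway index name):

# create a scratch index and time how long it takes to be acknowledged
time curl -X PUT "localhost:9200/timing_test_1"
# then clean it up again
time curl -X DELETE "localhost:9200/timing_test_1"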

Does Graylog wait for the new index to be usable before it moves on, or is there a timeout value somewhere?

Using the head plugin, we created and deleted indexes several times. They were online and ready to use in less than a single second each time. My coworker is going to cycle indexes a few times and see how long that takes.

Is there a Graylog configuration option that would log more detail about when/what happens during the index creation?

Nope - just the code on GitHub.

Understood.

I pulled a copy of the source and am getting up to speed on how things work.

My belief is that Graylog should not be showing success in the GUI if it is not able to index messages. I am trying to locate where that happens so I can create a coherent bug report.

New information.

After seeing the problem start, I did our normal repair procedure:
set the read_only and read_only_allow_delete settings to false on the index we are currently writing to.
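
In practice that is the same kind of curl as earlier in the thread, just covering both settings (the index name varies depending on which index set is affected):

# clear both blocks on the active write index
curl -X PUT "localhost:9200/eku_fw_1009/_settings" -H 'Content-Type: application/json' -d '{ "index": { "blocks": { "read_only": "false", "read_only_allow_delete": "false" } } }'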

Typically we then see in the logs that Graylog was finally able to move the deflector from the old index to the new one and that messages are again being written to ES.

Today, something different happened. The newly created index was read-only BEFORE Graylog attempted to move the deflector to it. I had to fix both the source and destination indexes.

The current theory is that ES is making the decision to set these indexes as read-only - maybe a template problem?

Still working on it.

Do you have the option action.auto_create_index: false in your elasticsearch.yml?

That setting might help in this situation.

Thanks!!

I’ll take a look.

This setting was at its default (true) in our cluster, so we have made that change.

curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{"persistent": {"action.auto_create_index": "false" }}'
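
To double-check that the persistent setting actually took, the cluster settings can be read back:

curl "localhost:9200/_cluster/settings?pretty"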

We have removed old index data to ensure all ES nodes are well under the space requirements and have restarted everything with that setting in place. As of this moment, messages are flowing and we appear healthy.

Jan - thank you for your assistance!!

This problem turned out to be a new feature in ES 6.0 that we didn’t understand. As mentioned above, we were coming from ES 5.3 to 6.0.

In ES 5.3, these were the settings related to what happens as a node approaches being “full”.
https://www.elastic.co/guide/en/elasticsearch/reference/5.3/disk-allocator.html

Same page, but for 6.x:
https://www.elastic.co/guide/en/elasticsearch/reference/6.6/disk-allocator.html

To summarize our issue:
Our "hot" nodes were drifting above the cluster.routing.allocation.disk.watermark.flood_stage setting and ES was marking those indexes as read-only. We kept fixing the problem, and it would stay fixed until the flood_stage threshold was hit again.
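
For anyone who wants to see where their own cluster sits, the watermark values (including the defaults, which is all we had since we had never set them explicitly) can be pulled with something like:

# include_defaults shows values you never set explicitly; grep narrows it to the watermarks
curl -s "localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty" | grep watermark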

Our experience with 5.x was that as a node gets full, ES stops placing primary and replica shards on that node. The index could still be written to, and we just needed to make sure nodes didn’t fill all the way up before things rolled over. This was a rare occurrence.

The 6.x experience is different.

If any node hosting a primary or replica shard of the index goes above the cluster.routing.allocation.disk.watermark.flood_stage setting, that index is marked as read-only. Graylog writes a WARN to server.log saying that it can’t write to the index each time a write is attempted, and it will keep failing to write messages to that index until it’s time to cycle deflectors and create new indexes. Because the current index is already read-only, the deflector can’t be moved to the new index. Graylog then continues attempting to write to a read-only index, and the admin has to get involved to get things working again.

Depending on circumstances, the problem can’t fix itself: the curator process can’t move a read-only index, so the disk space issue can’t resolve on its own.
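
Even after disk space is freed up, the block still has to be cleared by hand. We did it per index as shown earlier, but it can also be done across all indexes in one shot:

# remove the flood-stage block from every index at once (after freeing disk space)
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": null }'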

I feel like Graylog needs to tell the end user a little more about this issue. The web interface shows messages being indexed correctly, but in fact ALL messages being written to an index with this issue are lost.

Thoughts:

  1. If the index Graylog is attempting to write messages to begins returning read-only error messages, Graylog should keep the failed messages in the disk-journal until the problem is resolved.
  2. These errors show up in server.log as WARN messages, but they indicate we have lost a log message. I know there are cases where a message simply couldn’t be indexed, but this seems more critical given the “why”.
  3. The Graylog admin should set cluster.routing.allocation.disk.watermark.flood_stage and the other disk-related settings carefully, and review them before each upgrade (we take this on the chin). An illustrative example follows this list; if others are interested I’ll post what we actually selected.
  4. The GUI should probably show the admin this particular problem is happening. There were clear messages in the log files for ES, but we hadn’t known ES would handle disk space issues in this manner and weren’t looking for the right logs.
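
As a purely illustrative example for point 3 (these happen to be the Elasticsearch defaults, not the values we ended up choosing), the relevant settings are applied like this:

# illustrative thresholds only - pick values that fit your own disk sizes
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{ "persistent": { "cluster.routing.allocation.disk.watermark.low": "85%", "cluster.routing.allocation.disk.watermark.high": "90%", "cluster.routing.allocation.disk.watermark.flood_stage": "95%" } }'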

In addition to all this, we discovered a different Elasticsearch bug that was drinking up a good portion of our space, but it’s not directly related to this problem so I’ll save it for a different post.

This forum is excellent - I appreciate the help and discussion.

I hope this helps someone else!!

Dustin

Please do post your settings, it would be nice to know!
