Need to clear Elasticsearch after upgrade to Graylog 4

Description of your problem

During the upgrade of Elasticsearch from 5.6 -> 6.8 -> 7.14 I managed to wipe out all data. We can live with that; at this point I want to start fresh. I cleared everything from Elasticsearch, started it running on all ES nodes, and then went ahead with the Graylog upgrade from 3.x to 4.1. Version 4 is now running, but it is spewing errors trying to reach the old index:
Index not found for query: graylog_298. Try recalculating your index ranges.

Recalculating the index range from either the GUI or curl gives this result:

2021-08-25T23:34:29.950Z INFO [RebuildIndexRangesJob] Recalculating index ranges.
2021-08-25T23:34:29.950Z INFO [SystemJobManager] Submitted SystemJob [org.graylog2.indexer.ranges.RebuildIndexRangesJob]
2021-08-25T23:34:29.952Z INFO [RebuildIndexRangesJob] No indices, nothing to calculate.
2021-08-25T23:34:29.952Z INFO [SystemJobManager] SystemJob [org.graylog2.indexer.ranges.RebuildIndexRangesJob] finished in 2ms.

I’ve tried creating a new index set and setting it as the default, but something is still trying to go to graylog_298.

I want to be able to run Graylog with all the configurations I’ve developed for streams, alerts, etc., but with new data. I’m willing to start from scratch with Elasticsearch again if needed. How can I do this?

Hello,

I need to ask a couple of questions.
How did you manage to wipe all your data from upgrading Elasticsearch? I’m just curious.
What documentation are you using for the Graylog upgrade process?

Are you aware of this?

That would be this section.

The next option would be to rotate your indices (same section). This would be the preferred way.

You can try to execute these at your own risk.

Check Elasticsearch health.

curl -XGET http://localhost:9200/_cluster/health?pretty=true

Check shards to see what’s going on.

curl -XGET http://localhost:9200/_cat/shards

List your indices.

curl -s http://localhost:9200/_cat/indices

How to delete an index.

curl -X DELETE "localhost:9200/my-index-000001?pretty"
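
Putting those pieces together, here is a rough cleanup sketch (assumptions on my part: Elasticsearch on localhost:9200 and index names starting with graylog_). Review the listed names before deleting anything, since a blindly-run delete loop is exactly what caused this mess in the first place.

```shell
ES="http://localhost:9200"

# Helper: pull the index-name column (3rd field) out of _cat/indices output.
index_names() {
  awk '{print $3}'
}

# First, only list what would be deleted...
curl -s "$ES/_cat/indices/graylog_*" | index_names

# ...then, after reviewing the list above, delete each index.
for idx in $(curl -s "$ES/_cat/indices/graylog_*" | index_names); do
  curl -s -X DELETE "$ES/$idx?pretty"
done
```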

Hope that helps


gsmith Regular
August 26

Hello,

I need to ask a couple of questions.
How did you manage to wipe all your data from upgrading Elasticsearch? I’m just curious.

I did something stupid. After adding a third ES node I somehow wound up with an entire set of unassigned shards. In researching how to resolve unallocated shards I came across a suggested command that I didn’t read fully: it looked for unallocated shards and then deleted the index that contained them. Since we had one unallocated shard from each index it deleted everything.

What documentation are you using for the Graylog upgrade process?

Since Elasticsearch was already hosed, I was following the documentation for a fresh install:

https://docs.graylog.org/en/4.1/pages/installation/os/ubuntu.html

tgarons:

upgrade of elasticsearch from 5.6->6.8->7.14

Are you aware of this?

I was not aware of this. That isn’t mentioned in the link I referenced above. Do I need to downgrade?

The only things I want to preserve now are the streams, alerts, and plug-in configuration that I previously set up. I can start all over again with Elasticsearch if necessary.

That’s a tough question. In my experience it’s not a good idea to downgrade Elasticsearch. If you have to do that, I might even consider starting over. You could also just leave it alone, because I’ve seen some community members using 7.14 without problems. This would be up to you. I personally would try to make it work, but just beware.

You could create a content pack like so:

Select what you want

Click NEXT

Then Create & Download

Keep that file in a safe spot, then upload it when you’re all done.
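
If you prefer the command line, the downloaded pack can (as far as I recall) also be uploaded back through the REST API instead of the GUI. This is only a sketch: the /system/content_packs endpoint name is from memory and should be confirmed in your server’s API browser, and admin:password and content-pack.json are placeholders.

```shell
GRAYLOG="http://localhost:9000/api"

# Build the upload URL for a given API base.
upload_url() { printf '%s/system/content_packs' "$1"; }

# Upload the pack you downloaded earlier (skip if the file isn't here).
if [ -f content-pack.json ]; then
  curl -s -u admin:password -X POST \
    -H "Content-Type: application/json" -H "X-Requested-By: cli" \
    -d @content-pack.json "$(upload_url "$GRAYLOG")"
fi
```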

I personally would see if I can get the index correct before doing all that.
All your metadata from stream configurations, etc. is in your MongoDB.

Hope that helps

Well, I am a newbie…

but if you deleted your index set on Elastic side, but Graylog still thinks it exists…
… isn’t the solution simply to delete and re-create the index set on Graylog’s side?

PS: I am also on ES 7.14. So far the only problem is occasionally wrong information about index ranges, but it seems to be only a visual problem. Indexing and searching work fine.

nisow95612
August 26

Well, I am a newbie…

but if you deleted your index set on Elastic side, but Graylog still thinks it exists…
… isn’t the solution simply to delete and re-create the index set on Graylog’s side?

I am looking for a solution like that. As you say, the problem is on the Graylog side: it still thinks graylog_298 exists. The problem with deleting the index set is that it is the default index set. I can’t delete it unless I create another index set, and if I do that (create another set and make it the default), when I try to delete the old one I get a warning that my couple of dozen streams are tied to it.

Ah, right, each stream has its index set configured explicitly. I forgot about that.
You can change the assigned index set under More Actions → Edit stream, but there is no mass-update option in the GUI.
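
Lacking a mass-update in the GUI, a rough script over the REST API might do it. Treat this as a hedged sketch only: the /streams endpoints and the X-Requested-By header are from memory and should be verified in your version’s API browser, and AUTH and NEW_INDEX_SET_ID are placeholders you must fill in.

```shell
GRAYLOG="http://localhost:9000/api"
AUTH="admin:password"
NEW_INDEX_SET_ID="replace-with-new-index-set-id"

# Crude id extraction from the /streams JSON; `jq -r '.streams[].id'` is
# cleaner if you have jq. Note this regex matches every 24-hex id in the
# payload (stream rules included), so review the list before updating.
extract_ids() {
  grep -o '"id":"[a-f0-9]\{24\}"' | cut -d'"' -f4 | sort -u
}

for id in $(curl -s -u "$AUTH" "$GRAYLOG/streams" | extract_ids); do
  echo "Updating stream $id"
  curl -s -u "$AUTH" -X PUT \
    -H "Content-Type: application/json" -H "X-Requested-By: cli" \
    -d "{\"index_set_id\":\"$NEW_INDEX_SET_ID\"}" \
    "$GRAYLOG/streams/$id"
done
```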

The solution turned out to be pretty simple: go to System -> Indices -> Default index set and then Maintenance -> Rotate active write index.
Everything seems to be working now.

I was assuming that might be the case. One of my servers had this problem; I just rotated it also.


Yeah, you might have to go to each stream/index.

If you want a good look at your indices on the Elasticsearch side:

curl -X GET --netrc "MyESserverName:9200/_cat/indices/*?v&s=index&pretty"

  • --netrc refers to the ~/.netrc file, which holds the credentials for accessing Elasticsearch
  • The operative piece in here is the * for all indices; you could put gl* for all indices starting with gl
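
For reference, a ~/.netrc entry for that --netrc flag looks roughly like this (hostname and credentials are placeholders, and the file should be readable only by you, e.g. chmod 600):

```
machine MyESserverName
login admin
password changeme
```
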

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.