Migrate VM-Based Graylog to Dockerized Version

Hi,

We currently maintain a Graylog cluster that runs on virtual machines. Our setup looks like this:

  • Elasticsearch cluster
  • MongoDB running on the Graylog machines themselves, not as a separate cluster.

We now want to run Graylog inside Docker containers. What we tried first was to create a container with Graylog installed and join it to the existing Graylog cluster. That fails because the MongoDB versions seem to differ and the nodes are not able to communicate with each other.

So we thought we would just create another Graylog cluster with the dockerized version, take a MongoDB dump from the existing cluster, and import the data into the newly created MongoDB. I think that can work, but what I am not sure about in this migration is the Elasticsearch data. Does Elasticsearch store information about the Graylog cluster? If so, it could be problematic because our new cluster would not be able to work with the current Elasticsearch. Can you clarify this for me?

Hello @guzelcihad

You almost have it.
It is advisable to ensure MongoDB is the same version before performing a mongodump & mongorestore.
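
A minimal sketch of that dump/restore, assuming the default graylog database name; the hostnames are placeholders for your old and new MongoDB instances:

# Dump the graylog database from the existing (VM) MongoDB
mongodump --host old-mongo.example.com --db graylog --out /tmp/graylog-dump

# Restore it into the new (dockerized) MongoDB
mongorestore --host new-mongo.example.com --db graylog /tmp/graylog-dump/graylog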

Again, ensure the versions are the same. What needs to happen is…

Creating an Elasticsearch Snapshot
Example:

Configure the elasticsearch.yml file with a repository path:

path.repo: ["/etc/elasticsearch/my_backup"]

Then execute the following to register the repository:

curl -X PUT "localhost:9200/_snapshot/my_repo?pretty" -H 'Content-Type: application/json' -d'
{
"type": "fs",
"settings": {
"location": "/etc/elasticsearch/my_repo"
}
}
'
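
To confirm the repository registered correctly, you can query it (a quick sanity check, not part of the original steps):

curl -X GET "localhost:9200/_snapshot/my_backup?pretty"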

Next, create the snapshot. To snapshot just one index:

curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true&pretty" -H 'Content-Type: application/json' -d '
{
"indices": "graylog_1",
"ignore_unavailable": true,
"include_global_state": false,
"metadata": {
"taken_by": "aaron",
"taken_because": "testing for community issue",
"date": "2021-04-09"
}
}
'

Or, to snapshot all indices at once:

curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true&pretty"
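
You can check the snapshot's state afterward:

curl -X GET "localhost:9200/_snapshot/my_backup/snapshot_1/_status?pretty"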

Copy your snapshot data to its destination (the registered repository path on the new cluster).

Restore

curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore?pretty"
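
If the target cluster already has open indices with the same names, close or delete them first, or restore under new names. For example (rename_pattern and rename_replacement are standard restore-body options; the values here are illustrative):

curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore?pretty" -H 'Content-Type: application/json' -d'
{
  "indices": "graylog_1",
  "rename_pattern": "graylog_(.+)",
  "rename_replacement": "restored_graylog_$1"
}
'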

NOTES:

Restart the Elasticsearch and Graylog services.
If an error occurs, check the indices:

ERROR [IndexRotationThread] Couldn’t point deflector to a new index
java.lang.IllegalArgumentException: [alias] is unsupported for [REMOVE_INDEX]

curl -XGET 'http://localhost:9200/_cat/indices?pretty=true'

If there are old indices, make sure to remove them. For example, if I restored graylog_1112 and there are indices named graylog_0 or graylog_1, you need to remove them like this:

curl -XDELETE localhost:9200/graylog_0

Should be good

That would be MongoDB; it holds all the metadata.

Elasticsearch holds/indexes the messages (logs). If you have a custom index template, then you need to copy that over to the new instance.
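
If you do have a custom template, one rough way to copy it between clusters (my_custom_template and the hostnames are placeholders; jq is used to unwrap the GET response):

# The GET response nests the template body under its name,
# so extract the inner object with jq before re-uploading it
curl -s "http://old-es:9200/_template/my_custom_template" | jq '.my_custom_template' > template.json

# Upload the template to the new cluster
curl -X PUT "http://new-es:9200/_template/my_custom_template" -H 'Content-Type: application/json' -d @template.json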


Hi @gsmith,

First of all thank you.

It is advisable to ensure MongoDB is the same version before performing a mongodump & mongorestore.

Regarding your advice: our MongoDB versions are not the same, but I took a dump and imported it into the new MongoDB successfully even though the versions don't match.

You gave me an idea about the Elasticsearch migration. Actually, I wasn't planning to migrate Elasticsearch to a new cluster. I just wanted to make sure that our new Graylog cluster can work with the existing Elasticsearch cluster. From your answer, if we don't have a custom index template, then we should have no problem working with the existing Elasticsearch, but for safety it is best to take a snapshot. Do you think I understood you right?

Hello,

I was referring to migrating while keeping a custom template intact, but since this seems to be a default installation, no worries. If there is no need for the old data, then just create the Elasticsearch container. Should be good. You may need to adjust the Graylog configuration file to connect to Elasticsearch. On the other hand, if you need the old data, you would have to point your docker-compose file at the old data directory.

Example:

elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2-amd64
    # image: opensearchproject/opensearch:1.3.2
    network_mode: bridge
    # data folder in share for persistence
    volumes:
      - es_data:/usr/share/elasticsearch/data  # <-- change this path if needed
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
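
For the Graylog container itself, the Elasticsearch and MongoDB addresses are typically passed as environment variables in the same compose file; a sketch, where the image tag and hostnames are examples to adjust for your setup:

graylog:
    image: graylog/graylog:4.3
    network_mode: bridge
    environment:
      - GRAYLOG_ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - GRAYLOG_MONGODB_URI=mongodb://mongo:27017/graylog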

I always take a backup (i.e., a snapshot or VM checkpoint) before I change or upgrade anything.
