Migrating to new server, maintaining config but with new empty ES back end

We are trying to migrate one of our systems to different hardware. I want to maintain the same config but create a new, empty default ES index set. I can do a mongodump and restore, but once I’ve done this, how do I prompt the system to create a new default index set on startup and re-reference the objects already in Mongo with the fresh default index set?

I’m using the official Graylog Docker image.
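For reference, a minimal sketch of the dump-and-restore step I have in mind, assuming the Graylog database is called graylog and MongoDB listens on localhost (paths and names are placeholders, adjust to your setup):

# on the old host: dump only the Graylog database
mongodump --host localhost --db graylog --out /tmp/graylog-dump

# on the new host: restore it into the empty MongoDB
mongorestore --host localhost --db graylog /tmp/graylog-dump/graylog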

The data is stored in Elasticsearch - on startup, Graylog checks whether the indices are present or not.

This is totally unsupported by Graylog and not an official recommendation:

  • import your old MongoDB data
  • drop the index_ranges and index_sets collections
  • start Graylog

Missing data will be created
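A minimal mongo shell sketch of those steps, assuming the restored database is called graylog (unsupported territory, so rehearse it in a dev environment first):

// switch to the restored Graylog database and drop the two collections;
// Graylog creates a fresh default index set on the next start
use graylog
db.index_ranges.drop()
db.index_sets.drop()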

Thanks Jan, that’s a great help! :grinning: I now have some more detailed questions.

If I wanted to ignore the ‘inputs’ from the restored MongoDB, would I simply drop the inputs collection? If so, would the following config parameters be observed at first startup

GRAYLOG_CONTENT_PACKS_AUTO_LOAD
GRAYLOG_CONTENT_PACKS_LOADER_ENABLED
GRAYLOG_CONTENT_PACKS_DIR

so I could import some predefined inputs using content packs?
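If dropping the restored inputs is the way to go, I imagine that step would look something like this in the mongo shell (same graylog database assumption as above, and only after testing in a dev environment):

// remove the restored inputs so none of the old ones are started on the new system;
// predefined inputs can then be imported via a content pack instead
use graylog
db.inputs.drop()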

Basically, I would like all of the following parameters to supersede any config that might be detailed in the restored MongoDB (an example excerpt follows the list)

GRAYLOG_MESSAGE_JOURNAL_ENABLED
GRAYLOG_WEB_ENABLE
GRAYLOG_ELASTICSEARCH_MAX_NUMBER_OF_INDICES
GRAYLOG_ELASTICSEARCH_REPLICAS
GRAYLOG_ELASTICSEARCH_SHARDS
GRAYLOG_ELASTICSEARCH_MAX_DOCS_PER_INDEX
GRAYLOG_IS_MASTER
GRAYLOG_PASSWORD_SECRET
GRAYLOG_ROOT_PASSWORD_SHA2
GRAYLOG_ELASTICSEARCH_CLUSTER_NAME
GRAYLOG_CONTENT_PACKS_AUTO_LOAD
GRAYLOG_CONTENT_PACKS_LOADER_ENABLED
GRAYLOG_CONTENT_PACKS_DIR
GRAYLOG_MONGODB_URI
GRAYLOG_ELASTICSEARCH_HOSTS
GRAYLOG_WEB_ENDPOINT_URI
GRAYLOG_TRUSTED_PROXIES
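My understanding is that the official Docker image translates GRAYLOG_-prefixed environment variables into the corresponding lowercase server.conf settings (e.g. GRAYLOG_IS_MASTER becomes is_master), so an environment excerpt along these lines should be picked up at startup; whether a given setting actually wins over what is already stored in the restored MongoDB presumably depends on the setting itself. The values below are placeholders, not recommendations:

GRAYLOG_IS_MASTER=true
GRAYLOG_ELASTICSEARCH_SHARDS=4
GRAYLOG_ELASTICSEARCH_REPLICAS=0
GRAYLOG_PASSWORD_SECRET=<password secret>
GRAYLOG_ROOT_PASSWORD_SHA2=<sha256 of the admin password>
GRAYLOG_MONGODB_URI=mongodb://mongo:27017/graylog
GRAYLOG_ELASTICSEARCH_HOSTS=http://elasticsearch:9200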

Thanks in advance Jan.

At this point, why not simply build a completely new environment? :slight_smile: Most, if not all, of those settings can be set in the server configuration file.

Perhaps you could backpedal a few steps and explain more about the situation you’re in and what your goal is. Maybe you’re overthinking things a little bit :slight_smile:

Hi Tess, we are actually building a completely new environment. We are migrating from on-prem to AWS, utilising many of their managed services, i.e. ECS, ELB, Elasticsearch, EFS, S3, Route 53, SNS, Auto Scaling, etc. I have already built an automated deployment mechanism using CloudFormation and other tools, whereby I can stand up a whole Graylog deployment with a few commands.

However, when migrating we would like to preserve many of the existing Graylog configurations, such as users, alerts, streams, and dashboards. I’ve investigated creating a content pack and importing it into the new system; however, content packs don’t support items such as users and alerts. I’m not concerned about migrating any of the actual ES data, as the old system will still be available for a period if users want to search historical messages.


Thanks for sharing Steven! Yeah, that totally makes sense :slight_smile:


Please just try what you want to do in a dev environment. As I have written, you are in a 100% unsupported area and you’ll need to find the way yourself.

I managed to get the migration working the way I wanted in the end.

I did have to drop all of the following collections (a mongo shell sketch of the drop commands follows the stream update below).

system_messages
inputs
index_ranges
index_sets
index_failures
sessions
nodes
cluster_config
cluster_events

I also had to modify all streams to reference the new default index set:

// re-point every stream from the old default index set to the new one
// (the ObjectId strings are specific to this deployment)
db.streams.update(
  { index_set_id: "58de701d7af4e0373ddd9b4b" },            // old index set id
  { $set: { index_set_id: "5c5326b517eb0a00fb355ade" } },  // new default index set id
  { multi: true }                                          // update all matching streams
)
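For completeness, the drop step for the collections listed above looked roughly like this in the mongo shell (database name graylog, adjust to your setup):

// drop the restored runtime/state collections so Graylog rebuilds them on startup
use graylog
["system_messages", "inputs", "index_ranges", "index_sets", "index_failures",
 "sessions", "nodes", "cluster_config", "cluster_events"].forEach(function (c) {
  db.getCollection(c).drop();
});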

Thanks Jan


Nice work!

You wouldn’t be interested in making a nice blogpost / writeup about your process, would you? :smiley:


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.