Backing up to AWS from AWS

We’re looking to implement Graylog Enterprise Gold (node in our local data center) with an AWS Elasticsearch backend, but we need to keep the log data for a long period of time (many years). So I’m hoping to keep a few weeks’ worth of local indices and then back up the older indices somewhere like an AWS S3 bucket. However, it seems rather inefficient for the data to have to come back to our local server only to be sent back to AWS, not to mention the transfer costs that could involve.

Is there any way to have Graylog shuffle the archives to S3 without them having to come back through the server? Is there a better way of doing this, or am I perhaps misunderstanding some part of it? I’d prefer not to have the Graylog server in AWS as well (for security and caching reasons).

Thanks in advance.

That’s what the Graylog Archive plugin is for: http://docs.graylog.org/en/2.3/pages/archiving.html

Please contact Graylog Enterprise support to discuss possible solutions for your requirements.

Also see Sending Graylog Archives to S3 Bucket or Glacier - #2 by jochen.
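
If you do end up aging archives into Glacier, as that linked topic discusses, note that the transition can be handled on the S3 bucket itself with a lifecycle rule rather than by Graylog. A rough sketch with boto3; the bucket name and the 30-day threshold are hypothetical placeholders:

```python
# Sketch: transition everything in the archive bucket to Glacier after
# 30 days, so multi-year retention stays cheap. Bucket name and timing
# are assumptions, not anything Graylog configures for you.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-archive-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```

One caveat: objects that have transitioned to Glacier would need to be restored back to S3 before anything (Graylog or Elasticsearch) could read them again.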

Thanks for the reply, but the Archive plugin requires that the data flow through the Graylog server, so the data would have to go from AWS to our data center and back to AWS, which is exactly what I’m trying to avoid. I’ll contact Enterprise support as you suggest; I was just hoping to find an answer here (one that could help others with the same quandary).
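
For anyone else hitting this in the meantime: outside of Graylog’s archive feature, the direction I’m investigating is Elasticsearch’s own snapshot API, which writes from the cluster straight to an S3 repository, so nothing transits our data center. A minimal sketch, assuming an AWS Elasticsearch domain (which requires SigV4-signed requests and an IAM role it can assume to write to the bucket); the endpoint, bucket, role ARN, repository name, and index names are all hypothetical placeholders:

```python
# Sketch: snapshot older indices from AWS Elasticsearch directly to S3.
# All names below are placeholders for illustration only.
import requests
from requests_aws4auth import AWS4Auth

region = "us-east-1"
host = "https://my-es-domain.us-east-1.es.amazonaws.com"  # hypothetical endpoint
awsauth = AWS4Auth("ACCESS_KEY", "SECRET_KEY", region, "es")  # SigV4 signing

# 1. Register an S3 snapshot repository (one-time). AWS Elasticsearch needs
#    an IAM role it can assume to write into the bucket.
repo = {
    "type": "s3",
    "settings": {
        "bucket": "my-log-archive-bucket",  # hypothetical bucket
        "region": region,
        "role_arn": "arn:aws:iam::123456789012:role/es-snapshot-role",  # hypothetical
    },
}
requests.put(f"{host}/_snapshot/log-archive", auth=awsauth, json=repo).raise_for_status()

# 2. Snapshot only the older indices; the data moves cluster -> S3 inside AWS.
snapshot = {"indices": "graylog_0,graylog_1", "include_global_state": False}
r = requests.put(f"{host}/_snapshot/log-archive/weekly-2017-42", auth=awsauth, json=snapshot)
r.raise_for_status()
print(r.json())
```

To be clear, this is a cluster-level backup, not the same thing as a Graylog archive: you’d restore the snapshot back into Elasticsearch to search it, rather than re-importing it through Graylog.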
