We’re looking to implement Graylog Enterprise Gold (node in our local data center) with an AWS Elasticsearch backend, but we need to retain the log data for a long period of time (many years). So, I’m hoping to keep a few weeks’ worth of local indices and then back up the older indices to somewhere like an AWS S3 bucket. However, it seems rather inefficient for archived data to travel from AWS Elasticsearch back to our local server only to be sent back up to AWS, not to mention the data-transfer costs that round trip could incur.
Is there any way to have Graylog shuffle the archives to S3 without the data having to pass through the local server? Is there a better way of doing this, or am I perhaps misunderstanding some part of it? I’d prefer not to have the Graylog server in AWS as well (for security and caching reasons).
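For context, this is roughly the kind of shuttle job I’d otherwise have to run on the local node — walk the archive directory Graylog writes to and push each file up to a bucket. The archive path and bucket name here are placeholders for my setup, not anything Graylog-specific:

```python
from pathlib import Path

# Placeholder values -- adjust to your own archive output directory and bucket.
ARCHIVE_DIR = Path("/var/lib/graylog-server/archives")
BUCKET = "my-graylog-archives"


def s3_key_for(archive_path: Path, root: Path = ARCHIVE_DIR) -> str:
    """Map a local archive file to an S3 object key, preserving the
    directory layout under the archive root."""
    return archive_path.relative_to(root).as_posix()


def upload_archives(root: Path = ARCHIVE_DIR, bucket: str = BUCKET) -> None:
    """Upload every file under `root` to S3.

    Requires boto3 and AWS credentials; imported lazily so the
    key-mapping logic above stays usable without them.
    """
    import boto3

    s3 = boto3.client("s3")
    for path in root.rglob("*"):
        if path.is_file():
            s3.upload_file(str(path), bucket, s3_key_for(path, root))
```

My concern is exactly that a script like this makes the local server the middleman for every byte, which is the round trip I’d like to avoid.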
Thanks in advance.