We’re looking to implement Graylog Enterprise Gold (node in our local data center) with an AWS Elasticsearch backend, but we need to keep the log data for a long period of time (many years). So I’m hoping to keep a few weeks’ worth of local indices and then back up the older indices to somewhere like an AWS S3 bucket. However, it seems rather inefficient for that data to have to come back to our local server just to be sent back to AWS, not to mention the possible costs associated with that.
Is there any way to have Graylog shuffle the archives to S3 without the data having to come back to our server? Is there a better way of doing this, or am I perhaps misunderstanding some part of it? I’d prefer not to run the Graylog server in AWS as well (for security and caching reasons).
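For context, the kind of flow I’m imagining is a direct cluster-to-S3 snapshot, with no round trip through our data center. Below is a rough sketch of what I mean (not something Graylog provides as far as I know): it uses the standard Elasticsearch snapshot API against an AWS Elasticsearch domain, with hypothetical names for the endpoint, bucket, and IAM role, and assumes the domain is allowed to use a manual S3 snapshot repository via SigV4-signed requests.

```python
# Sketch only: snapshot older Graylog indices straight from the AWS
# Elasticsearch domain into S3, so archive data never transits the
# on-prem Graylog node. Endpoint, bucket, role ARN, and index names
# below are placeholders/assumptions, not real values.
import boto3
import requests
from requests_aws4auth import AWS4Auth

ES_ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical
REGION = "us-east-1"
BUCKET = "my-graylog-archive"                                 # hypothetical
ROLE_ARN = "arn:aws:iam::123456789012:role/es-snapshot-role"  # hypothetical

# Sign requests with the AWS credentials the domain trusts.
creds = boto3.Session().get_credentials()
awsauth = AWS4Auth(creds.access_key, creds.secret_key, REGION, "es",
                   session_token=creds.token)

# 1. Register the S3 bucket as a manual snapshot repository.
repo_body = {
    "type": "s3",
    "settings": {"bucket": BUCKET, "region": REGION, "role_arn": ROLE_ARN},
}
requests.put(f"{ES_ENDPOINT}/_snapshot/graylog-archive",
             auth=awsauth, json=repo_body).raise_for_status()

# 2. Snapshot the older Graylog indices (everything except the newest
#    few weeks) directly from the cluster into S3.
snap_body = {"indices": "graylog_0,graylog_1", "include_global_state": False}
requests.put(f"{ES_ENDPOINT}/_snapshot/graylog-archive/archive-001",
             auth=awsauth, json=snap_body).raise_for_status()
```

That keeps the archive traffic entirely inside AWS, which is the part I’m trying to achieve; what I don’t know is whether Graylog’s archiving can be made to work this way instead of pulling the data back through the server.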
Thanks for the reply, but the Archive plugin requires that the data flow through the Graylog server, so the data would have to go from AWS to our data center and back to AWS, which is exactly what I’m trying to avoid. I’ll contact Enterprise support as you suggest; I was just hoping I could find an answer here (and that one could be provided for others with the same quandary).