Hello
we run a Graylog cluster in AWS with 3 EC2 Graylog nodes, 3 EC2 MongoDB nodes, and the managed AWS OpenSearch Service [OSS].
OSS has 3 data nodes for HOT indices and 3 UltraWarm nodes for WARM and COLD indices.
With this setup we want to keep indices for about 180 days, but hardware limitations of OSS (and Graylog) only allow us to keep indices open for about 30 days.
If somebody wants to analyze data from 5 months ago, it is possible to start Graylog from a temporary clone of the OSS cluster. But how should the metadata (MongoDB) be handled in this scenario?
Even when only analyzing the OSS clone with Kibana, fields will be missing, such as source and the fields generated by Graylog pipelines and extractors.
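One idea I am considering (just a sketch, not tested): archive a MongoDB dump together with each index snapshot, so a temporary Graylog clone restored months later sees matching index sets, streams, pipelines, and extractors. The hostnames, paths, and the repository name `graylog-archive` below are placeholders, and the snapshot repository would have to be registered with OSS beforehand.

```shell
#!/usr/bin/env bash
# Sketch: dump Graylog metadata (MongoDB) at the same time the index
# snapshot is taken, so both can be restored together later.
# All hostnames, paths, and "graylog-archive" are placeholder names.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"   # default: only print the commands, do not run them
SNAPSHOT_DATE="$(date +%Y-%m-%d)"
ARCHIVE_DIR="/backup/graylog-${SNAPSHOT_DATE}"

# Print the command in dry-run mode, execute it otherwise.
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

# 1) Dump the Graylog metadata database (index sets, streams,
#    pipelines, extractors, users, ...).
run mongodump --host mongodb-node1 --db graylog --out "${ARCHIVE_DIR}/mongo"

# 2) Snapshot the indices into a pre-registered snapshot repository.
run curl -s -XPUT "https://opensearch-endpoint/_snapshot/graylog-archive/snap-${SNAPSHOT_DATE}"
```

Restoring would then be the reverse: `mongorestore` into the temporary MongoDB, and a snapshot restore into the temporary OSS cluster, before starting the cloned Graylog.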
Does anyone have experience with handling such old data?
Thanks for your help,
Bernd