As stated, ES works perfectly within a single site. Since we have multiple data centers, I assumed that GL could be installed individually in each data center. Is there any plug-in or third-party tool that can somehow submit the search request to multiple GLs and consolidate the results into a single view?
We already have a feature request for that, but no timeline for supporting it.
This can be done, yes. But as Jan points out, there is no unifying interface yet.
What you -can- do is make one big Graylog cluster that is spread geographically. It might be less than optimal, but it works. In my case we have multiple data centers, with each DC housing ES+Graylog. The ES indices are built with >0 replicas, meaning that if one DC dies you don’t immediately lose all your logging.
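A minimal sketch of the replica part, not the poster's exact setup: it bumps the replica count on existing Graylog indices through the Elasticsearch settings API, so every primary shard gets a copy on another node. The host URL and the `graylog_*` index pattern are assumptions; in day-to-day use you would set the replica count in Graylog's index set configuration instead of patching indices by hand.

```python
import requests

ES_URL = "http://localhost:9200"  # assumed Elasticsearch endpoint

# Ask for one replica per primary shard on all Graylog indices; Elasticsearch
# places the copies on other nodes, which is what keeps data available if one
# DC's nodes drop out of the cluster.
resp = requests.put(
    f"{ES_URL}/graylog_*/_settings",
    json={"index": {"number_of_replicas": 1}},
)
resp.raise_for_status()
print(resp.json())  # {"acknowledged": true} on success
```

Note that a replica on its own may still land in the same DC as its primary; cluster-level shard allocation awareness (the `cluster.routing.allocation.awareness.attributes` setting) is what tells Elasticsearch to spread the copies across sites.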
So this is only a theory and I’m not sure it would work, but I have a similar problem, so I have been thinking a lot about this topic.
I am thinking of a big system. Unfortunately the GL servers would need access to each other’s API interfaces (a lot of that traffic is just the messages/sec counter).
If you create different index sets for each data center, maybe you can set the storage location in Elasticsearch based on the index name.
Something like this:
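(The original example is missing here; the following is a sketch of the idea only, with made-up names. It assumes each Elasticsearch node is started with a node attribute marking its data center, e.g. `node.attr.dc: dc1` in that node’s elasticsearch.yml, and that each DC writes to its own index names such as `graylog_dc1_*`.)

```python
import requests

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX = "graylog_dc1_0"            # hypothetical per-DC index name

# Pin every shard of this index to the nodes tagged with dc=dc1, so the data
# is stored "locally" in that data center.
resp = requests.put(
    f"{ES_URL}/{INDEX}/_settings",
    json={"index.routing.allocation.require.dc": "dc1"},
)
resp.raise_for_status()

# A search against the wildcard pattern still fans out over every DC's indices,
# regardless of where their shards are stored.
resp = requests.post(
    f"{ES_URL}/graylog_*/_search",
    json={"query": {"match_all": {}}, "size": 1},
)
print(resp.json()["hits"]["total"])
```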
In this case I think you can achieve storing the logs “locally” at your site/DC, so there is no big traffic between DCs, but you can still search across the whole Elasticsearch database.
My other idea for my problem, where we are thinking about a centralized log system, is to plan GL servers for all sites and one big ES cluster in the central DC, probably with a VPN between the GL servers to provide the communication, and compression to decrease the bandwidth.