I am new to Graylog and love it. I set up a test environment on VMware via the OVA. Now I am planning a Graylog cluster with Elasticsearch and was looking for a sizing guide. There was one at the sizing estimator, but the link is dead.
We expect a minimum of 300 million log entries (around 30 GB) per day, to be kept for 7 days (~210 GB per week). I was thinking of an ES cluster of 3 nodes and a Graylog/MongoDB cluster of 3 nodes, so 6 Ubuntu servers in total, fronted by an existing load balancer.
Is there a guide on how to size ES and GL nodes in terms of memory, vCPUs, and storage?
How are logs spread over the nodes? Does each node need 210 GB?
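For what it's worth, the raw numbers from the question can be sketched as a back-of-envelope calculation. Note the replica count here is an assumption for illustration (Elasticsearch defaults to 1 replica per primary shard, but you may configure it differently), and indexing overhead is ignored:

```python
# Rough storage sizing for the cluster described above.
daily_gb = 30          # ingest per day, from the post
retention_days = 7     # keep one week of logs
replicas = 1           # assumption: one replica copy per primary shard
es_nodes = 3           # planned Elasticsearch data nodes

primary_gb = daily_gb * retention_days    # primary (unreplicated) data
total_gb = primary_gb * (1 + replicas)    # total including replica copies
per_node_gb = total_gb / es_nodes         # evenly spread across data nodes

print(primary_gb, total_gb, per_node_gb)  # 210 GB, 420 GB, 140 GB per node
```

So with one replica, no single node needs the full 210 GB; the data (primaries plus replicas) is spread across the cluster, but the cluster as a whole needs roughly double the raw volume.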
As you can read elsewhere in this community, you can't apply a simple sizing matrix to a setup. There are too many variables: do you ingest at the same pace throughout the day, do you do normalization on your logs, how 'safe' does your log storage need to be? Do you need redundancy? If something goes wrong in the setup, how important is it that you can restore the data?
How many people will actively use Graylog, and how many dashboards will be running on a TV board?