Regarding tuning Graylog Cluster

We have already finished the configuration part, but we need some guidance to get it functioning properly.

  • We are expecting about 50K messages per second, so please suggest what we need to tune in server.conf
    and in other config files if required.

Thanks in advance

Hey @shivamtiwari18

I must say that's a lot, and out of my knowledge base for sure :laughing:
A while back this was posted here and here.

There are other members that have very large environments; I'm sure they will jump in for you. Not sure how you set up your cluster, but normally OpenSearch/Elasticsearch nodes would be separate from Graylog/MongoDB nodes. I do about 30–40 GB a day. I started off watching Elasticsearch/Graylog using metrics tools (i.e., Cerebro, Grafana & Zabbix) in case something bad happens, and adjust settings as needed. That's all I know :smiley:


Can you provide your Graylog config file?

  1. Set up a load balancer. Without one you will be lost.
  2. Separate your Graylog nodes from your Elasticsearch/OpenSearch nodes.
  3. Know how to configure heap for Graylog and Elasticsearch/OpenSearch. Graylog should have about 80% of available RAM, Elasticsearch/OpenSearch about 50% (the rest is left for OS-level caching).
  4. For 10–15k msg/sec I use 3 Graylog nodes with 16 cores and 32 GB RAM each. As it should scale roughly linearly, you can do the math.
  5. Depending on how long you want to keep your data, you will need a lot of storage. Make sure you know how to handle shards and calculate the necessary heap.
  6. Adjust the number of processors in the Graylog config for processing. This is where you will need a big increase. Slight increases for the input and output buffers might also be necessary.
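To make points 3 and 6 concrete, a sketch of what those settings look like. All numbers here are illustrative assumptions for a 32 GB node, not tested recommendations; watch your own metrics and adjust:

```ini
# /etc/graylog/server/server.conf -- illustrative values only
processbuffer_processors = 12   # default is 5; processing is where the big increase goes
inputbuffer_processors = 4      # default is 2; slight increase
outputbuffer_processors = 6     # default is 3; slight increase

# /etc/default/graylog-server -- Graylog heap (~80% of RAM per point 3);
# only the heap flags are shown here, keep any other flags your install already sets:
# GRAYLOG_SERVER_JAVA_OPTS="-Xms25g -Xmx25g"

# Elasticsearch/OpenSearch jvm.options -- heap at ~50% of RAM, with Xms equal to Xmx:
# -Xms16g
# -Xmx16g
```

A common rule of thumb is that the buffer processor counts combined should not exceed the node's core count, so on a 16-core box there is not much headroom beyond values like these.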

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.