Facing issue with Multinode Graylog Cluster


We have set up a multi-node Graylog 3.0.2 cluster for our production log processing. The daily log volume is 250 GB to 300 GB. We are using the Filebeat log shipper and Graylog Sidecar to upload logs.

Below is the setup architecture.

Graylog + MongoDB cluster

Three nodes, each with 128 GB RAM, 32 CPUs, a 700 GB disk, and a 64 GB heap.

Elasticsearch cluster

Three nodes, each with 128 GB RAM, 32 CPUs, a 700 GB disk, and a 64 GB heap. Two nodes are master + data and one is master-only. The data nodes have an 18 TB SAN mount.

Log Uploading

We upload the previous day's logs, i.e. yesterday's logs.
We split the logs across the three Graylog nodes: we copy the log files to one location, from which Graylog Sidecar uploads them using the Filebeat collector.
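For context, the copy-then-upload setup above can be expressed as a minimal Filebeat input section managed by the Sidecar. The path shown is a hypothetical example of the "one location" the logs are copied to, not the actual path from this cluster:

```yaml
# filebeat.yml fragment (rendered by Graylog Sidecar) -- illustrative only
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # Hypothetical directory where yesterday's logs are copied
      - /data/graylog-upload/*.log
```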

Issues We Are Facing

  1. When log uploading starts from all nodes, the input rate is 50k to 100k msgs/s while the output rate is 20k to 100k msgs/s. The batch size is set to 50k messages.
  2. After some time the journal fills up and output processing stops.
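A full journal usually means Elasticsearch cannot index as fast as Graylog ingests, so messages queue up on disk. The relevant knobs live in Graylog's server.conf; the values below are a sketch under assumed conditions, not tuned for this specific cluster:

```
# /etc/graylog/server/server.conf -- illustrative values only
message_journal_enabled = true
# Size the journal so it can absorb ingest bursts while ES catches up
message_journal_max_size = 20gb
# Processing threads; tune relative to the 32 available CPUs
processbuffer_processors = 10
outputbuffer_processors = 6
# Messages per bulk request to Elasticsearch; very large batches
# (e.g. 50k) can overload ES, so smaller values are often safer
output_batch_size = 5000
```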

Is anything missing or misconfigured? Please suggest.


You should read all the information given at https://www.elastic.co/guide/en/elasticsearch/reference/6.8/heap-size.html

The heaps for Elasticsearch and Graylog should be adjusted: ES to 31 GB and Graylog to ~8 GB (it might be different only if you hold large lookup tables in RAM).
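Assuming standard package installs, those heap sizes would be set roughly as follows (file locations may differ on your system):

```
# Elasticsearch: config/jvm.options
# Keep Xms == Xmx, and stay below ~32 GB so the JVM can still use
# compressed object pointers (see the linked heap-size guide)
-Xms31g
-Xmx31g
```

```
# Graylog: /etc/default/graylog-server
GRAYLOG_SERVER_JAVA_OPTS="-Xms8g -Xmx8g"
```

Restart each service after changing its heap settings for them to take effect.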

That should make your environment more stable.

Thanks for your response. I will try decreasing the heap memory.
My main concern is about log uploading. Right now I copy 100 GB of logs to one Graylog node, and that location is given in the Filebeat log collector. Is this a proper way to upload, or is there another way?

My recommendation is to list all available Graylog nodes in the Filebeat configuration and make use of Filebeat's load balancing ( https://www.elastic.co/guide/en/beats/filebeat/current/load-balancing.html ), or to install a proxy/load balancer that does the same.
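In Filebeat that load balancing is configured on the output: list a Beats input endpoint on every Graylog node and enable `loadbalance`. The hostnames and port below are hypothetical placeholders:

```yaml
# filebeat.yml fragment -- hostnames and port are assumptions
output.logstash:
  # One Beats input (default port 5044) per Graylog node
  hosts: ["graylog1:5044", "graylog2:5044", "graylog3:5044"]
  # Distribute events across all listed hosts instead of picking one
  loadbalance: true
  # Optional: parallel connections per host for higher throughput
  worker: 2
```

This also removes the need to manually split the copied log files across nodes, since Filebeat spreads the load itself.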

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.