Unfair Graylog load balance

Hello guys!

We are running a Graylog cluster with 21 servers in Docker. For some reason, some servers receive fewer connections than the others, which causes high load on some servers and almost none on others.

Any ideas about what this could be?

Best Regards

No, but that’s not surprising given how few details you’ve provided (well, none actually, except that it’s 21 nodes running in Docker containers).

Please elaborate on your environment, provide the configuration of all nodes, and explain what you mean by “receive fewer connections”.

While one server is processing 1000 messages, another is processing 40, and another 0.

It seems the incoming messages are not being distributed round-robin.

Hello,
What kind of input do you have (GELF TCP, Syslog, Filebeat, …)?
How did you configure the load balancing of this input (haproxy)?
How are the clients configured to send logs to Graylog?

I have three load balancers (Citrix), each one configured for one kind of input: GELF TCP, GELF UDP, and Syslog UDP.

All three load balancers point to my 21 servers.

The clients are configured to send syslog (for nginx and Linux logs), and the applications send their logs with log4j via GELF.

Sounds more like a question for Citrix in your situation, then.

Do your log sources reconnect at regular intervals? You could make them reconnect once in a while, so the load balancers can redistribute the load more evenly. If they never reconnect, each connection just stays with whichever server it was handed to first.
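To make that concrete, here is a minimal sketch (not the actual client in this setup) of a GELF TCP sender that deliberately closes and reopens its connection every N messages, so a connection-based load balancer gets a fresh chance to pick a different backend. The address, port, source hostname, and interval are assumptions for illustration; a log4j GELF appender or a syslog daemon would need the equivalent behaviour configured on its own side.

    # Sketch only: reconnect every RECONNECT_EVERY messages so the load balancer
    # can redistribute the stream. Address, hostname, and interval are hypothetical.
    import json
    import socket

    GRAYLOG_LB = ("graylog-lb.example.com", 12201)  # load balancer VIP (assumed)
    RECONNECT_EVERY = 1000                          # messages per connection (assumed)

    def send_messages(messages):
        sock = socket.create_connection(GRAYLOG_LB)
        try:
            for count, msg in enumerate(messages, start=1):
                gelf = {"version": "1.1", "host": "app-01", "short_message": msg}
                # GELF over TCP is a stream of null-byte-terminated JSON documents.
                sock.sendall(json.dumps(gelf).encode("utf-8") + b"\x00")
                if count % RECONNECT_EVERY == 0:
                    # Drop the connection; the next one may land on another node.
                    sock.close()
                    sock = socket.create_connection(GRAYLOG_LB)
        finally:
            sock.close()

    if __name__ == "__main__":
        send_messages(f"test message {i}" for i in range(5000))

The key point the sketch illustrates: with long-lived TCP connections, round-robin distribution only happens at connect time, so a client that never reconnects stays pinned to one node.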
