Output buffer performance and cluster sync

I have one Graylog node (master) + one ES node (master/data) on a single server, on SSD [RAID1], and 6 servers running Graylog only (is_master = false). These 6 servers write logs to the one ES node. ES status is GREEN, but sometimes the output buffer fills up completely on the Graylog nodes.

I can process ±10k logs/s on a single node and that is my ceiling. Why isn't Graylog using RAM fully? How can I speed up the output buffer? Where is the bottleneck?
With the same configuration, I get better performance on the ELK stack (1 ES node and 5 Logstash instances).

And a second issue: I have this [WARN] on all GL nodes:
2017-10-26T12:12:45.621+03:00 WARN [NodePingThread] Did not find meta info of this node. Re-registering.
But the time is in sync on all nodes.
On all nodes:

root@grayss-int-kr-v:~# dpkg -s graylog-server | grep '^Version:'
Version: 2.3.2-1
root@grayss-int-kr-v:~# date
Thu Oct 26 17:16:00 EEST 2017

Thank you !

Hi @DecardShaw

I do not know what kind of processing you do with Graylog, so it is hard to tell where the bottleneck is in your setup. But my feeling, from the text alone, is that your Elasticsearch is the bottleneck here.

Some more information on your configuration (output buffer processors, output batch size) would be helpful. You might also want to take a look at this posting:
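For reference, these are the settings in Graylog's server.conf that are usually tuned for output throughput. The values below are just an illustrative sketch, not recommendations; they need to be matched to your CPU count and what your single ES node can actually absorb:

```
# server.conf — illustrative values, tune per node
# Threads filling the process buffer (extractors, pipelines, etc.)
processbuffer_processors = 5

# Threads draining the output buffer towards Elasticsearch.
# Raising this only helps if ES can keep up with more bulk requests.
outputbuffer_processors = 3

# Messages per bulk request to Elasticsearch; larger batches mean
# fewer, bigger bulk requests.
output_batch_size = 500

# Flush at least this often (in seconds), even if a batch is not full.
output_flush_interval = 1
```

Note that a full output buffer usually means the output side (Elasticsearch) is not accepting messages fast enough, which is why more processors or RAM on the Graylog side alone often does not help.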


