Virtualization environment: Hyper-V
- 4-node Graylog cluster
- 1 master, 3 replicas
- 16 CPUs, 32 GB RAM per node
- Graylog version 3.3.2

The process buffer and the output buffer are full, and the load is not evenly distributed across the nodes. We receive about 100K logs per second. I'm asking for help.
@cemk You need to check your Graylog server.log and the Elasticsearch log file. Elasticsearch may be having trouble accepting messages, and the log files will tell you the exact cause.
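For example, something like this is a starting point (paths are the package defaults for DEB/RPM installs and the ports are assumptions; adjust to your installation):

```shell
# Default log locations for package installs -- change these if yours differ.
GL_LOG=/var/log/graylog-server/server.log
ES_LOG=/var/log/elasticsearch/elasticsearch.log

# Recent warnings/errors on the Graylog side, e.g. failed bulk indexing requests.
grep -E 'ERROR|WARN' "$GL_LOG" | tail -n 20

# Elasticsearch cluster health: status, unassigned shards, pending tasks.
curl -s 'http://localhost:9200/_cluster/health?pretty' || echo "Elasticsearch not reachable"
```

A red or yellow cluster status, or repeated bulk-indexing errors in server.log, would point at the Elasticsearch side rather than Graylog itself.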
Do you have outgoing messages? Maybe you just need a bigger Elasticsearch cluster.
But as @makarands mentioned, check the logs first.
Then check the I/O and other performance metrics of your Graylog and Elasticsearch servers.
After that, please read the comments in the Graylog config file about the processor counts and ring sizes. They look misconfigured (though that is probably not connected to this problem).
You can also check the output_batch_size parameter.
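For reference, a sketch of what those settings might look like on a 16-core node (the values are illustrative, not a tuned recommendation; the stock server.conf documents each option):

```
# The sum of the three *_processors values should not exceed the core count (16 here).
inputbuffer_processors = 2
processbuffer_processors = 10
outputbuffer_processors = 4

# Ring sizes must be a power of 2; larger rings consume more memory and CPU cache.
ring_size = 65536
inputbuffer_ring_size = 65536
```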
Hi, there is no problem on the Elasticsearch side; I checked.
The problem we are having is that the process buffer and the output buffer are full.
Is increasing the number of Graylog nodes the solution?
Is there anything wrong with the settings I sent?
is_master = false
node_id_file = /etc/graylog/server/node-id
password_secret = xxxxx
root_password_sha2 = xxxxx
bin_dir = /usr/share/graylog-server/bin
data_dir = /graylog-data
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 0.0.0.0
http_thread_pool_size = 24
elasticsearch_hosts = xxxxx
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 8
retention_strategy = delete
elasticsearch_shards = 6
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 4000
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 12
outputbuffer_processors = 12
processor_wait_strategy = blocking
ring_size = 262144
inputbuffer_ring_size = 262144
inputbuffer_processors = 16
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /graylog-data/graylog/data/journal
message_journal_max_age = 12h
message_journal_max_size = 40gb
lb_recognition_period_seconds = 3
mongodb_uri = xxxxx
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
proxied_requests_thread_pool_size = 32
Yes, but the buffers fill up because Elasticsearch can't absorb messages as fast as Graylog tries to send them.
As I mentioned, check the comments in the config file. You should use fewer processors than you have configured: the sum of the input, process, and output buffer processors should be less than 16, your core count. But that is not related to your problem…
Also, the ring size should fit in your processors' cache.
That's from memory, though; check the config comments and the docs.
I also see you changed the batch size from the default. Check the docs for what it does; maybe that makes everything clear.
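For comparison, the shipped server.conf uses a much smaller batch (the values below are the stock defaults as I recall them, so verify against your own file's comments):

```
# A batch is flushed to Elasticsearch when it reaches output_batch_size messages
# or when output_flush_interval seconds have passed, whichever comes first.
output_batch_size = 500        # the posted config uses 4000, i.e. much larger bulk requests
output_flush_interval = 1
```

Larger batches mean bigger bulk requests per flush, which an already-struggling Elasticsearch cluster may absorb more slowly, keeping the output buffer full.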
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.