IN/OUT messages reported for node and not for cluster

After upgrading to 3.3, the GL interface in the top right corner reports the number of incoming/outgoing messages only for the current node of the cluster, not for the whole cluster.
Does anyone else see this too, or is something wrong with my cluster?


Can you share a little about your setup? How many nodes? How many MongoDB servers? Replica set?

Sure I can. It is a small three-node cluster running Debian 10. These three nodes run MongoDB 4.2 in 4.0 compatibility mode, and all are in the same replica set. Nothing else is present on the servers. The three-node Elastic cluster is separate from GL. The nodes are running GL 3.3 on the default port 9000. Nginx in a proxy configuration redirects from 80 to 443 and then proxies to 9000.
Access to the nodes is without an NLB; I'm just using DNS round robin, and Beats with the Logstash output and load balancing. Some of the three nodes are targets of a syslog UDP stream which is not balanced across the nodes, so some of the nodes process a small number of messages and some a larger number.
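For reference, the Beats load balancing mentioned above looks roughly like this in filebeat.yml (a sketch only; the hostnames and the Beats port 5044 are placeholders, not copied from the real config):

```yaml
# filebeat.yml (excerpt) - Logstash output balanced across the three
# Graylog nodes; hostnames and port 5044 are placeholders for this setup.
output.logstash:
  hosts: ["gl1.domain.local:5044", "gl2.domain.local:5044", "gl3.domain.local:5044"]
  loadbalance: true
```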

is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = ***
root_username = ***
root_password_sha2 = ***
root_email = ***
root_timezone = ***
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 10.20.XXX.YYY:9000
trusted_proxies = 0:0:0:0:0:0:0:1/128, 10.20.XXX.YYY/32
elasticsearch_hosts = http://10.20.XXX.YYY:9200,http://10.20.XXX.YYY:9200,http://10.20.XXX.YYY:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 3
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 3
outputbuffer_processors = 2
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /graylog/journal/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://graylog:***@gl1.domain.local:27017,gl2.domain.local:27017,gl3.domain.local:27017/graylog?replicaSet=rs0
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
proxied_requests_thread_pool_size = 32
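To compare per-node throughput outside the web interface, each node's REST API can be queried directly (a sketch: the throughput metric name is a standard Graylog metric, but the hostnames and credentials below are placeholders):

```shell
# Ask each Graylog node directly (bypassing the proxy / DNS round robin)
# for its own 1-second input rate; hosts and credentials are placeholders.
for node in gl1.domain.local gl2.domain.local gl3.domain.local; do
  rate=$(curl -s -u admin:password \
    "http://${node}:9000/api/system/metrics/org.graylog2.throughput.input.1-sec-rate" \
    | jq -r '.value')
  echo "${node}: ${rate} msg/s in"
done
```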

server {
listen 10.20.XXX.YYY:443 ssl http2;
server_name gl1.domain.local 10.20.XXX.YYY graylog.domain.local;
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Graylog-Server-URL https://$server_name/;
proxy_pass http://10.20.XXX.YYY:9000;
}
}

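Since DNS round robin decides which backend answers, it can help to confirm which node a given browser session is actually talking to. A sketch, assuming the round-robin name and placeholder credentials (`/api/system` reports the answering node's ID):

```shell
# Repeated requests through the round-robin name may land on different
# nodes; the node_id field of /api/system identifies which one answered.
curl -s -u admin:password "https://graylog.domain.local/api/system" \
  | jq -r '.node_id'
```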
If you go to the System > Nodes page, do you see the correct details for each node, or do you see the same values for all nodes?

Hey @fangycz,

sorry that I wasn't clear.

I meant: does each node present unique information in the overview, or is it the same for all of them? From what I can see, all nodes report unique information. Is that true?

But in addition:

This issue will be fixed in 3.3.1.

Hi @jan
When I open a separate browser for each node, they seem to report the same throughput, which belongs to the GL master. On the other hand, as you can see in the nodes view, they report the correct throughput for each node. In my setup the second node is the target for syslog from a Cisco ASA firewall.

About the bug: yes, the behavior is the same. But as I said before, it seems that in my setup all the nodes report the throughput of the master.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.