Process buffer fills with imaginary messages

Hello all, I have found a fun problem with a process buffer that fills up.
From watching it over some time, it looks like the difference between the input and the output of the process buffer is added to the buffer usage, but over time the numbers do not add up.
[screenshot of the node's in/out throughput and process buffer usage]

The screenshots are from the system while its message processing was paused.

input - output != usage

The system is running from the official Docker images, version 2.4.6; Elasticsearch is an AWS ES domain running 5.5.

I’m not using any pipelines, so I know I did not create an event loop.
If anyone has an idea, I’d love to know what it might be.

Thank you.

org.graylog2.buffers.process.usage is the current number of messages in the process buffer, not how many messages have been processed in a given time. If the process buffer fills up but the number of outgoing messages is low, you have a problem with the indexing performance of your Elasticsearch cluster.
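
For reference, here is a minimal sketch of reading that gauge from a node’s REST API, assuming a Graylog 2.x node with the API at http://<node>:9000/api and admin credentials (the URL, credentials, and exact response fields here are assumptions; check them against your own API browser):

import requests

GRAYLOG_URL = "http://127.0.0.1:9000/api"   # hypothetical node address
AUTH = ("admin", "password")                # hypothetical credentials

# List all metric names and pick out the process-buffer ones.
names = requests.get(f"{GRAYLOG_URL}/system/metrics/names", auth=AUTH).json()
buffer_metrics = [n for n in names.get("names", []) if "buffers.process" in n]
print(buffer_metrics)

# The usage gauge: how many messages sit in the process buffer right now,
# not how many were processed in a given time window.
usage = requests.get(
    f"{GRAYLOG_URL}/system/metrics/org.graylog2.buffers.process.usage",
    auth=AUTH).json()
print("current process buffer usage:", usage.get("value"))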

I get that, and that is part of the problem I see. If I take the input minus the output, I should get a number close to the usage. But looking at the two screen grabs, the usage grows by 1 to 4 messages per second while the output only trails the input by about 2 events. One other funny behavior: this is a Graylog cluster with three nodes, and when I shut down the node having the issue, the problem seems to migrate to one of the other nodes that was working just fine until the shutdown of the problem node. The inputs are CloudTrail and CloudWatch.
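
One way to check that arithmetic is to sample the node’s metrics for a while and compare how much the usage gauge grows against the accumulated input-minus-output rate. A rough sketch, reusing the same hypothetical URL and credentials as above; the throughput gauge names below are assumptions for this version, so confirm them against /system/metrics/names first:

import time
import requests

GRAYLOG_URL = "http://127.0.0.1:9000/api"   # hypothetical node address
AUTH = ("admin", "password")                # hypothetical credentials

def value(name):
    """Read a single gauge value from the node-local metrics endpoint."""
    r = requests.get(f"{GRAYLOG_URL}/system/metrics/{name}", auth=AUTH)
    r.raise_for_status()
    return r.json().get("value", 0)

IN_METRIC = "org.graylog2.throughput.input.1-sec-rate"     # assumed name
OUT_METRIC = "org.graylog2.throughput.output.1-sec-rate"   # assumed name
USAGE_METRIC = "org.graylog2.buffers.process.usage"

start_usage = value(USAGE_METRIC)
backlog = 0.0   # cumulative (input - output) over the sampling window
for _ in range(60):             # one sample per second for a minute
    backlog += value(IN_METRIC) - value(OUT_METRIC)
    time.sleep(1)

print("usage grew by:", value(USAGE_METRIC) - start_usage)
print("cumulative input - output:", backlog)
# If these two numbers diverge consistently, messages are appearing inside
# the node (e.g. being duplicated or generated during processing) rather
# than arriving on the inputs.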

The problem looks to be connected to the first instance that is started.
It only affects one of the three servers in the cluster.

The config:

ALLOW_HIGHLIGHTING	true
ALLOW_LEADING_WILDCARD_SEARCHES	true
DNS_RESOLVER_ENABLED	true
ELASTICSEARCH_HOSTS	https://vpc-****.es.amazonaws.com
ELASTICSEARCH_IDLE_TIMEOUT	180s
MESSAGE_JOURNAL_ENABLED	false
MONGODB_URI	mongodb://graylog:*******@*.*.*.*:27017,*.*.*.*:27017,*.*.*.*:27017/graylog
OUTPUTBUFFER_PROCESSORS	15
PASSWORD_SECRET	******
PROCESSBUFFER_PROCESSORS	10
PROCESSOR_WAIT_STRATEGY	blocking
PROXIED_REQUESTS_THREAD_POOL_SIZE	64
ROOT_EMAIL	******@*****
ROOT_PASSWORD_SHA2	********
SERVER_JAVA_OPTS	-Xms4092m -Xmx4092m -XX:NewRatio=1 -XX:MaxMetaspaceSize=256m -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow
STALE_MASTER_TIMEOUT	10000
TRANSPORT_EMAIL_AUTH_PASSWORD	*********
TRANSPORT_EMAIL_AUTH_USERNAME	*******
TRANSPORT_EMAIL_ENABLED	true
TRANSPORT_EMAIL_FROM_EMAIL	**********
TRANSPORT_EMAIL_HOSTNAME	smtp.mailgun.org
TRANSPORT_EMAIL_PORT	587
TRANSPORT_EMAIL_USE_AUTH	true
TRANSPORT_EMAIL_USE_SSL	false
TRANSPORT_EMAIL_USE_TLS	true
TRANSPORT_EMAIL_WEB_INTERFACE_URL	https://********
TRUSTED_PROXIES	0.0.0.0/0
proxied_requests_thread_pool_size = 32
metrics_cloudwatch_enabled=true
metrics_cloudwatch_region=us-east-1
metrics_cloudwatch_report_interval=15s
metrics_cloudwatch_namespace=Graylog
metrics_cloudwatch_timestamp_local=false
metrics_cloudwatch_dimensions=[Server=*.*.*.*]
metrics_cloudwatch_unit_rates=seconds
metrics_cloudwatch_unit_durations=milliseconds
metrics_cloudwatch_include_metrics=.*

It looks like it was linked to an extractor on one of the inputs; after removing all of them, the problem is gone.
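
For anyone hitting something similar, here is a minimal sketch for enumerating the extractors attached to each input over the REST API, again assuming the same hypothetical API URL and credentials; the exact response field names should be checked against your own API browser. Each extractor also exposes its own timing metrics on the input’s extractors page, which helps narrow down an expensive one.

import requests

GRAYLOG_URL = "http://127.0.0.1:9000/api"   # hypothetical node address
AUTH = ("admin", "password")                # hypothetical credentials

# Walk all inputs and print the extractors configured on each one.
inputs = requests.get(f"{GRAYLOG_URL}/system/inputs", auth=AUTH).json()
for inp in inputs.get("inputs", []):
    extractors = requests.get(
        f"{GRAYLOG_URL}/system/inputs/{inp.get('id')}/extractors",
        auth=AUTH).json()
    for ex in extractors.get("extractors", []):
        print(inp.get("title"), "->", ex.get("title"))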

Thank you for your time.
