Large log lines clogging the process buffer

While I work on getting application owners to stop sending large log lines, and on reconfiguring my intermediate node to do some preprocessing that limits the size of a log packet, is there a way to drop the current process buffer with an API call?

We are getting log lines with 10,000+ strings GELF'd in, and they cause the process buffer to choke. Restarting graylog-server.service clears it, but now I'm restarting multiple times a day. I'd really like a more elegant way of dropping the process buffer without a restart. Any thoughts?

I would guess processing will still work, but OpenSearch/Elasticsearch will be a bit scared by those sizes?
Did you try implementing a pipeline rule to drop such messages?

`traffic_accounting_size` is the function of your choice:

rule "drop big messages"
when
 traffic_accounting_size() > 9000
then
  drop_message();
end
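
For the rule to take effect, it has to be added to a stage of a pipeline that is connected to the stream those messages arrive on (System → Pipelines).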

Oh, what a great thought! Lemme try that.

Is this size in bytes? Strings? Lines?

## traffic_accounting_size

traffic_accounting_size [(category: message handling)]

Calculates the current size of the message as used by the traffic accounting system.

https://go2docs.graylog.org/5-0/making_sense_of_your_log_data/functions_descriptions.html
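
If you want to see what number the function actually produces, one quick way is to copy it onto the message and inspect it in the search results. A minimal sketch; the rule title and the accounted_size field name are just illustrative:

rule "record accounted size"
when
  true
then
  // copy the traffic-accounted size into a field so it can be
  // compared against the raw message length in the search UI
  set_field("accounted_size", traffic_accounting_size());
end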

Whatever it is, just playing around with the number worked. I'm able to filter out the problematic lines while retaining the integrity of the rest of the content.

@ihe, you are the best.

traffic_accounting_size is the size a message has when it's sent to the log database (OpenSearch/Elasticsearch). It is accounted in bytes.
A few fields are excluded, such as the stream or the input ID. Fields like message and others set by you are counted.
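
If dropping the whole event ever turns out to be too aggressive, a gentler variant is to keep the message but shorten the oversized field. A hedged sketch, assuming most of the size sits in the message field; the 9000-byte threshold and the was_truncated marker field are illustrative:

rule "truncate big messages"
when
  traffic_accounting_size() > 9000
then
  // cut the message body down to about 1000 characters and mark the event
  set_field("message", abbreviate(to_string($message.message), 1000));
  set_field("was_truncated", true);
end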

