While I work on getting application owners to stop sending in large log lines, and on reconfiguring my intermediate node to do some preprocessing that limits the size of a log packet: is there a way to drop the current process buffer with an API call?
We are getting log lines with 10,000+ strings GELF'd in, and it causes the process buffer to choke. Restarting graylog-server.service clears it, but now I'm restarting multiple times a day. I'd really like a more elegant way of dropping the process buffer without a restart. Any thoughts?
I would guess processing will still work, but OpenSearch/Elastic might struggle with messages of that size?
Did you try implementing a pipeline rule to drop such messages?
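Something along these lines could be a starting point. Just a sketch: the rule name, the 32000-character cutoff, and the regex-based length check are assumptions to tune to your workload.

```
rule "drop oversized log lines"
when
  has_field("message") &&
  // matches only when the message body runs to 32000+ characters;
  // [\s\S] also covers newlines, which a plain . would not
  regex("^[\\s\\S]{32000,}", to_string($message.message)).matches == true
then
  drop_message();
end
```

If you would rather keep a truncated copy than lose the message entirely, `set_field("message", abbreviate(to_string($message.message), 32000));` in the then-block is an alternative.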
Whatever it is, just playing around with the threshold number worked. I'm able to filter out the problematic lines while retaining the integrity of the rest of the content.
traffic_accounting_size is the size a message has when it is sent to the log database (OpenSearch/Elasticsearch). It is accounted in bytes.
A few fields are exempt, such as the stream or the input ID. Fields like message, and any others you set yourself, are counted.
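If you want to hunt down the offenders, you could query on the accounted size from the search bar. A sketch, assuming your Graylog version exposes the per-message value as the gl2_accounted_message_size field, with an arbitrary 32000-byte cutoff:

```
gl2_accounted_message_size:>32000
```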