How do you prevent index failures from killing the Graylog server?

Last week I had two index failures caused by overly long stack traces.

The Graylog server had stopped responding and I had to restart it.

Debugging such index failures is a pain in the a… but that's another topic.

How do you protect your environment from such faults? I tried a custom mapping with ignore_above, but the message field is a text field (because of field analysis), and ignore_above only works on keyword fields.

You could check the size of the field (or the number of bytes) and either truncate it or write the content into another field.

Or you could check whether the message is a stack trace and, if so, write the content into a dedicated keyword field and delete it from the message field (or replace it).
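That second suggestion could look roughly like the rule below. This is only a sketch: the `"\tat "` marker (typical of Java stack trace lines) and the `stack_trace` field name are assumptions you would need to adapt to your log format, and you would still need a custom index mapping if you want `stack_trace` stored as a keyword type.

rule "move stack traces out of the message field"
when
    has_field("message") AND
    // assumption: Java-style stack traces contain lines starting with a tab and "at "
    contains(to_string($message.message), "\tat ", false)
then
    // copy the full trace into its own field, then shorten the message itself
    set_field("stack_trace", to_string($message.message));
    set_field("message", abbreviate(to_string($message.message), 250));
end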


Hi Jan,

I have now set up the following pipeline rule:

rule "abbreviate messages over 30000 characters"
when
    has_field("message") AND
    // (?s) enables DOTALL so '.' also matches newlines; without it,
    // a multi-line stack trace would never match this pattern
    regex("(?s)^.{30000,}$", to_string($message.message)).matches == true
then
    set_field("message", abbreviate(to_string($message.message), 30000));
    debug(concat("abbreviated oversized message from ", to_string($message.source)));
end

And it seems to work 🙂

Thanks.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.