Trying to exclude some messages from AuditBeat/AuditD

Greetings!

I’m a new Graylog user, so I apologize if parts of this question seem obvious and I missed something. To start with, here is a bit of context.

We currently have a new 3-node Graylog cluster running 3.1.0, with redundant load balancers in front of it. The load balancers run HAProxy 1.8.8 and Keepalived 1.3.9.

We need to monitor all commands that are run as root (either in a root session or via sudo), so I installed Auditbeat and fed it some rules so that it forwards every command run as root to a configured Input on Graylog. This part works great. The rules are the usual ones you can find in most tutorials for this:

-a exit,always -F arch=b64 -F euid=0 -S execve -k root-commands
-a exit,always -F arch=b32 -F euid=0 -S execve -k root-commands

Now obviously this outputs a LOT of syscalls, and I was able to narrow things down using Exclude rules and such to remove the ones I did not care about. However, on the HAProxy load balancers, Keepalived keeps running what looks to be its heartbeat command (/usr/bin/killall -0 haproxy), which floods that Input with messages.

What I’m trying to do is simply prevent this from happening.
Here is what I have tried so far, without success.

1- Tried to configure a rule in Auditbeat to prevent this message from being sent. Since we need to monitor all root commands, I cannot simply stop monitoring by filtering out “/usr/bin/killall”, “/bin/sh” or “/bin/dash” (the last two are used to run the “killall” action). So I tried using the “a0” to “a3” fields in the rule, which hold the first 4 arguments passed to a syscall, to target this one specific command. The man page specifically says these fields do not support strings: “Note that string arguments are not supported. This is because the kernel is passed a pointer to the string. Triggering on a pointer address value is not likely to work. So, when using this, you should only use on numeric values”. So I checked what each field contained in the Raw message and used that numeric value to filter the command out. It did not work.
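
For reference, the attempt looked something like this (the a0 value is just a made-up example of the kind of number I copied from a raw message):

# hypothetical exclude rule - it has to sit before the "always" rules,
# since auditd applies the first matching rule in the exit filter
-a never,exit -F arch=b64 -S execve -F a0=94557999399776

In hindsight, since a0 for execve is a pointer to the command string, the address changes from one execution to the next, which is presumably exactly why this never matched, just as the man page warns.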

2- Created a new Stream, and ticked the checkbox that says “Remove matches from ‘All Messages’ stream”, in hopes that I could match the incoming messages and prevent them from making their way to the rest of Graylog. I put some rules on it that actually work:

Field: auditbeat_process_title
Type: match exactly
Value: /usr/bin/killall -0 haproxy

When messages matching this come in, I can see them getting caught by the Stream. However, when I searched for all messages coming from that host, the “killall” entries were still there.

3- Seeing as the Stream alone was not enough, I googled for ways to blacklist/drop messages altogether, and found documentation about using Streams and Pipelines together to achieve this. So I created a Pipeline and connected it to the previously created Stream. Since the only messages making their way to that Stream were the ones I wanted gone, I created a single Pipeline rule that would (or so I believed) drop all messages from that Stream:

rule "Drop All Messages"
when
  has_field("message")
then
  drop_message();
end

However, even after this, I still receive the heartbeat messages in the Input, as well as in the Stream and in regular searches. So either this Pipeline rule does not work, or it works and the problem stems from elsewhere.

At this point, I’m figuring I must not understand how things work in Graylog, being a new user and all.
Am I wrong in assuming that even after all this, these messages made their way to Elastic and got written on disk?
Am I misinterpreting what “Remove matches from ‘All Messages’ stream” actually does? I thought Streams would catch messages before they made their way to Elastic or the Input. Is that the case?

In the end, all I’m trying to accomplish here is to prevent an excessive number of messages from being written to disk and flooding the nodes, since we have quite a few servers on which we need to monitor root commands.

Thanks!

I was able to make some progress and understand at least part of where I went wrong.
The Stream I had created was still connected to the Default Index Set, so even with the “Remove matches from ‘All Messages’ stream” box ticked, the messages were still forwarded to Elastic.
I have since created a new Index Set and connected my Stream to it, and I can now see that the “killall” messages are stored in my newly created index and NOT in the default “graylog_XX” index. So the option does indeed remove the messages from the “All Messages” stream, since they no longer get forwarded to the default index.

Since this is the case, I believe that if I could get the Pipeline rule to actually drop the messages, I would achieve what I set out to do. Would anyone have a clue as to why the previously mentioned Pipeline rule does not drop my messages?

Thanks again!

Since I thought the problem was related to the Pipeline rule, I attempted to simply use the following rule to test things out:

rule "Always execute Drop Message"
when
  true
then
  drop_message();
end

Even then, the messages would still make their way to my newly created Index and not get dropped.
Am I missing some piece of configuration?

To recapitulate, I have:

-> Created a new Index Set
-> Created a new Stream which writes into the new Index Set
-> Ticked “Remove matches from ‘All messages’ stream”
-> Added rules to the Stream to match fields, which works fine.
-> Created a new Pipeline
-> Connected the new Pipeline to my new Stream
-> Created a Pipeline rule (“Always execute Drop Message”)
-> Added the “Always execute Drop Message” rule to Stage 1 of my new Pipeline.

what is your processing order in System > Configuration?

are the processing pipelines before the message filter chain? if yes, you should switch that.
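
the message filter chain is the processor that actually routes messages into streams - if the pipeline processor runs first, a pipeline connected to a specific stream never sees the messages. the working order should look roughly like this (the exact list of processors depends on your install and plugins):

1. AWS Instance Name Lookup
2. GeoIP Resolver
3. Message Filter Chain
4. Pipeline Processor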

Hey Jan,

It turns out this was actually the problem.

Thanks a bunch!

Just out of curiosity, is there a use-case where the defaults would be desired? Wouldn’t having this order in System > Configuration make it so Pipelines cannot be used out of the box?

sure - you might want parsing of everything to happen before the stream rules kick in, for example.
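
as a sketch (a hypothetical rule, not from this thread), something like this connected to the “All messages” stream would extract key=value pairs before routing, so that stream rules can then match on the extracted fields:

rule "parse key-value pairs early"
when
  has_field("message")
then
  // extract key=value pairs from the raw message and add them as
  // fields, so that stream rules can match on them afterwards
  set_fields(key_value(value: to_string($message.message)));
end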
