We are trying out Graylog to store syslogs from our firewalls.
We are testing with the OVA appliance and intend to move to a more production-ready environment once we figure the product out.
I'm trying to filter all messages related to DNS, like this one:
I collect all the firewall traffic using the Packetbeat collector.
Then I create a stream based on the port (i.e. I separate the traffic by protocol and port).
Then I connect that stream to the pipeline.
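Just to illustrate what I mean by splitting on ports, here is a rough pipeline-rule sketch of the idea (the field name dst_port and the stream name "DNS traffic" are placeholders, not necessarily what we actually use):

rule "route DNS to its own stream"
when
  has_field("dst_port") && to_string($message.dst_port) == "53"
then
  // route_to_stream is a built-in pipeline function; the target stream must already exist
  route_to_stream(name: "DNS traffic");
end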
Thanks for your reply; that sounds very interesting, but it is beyond what we are trying to achieve.
I have a bunch of Cisco FTD devices (about 70-ish); they route all traffic within our environment, several hundred Gbps in total. This is overwhelming our FMC appliance, and our retention period has dropped.
We are checking out Graylog to see if we can store these messages indefinitely, or for a few months at least.
Each connection generates a very detailed syslog message that is sent to Graylog, where we extract the various fields. This gives us a replica of FMC without the 300M log limitation.
It is working perfectly so far, but we realized that 45% of all messages are related to DNS queries, so we want to exclude them from being written to disk.
Forgive my ignorance in this matter; I'm a network engineer and I'm new to Graylog.
I tried to modify the rule as you suggested, but I can't seem to get any hits.
I changed it as follows:
rule "drop DNS"
when
  to_long($message.dst_port) == 53
then
  drop_message();
end
dst_port is the name of the field that we are extracting from the messages:
{
  "title": "dst_port",
  "extractor_type": "regex",
  "converters": [],
  "order": 8,
  "cursor_strategy": "cut",
  "source_field": "message",
  "target_field": "dst_port",
  "extractor_config": {
    "regex_value": "((?<=DstPort:\s).+?(?=,))"
  },
  "condition_type": "none",
  "condition_value": ""
}
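As an example of what that regex does: on a hypothetical fragment such as "... SrcPort: 51544, DstPort: 53, Protocol: udp ..." the lookbehind/lookahead pair captures just "53" into dst_port. Note the value is stored as a string, because there is no converter on the extractor.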
The extractor is working; I can see the field when I look at a message.
The pipeline is receiving messages, but they are not hitting the rule.
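One thing I am not sure about is whether dst_port even exists by the time the pipeline runs (I have read that extractors are applied by the Message Filter Chain, so the Message Processors order matters). For what it's worth, a guarded variant of the rule, assuming the field name is right, at least skips messages where the field is absent:

rule "drop DNS"
when
  // only evaluate messages where the extractor actually produced dst_port
  has_field("dst_port") && to_long($message.dst_port) == 53
then
  drop_message();
end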
Thanks heaps, changing the order did the trick nicely.
I've separated the unwanted messages into different streams and then applied pipeline rules, as you suggested, to drop them before they are written to disk.
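For anyone finding this later: the unwanted messages land in their own stream, the pipeline is connected to that stream, and the drop rule ends up roughly like the sketch below (since that stream only carries messages we do not want, the condition can simply be true; the exact title does not matter):

rule "drop everything in this stream"
when
  true
then
  drop_message();
end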