Filter messages depending on content


We are trying out Graylog to store syslog messages from our firewalls.
We are testing with the OVA appliance and intend to move to a more production-ready environment once we have figured the product out.

I'm trying to filter out all messages related to DNS, like this one:

Sep 09 2020 08:08:20 P1EFTD001 %FTD-1-430002: EventPriority: Low, DeviceUUID: bf2ce37c-d935-11e9-94d5-c016b62dc101, InstanceID: 1, FirstPacketSecond: 2020-09-09T08:08:20Z, ConnectionID: 33856, AccessControlRuleAction: Allow, SrcIP:, DstIP:, SrcPort: 54262, DstPort: 53, Protocol: udp,

and figured that I need to use pipeline rules.

I created a new pipeline, connected it to "All messages", and can see messages flowing.

The rule itself is where I'm stuck; I tried different formats but can't get it to match DstPort: 53:

rule "drop DNS"
contains(to_string($message.DstPort), "53").matches == true

How can I drop all messages that contain DstPort: 53?

Thanks in advance,

Hi Paolor,

I collect all the firewall traffic with the Packetbeat collector.
Then I create streams based on port (i.e. I separate the traffic by protocol and port).
Then I add the stream to the pipeline.
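
For reference, that port-based routing could look roughly like the following pipeline rule. This is only a sketch: the field name dst_port and the stream name "DNS traffic" are examples, and the target stream has to exist already.

rule "route DNS to its own stream"
when
  to_long($message.dst_port) == 53
then
  // "DNS traffic" is an illustrative stream name, not one from this thread
  route_to_stream(name: "DNS traffic");
end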

Why not use something like this:

rule "drop DNS"
to_long($message.DstPort) == 53

It's necessary to have an extracted DstPort field.
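
If no extracted field is available yet, a cruder sketch that matches the raw message text should also work, assuming the substring "DstPort: 53," appears verbatim in the log line (it does in the sample above):

rule "drop DNS (no extractor)"
when
  // substring match on the raw message; crude, could false-positive on other fields
  contains(to_string($message.message), "DstPort: 53,")
then
  drop_message();
end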

Hi bahram,

Thanks for your reply; that sounds very interesting but goes beyond what we are trying to achieve.

I have a bunch of Cisco FTD devices (about 70) routing all traffic within our environment, several hundred Gbps in total. This is overwhelming our FMC appliance, and our retention period has dropped.
We are checking out Graylog to see if we can store these messages indefinitely, or at least for a few months.
Each connection generates a very detailed syslog message that is sent to Graylog, where we extract the various fields. This gives us a replica of FMC without the 300M-log limitation.

It's working perfectly so far, but we realized that 45% of all messages are related to DNS queries, so we want to exclude them from being written to disk.


Thanks Shoothub,

Forgive my ignorance in this matter; I'm a network engineer and new to Graylog.
I tried to modify the rule as you suggested but can't seem to get any hits.

I changed it as follows:

rule "drop DNS"
when to_long($message.dst_port) == 53
then drop_message();
end

dst_port is the name of the field that we are extracting from the messages:
{
  "title": "dst_port",
  "extractor_type": "regex",
  "converters": [],
  "order": 8,
  "cursor_strategy": "cut",
  "source_field": "message",
  "target_field": "dst_port",
  "extractor_config": {
    "regex_value": "((?<=DstPort:\\s).+?(?=,))",
    "condition_type": "none",
    "condition_value": ""
  }
}
The extractor is working; I can see the field when I look at a message.

The pipeline is receiving messages, but they are not hitting the rule.

  1. Check your processing order, so that the Message Filter Chain runs before the Pipeline Processor.
  2. Try to debug rule conditions using the debug function:

let debug_message = concat("Port: ", to_string($message.dst_port));
debug(debug_message);

Then check the log file /var/log/graylog-server/server.log for the debug output.



Thanks heaps, changing the processing order did the trick nicely.

I've separated the unwanted messages into different streams, and then applied pipeline rules as you suggested to drop them before they are written to disk.

That will save about 70% of disk space.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.