1. Describe your incident:
Not really an incident; I’m trying to understand the trade-off between cramming a bunch of related rules into one stage versus having a stage per rule.
I’m currently migrating my extraction rules to a pipeline.
Looking at my long list of extraction rules, I identified the rule that is triggered most often when parsing pfSense filterlog (firewall) logs and added it to stage zero. I then looked at the next most-triggered rule. I wasn’t sure whether to add it to the same stage or not, and chose to add a new stage (stage 1). My rationale is that I’ve selected the (somewhat confusingly named) option in stage zero of
“Messages satisfying none or more rules in this stage, will continue to the next stage”
I read this as “if a rule in this stage matches, stop here and don’t execute any further stages”, and it’s quite possible I’m reading that wrong.
The reason this seems logical to me is that once a pattern for a pfSense filterlog message matches, I’ll have captured all the fields I need without further enrichment. This should reduce the load on pipeline processing, since it won’t try to match a regex in the next rule that I already know won’t match.
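To make the idea concrete, here is a rough sketch of what a stage-zero rule along these lines could look like in the pipeline rule language. This is only an illustration of the “cheap check before the expensive parse” pattern, not my actual rule, and `PFSENSE_FILTERLOG` is a placeholder name for whatever custom grok pattern you have defined:

```
rule "parse pfSense filterlog"
when
  // cheap substring check first, so the grok only runs on candidate messages
  contains(to_string($message.message), "filterlog")
then
  // PFSENSE_FILTERLOG stands in for a custom grok pattern defined elsewhere
  set_fields(
    grok(
      pattern: "%{PFSENSE_FILTERLOG}",
      value: to_string($message.message),
      only_named_captures: true
    )
  );
end
```

The `when` clause is what keeps the stage cheap for non-matching messages; the open question is whether several such rules belong in one stage or in a stage each.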
So my question is: does it make a difference if I cram all my pfSense filterlog (firewall log) rules into one stage, or does the design pattern I followed, having a different stage for each rule, make more sense?
2. Describe your environment:
- OS Information: Docker single node, personal use.
- Package Version: 5.0.12
- Service logs, configurations, and environment variables: None
3. What steps have you already taken to try and solve the problem?
I’ve read through the documentation, but there doesn’t seem to be any guidance (or I didn’t look hard enough) on when you should create another stage for a new rule versus keeping it in the same stage.
4. How can the community help?
Looking for guidance or recommendations on how to think about the trade-offs between pipeline stages and performance.