Pipeline condition not evaluated correctly

Hi guys

I am creating pipeline rules, but my when condition is not interpreted correctly.

Development environment; no notifications, except the hint that a newer version is available.

Graylog Version: 3.2.x

OS: Ubuntu 20.04 LTS in a Hyper-V Environment, 4 cores, 10GB RAM

Message Processors Configuration: Message Filter Chain - Pipeline Processor. AWS and GeoIP processors deactivated.

This is the second rule in Stage 0; Rule 1 is working perfectly.

The original when condition I wanted to use is the following:

NOT(has_field("origin")) && to_long($message.WorkArea_AreaSize) > to_long(lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0))

  • $message.WorkArea_AreaSize is an extracted field from the Syslog_TCP Input
  • ltables_workareas is a lookup table

It seemed strange to me, so I did the following:

I put false into the when, and the rule behaved correctly (it did not fire).

Then I put true into the when and used debug() in the then block to see the result of both conditions:

debug(NOT(has_field("origin"))); ==> false

debug(to_long($message.WorkArea_AreaSize) > to_long(lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0))); ==> false

Both debug calls return false.

When I put either of those conditions, which evaluate to false, into the when, it always evaluates to true and jumps into the then block.

Does anybody have an idea what is wrong with my conditions? Or is this maybe a bug?

Thank you very much for your support

Kind regards

Patrick

  1. Try to use:
    NOT has_field("origin")
  2. Try to use parentheses () in the when condition (a complete rule in this form is sketched after this list):
    NOT A AND (B > C)
    (NOT has_field("origin")) AND (to_long($message.WorkArea_AreaSize) > to_long(lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0)))
  3. Also check whether you set a default value in the lookup table.
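
Putting points 1 and 2 together, the whole rule with balanced parentheses could look roughly like this (the rule name and the then block are only placeholders; field and table names are taken from your post):

rule "check_area_size"
when
  (NOT has_field("origin")) AND
  (to_long($message.WorkArea_AreaSize) > to_long(lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0)))
then
  // placeholder action; put your own logic here
  debug("condition matched");
end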

Dear shoothub

Thank you very much for your reply.

I already tried !has_field("origin") && xxx, as well as using NOT and AND instead.

And I put them in parentheses.

In the call lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0), the 0 at the end is the default value.

rule "compare_to_stored_AreaSize"
when
  !has_field("origin") && to_long($message.WorkArea_AreaSize) > to_long(lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0))
then
  debug(NOT(has_field("origin")));
  debug(to_long($message.WorkArea_AreaSize) > to_long(lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0)));
  debug("Stage 0 Rule 2");
  let WorkAreaSizeStored = to_string(lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0));
  let debug_message0 = concat("==> WorkAreaSizeStored= ", to_string(WorkAreaSizeStored));
  debug(debug_message0);
  let WorkAreaSize = to_long($message.WorkArea_AreaSize);
  let debug_message1 = concat("==> WorkAreaSize= ", to_string(WorkAreaSize));
  debug(debug_message1);
  let WorkAreaId = to_string($message.WorkArea_AreaID);
  let debug_message2 = concat("==> WorkArea_AreaID= ", WorkAreaId);
  debug(debug_message2);

  set_field("modified", true);
end

The result is:

2020-09-28T14:38:04.534+02:00 INFO [Function] PIPELINE DEBUG: Stage 0 Rule 1
2020-09-28T14:38:04.535+02:00 INFO [Function] PIPELINE DEBUG: ==> WorkAreaSize= 19874
2020-09-28T14:38:04.535+02:00 INFO [Function] PIPELINE DEBUG: ==> WorkArea_AreaID= 1477686
2020-09-28T14:38:04.537+02:00 INFO [Function] PIPELINE DEBUG: false
2020-09-28T14:38:04.538+02:00 INFO [Function] PIPELINE DEBUG: false
2020-09-28T14:38:04.538+02:00 INFO [Function] PIPELINE DEBUG: Stage 0 Rule 2
2020-09-28T14:38:04.538+02:00 INFO [Function] PIPELINE DEBUG: ==> WorkAreaSizeStored= 19874
2020-09-28T14:38:04.538+02:00 INFO [Function] PIPELINE DEBUG: ==> WorkAreaSize= 19874
2020-09-28T14:38:04.538+02:00 INFO [Function] PIPELINE DEBUG: ==> WorkArea_AreaID= 1477686

A little explanation:
Stage 0 Rule 1 checks whether the AreaID has already been processed; if not, it writes the value of AreaSize into the lookup table and sets the field "origin" to true, so I can count those messages in a dashboard.
The second rule checks the AreaSize of a new message for modification. If the size is higher than the stored value, it sets the field "modified" to true, so I can recognize and count those messages in the dashboard.
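
For reference, Rule 1 looks roughly like this (the rule name and the "already processed" check are simplified here, and the actual write of AreaSize into the lookup table is omitted, since that part depends on the lookup table's data adapter):

rule "store_new_AreaSize"
when
  // treat the AreaID as new when the lookup returns the default value 0
  to_long(lookup_value("ltables_workareas", $message.WorkArea_AreaID, 0)) == 0
then
  // here the value of WorkArea_AreaSize gets written into "ltables_workareas"
  // (not shown, depends on the data adapter)
  set_field("origin", true);
end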

Both rules are in the same Stage 0.
Have a safe day
Patrick

OK, I made some progress.
It seems to be a race condition…
When rule 2 fires, rule 1 has not finished yet.
In my first rule, I set a field "origin":
set_field("origin", true);

Now I made a second rule, which asks in the when for this field "origin":

rule "dummy_true"
when
  has_field("origin")
then
  debug("Stage 0 rule 2");
end

It will not fire.
If I change the when to true, it fires.

So, I need to find a workaround for this race condition.
If any Graylog developer reads this, please analyze this situation in the lab.

Stay safe, and do not let a virus kill another beautiful day.
Kind regards
Patrick

Rather try to move the second pipeline rule to stage 1 if you want to be sure that the first one fires before the second. I don't know if there is a guarantee about the order in which pipeline rules run within one stage.
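
If I remember the documentation correctly, within a single stage all rule conditions are evaluated against the message as it entered the stage, before any rule actions run, so a field set by one rule is only visible to rules in a later stage. A pipeline with two stages could look roughly like this (the pipeline name and the name of the first rule are placeholders; the second rule name is from your post):

pipeline "WorkArea checks"
stage 0 match either
  rule "store_new_AreaSize";
stage 1 match either
  rule "compare_to_stored_AreaSize";
end

Note that with "match either" a message only continues to stage 1 if at least one rule in stage 0 matched, so you may need to adjust the stage 0 rule's condition accordingly.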

Dear shoothub

This is exactly what I did: I modified the rules and added a new stage.
Now I have unique counters in my dashboard, and I can see if somebody attacks and modifies important values across the complete chain of enterprise applications.

Many thanks for your support.

Stay safe and healthy
Patrick
