Pipeline rule not working


(Guy Knights) #1

I’ve been trying to troubleshoot a pipeline rule, and even after cutting it back to a very simple flow I still cannot get it to work.

The rule is below:

rule "rds log matched"
when
   has_field("account_id")
then
   debug("testing");
end

I’m tailing the Graylog server log (single-node test setup) and I don’t see any debug messages in there. I do, however, see the following whenever I change and save the rule or pipeline config:

[DeadEventLoggingListener] Received unhandled event of type <org.graylog.plugins.pipelineprocessor.processors.PipelineInterpreter.State> from event bus <AsyncEventBus{graylog-eventbus}>

The pipeline I’m working on is connected to a stream that pulls messages (removes them) from All messages, and it has its own index. Not sure if that makes a difference.

Thanks,
Guy


(Jochen) #2

Please post the complete logs of your Graylog node(s) and the list of installed plugins including their versions (see System / Nodes / Details in the web interface).


(Guy Knights) #3

Sure. Here’s a link to the server log: https://fr.pastebin.ca/3891544

The installed plugins:

Anonymous Usage Statistics 2.3.1
AWS plugins 2.3.1
Collector 2.3.1
Elastic Beats Input 2.3.1
Enterprise Integration Plugin 2.3.1
MapWidgetPlugin 2.3.1
PagerDuty Alarmcallback plugin 1.3.0
Pipeline Processor Plugin 2.3.1


(Guy Knights) #4

I should just mention that in the interests of brevity I stopped Graylog and removed the existing server log file before restarting Graylog and letting it run for a while. So there are no examples of the [DeadEventLoggingListener] warning I mentioned earlier in the log I posted above.


(Jochen) #5

Please try removing the AWS plugin and the PagerDuty plugin, then test again.


(Guy Knights) #6

I can try removing the PD plugin, but the AWS plugin is the source of the log messages (AWS flow logs).

FYI, since my original message I have moved to a fresh installation of Graylog (the previous one was just a test setup I was playing with), and I still can’t get this to work. My actual intent is to route messages from one stream to another, which is not working. I have also added a debug statement, and it isn’t outputting anything to the log either. I changed the log level to debug, but still no luck.

This is my current rule:

rule "filter RDS"
when
   has_field("account_name") && $message.dst_port == 3306
then
   let stream_name = concat(capitalize(to_string($message.account_name)), " RDS");
   debug(to_string(stream_name));
   route_to_stream(name: to_string(stream_name));
end

For the record, I have another pipeline with the following rule that uses a lookup table to add a field and it’s working fine:

rule "add aws account name"
when
   has_field("account_id")
then
   let acc_name = lookup_value("aws-account-names", $message.account_id);
   set_field("account_name", acc_name);
end

However, the above pipeline rule is attached to the “All messages” stream, while the one I’m having problems with is connected to other streams. Those other streams successfully move logs out of “All messages”, so there are messages in the source streams; this pipeline just seems to do nothing.
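One thing worth checking here: if the enrichment rule and the routing rule run in the same pipeline, they must sit in separate stages so that account_name is set before the routing rule’s condition is evaluated. A rough sketch of the pipeline source, assuming both rules were moved into one pipeline (the pipeline name is illustrative):

pipeline "AWS processing"
stage 0 match all
  rule "add aws account name";
stage 1 match all
  rule "filter RDS";
end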


(Jan Doberstein) #7

@knightsg

Did you check what the field $message.dst_port actually contains? If the field holds a string, comparing it against the number 3306 will never match, so you need to convert the value before comparing:

rule "filter RDS"
when
  has_field("account_name") && to_string($message.dst_port) == "3306"
then
  let stream_name = concat(capitalize(to_string($message.account_name)), " RDS");
  debug(to_string(stream_name));
  route_to_stream(name: to_string(stream_name));
end

That should work.
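Alternatively, if dst_port is meant to be numeric, the comparison can be done the other way around by converting the field to a number with to_long (a sketch of the changed condition only):

when
  has_field("account_name") && to_long($message.dst_port) == 3306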


(system) #8

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.