Field added by pipeline not triggering alert

Hi there,

I’m currently trying to get alerted on a field that is added by a pipeline stage such as this:

rule "extract error level"
when
  true
then
  let fields = regex(".* (ERROR|INFO|WARN|DEBUG) .*", to_string($message.message),["severity"]);
  set_fields(fields);
end  
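The pattern itself can be sanity-checked outside Graylog; here's a rough Python equivalent (note: Graylog's regex() takes the capture-group names as a separate list argument, so the "severity" group is approximated below with a numbered group):

```python
import re

# Same pattern as in the pipeline rule; Graylog's regex() names the
# capture group "severity" via its third argument, Python uses group(1).
pattern = re.compile(r".* (ERROR|INFO|WARN|DEBUG) .*")

# A sample message line, including the raw ANSI color escapes.
sample = ("\x1b[0m\x1b[31m09:48:16,385 ERROR "
          "[de.newsaktuell.mb.jee.commons.AbstractQuartzScheduledMDB] "
          "(EJB default - 6) Error while execute scheduled command")

m = pattern.match(sample)
severity = m.group(1) if m else None
print(severity)  # ERROR
```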

As you can see, the rule adds a field “severity”. Looking at the specific stream the pipeline is connected to, the field is there and contains the right value. However, when I set up an alert condition (a simple field value condition for “severity: ERROR”), I never get notified. It seems the alert condition doesn’t see the severity field added by the pipeline. Is this the expected behavior? Could someone explain, please? Thanks!

Is there anybody who can give me some hint please?

What’s the exact alert condition you’ve set up?
What exactly do the indexed messages which ran through this rule look like?

Thanks @jochen,

As already stated, the alert condition is a field value condition: the alert is triggered when messages matching severity: “ERROR” are received. Grace period: 0 minutes. Including last 35 messages in alert notification. Configured to repeat notifications.

Examples for log messages within the connected stream:

[0me[0m14:57:00,000 INFO [de.newsaktuell.mb.de.schedule.orderce.OrderCreatorExecutorScheduledMDB] (EJB default - 1) OrderCreatorExecutorScheduledMDB triggered
[0me[31m09:48:16,385 ERROR [de.newsaktuell.mb.jee.commons.AbstractQuartzScheduledMDB] (EJB default - 6) Error while execute scheduled command: de.newsaktuell.ordercore.helper.procevent.OrderCreatorExecutorCommand@5f6345e2: java.lang.RuntimeException: Error while executing ProcEvent
[0me[32m11:16:51,695 DEBUG [org.jboss.as.config] (MSC service thread 1-2) Configured system properties:

So, since you ask, I guess this is supposed to work normally, right? I wasn’t sure, because both the alert and the pipeline are connected to the same stream, so the issue might somehow be related to a wrong processing order.

This looks like it’s just the “message” field of these messages. What’s in the other fields?

Alerts work on already indexed messages, i.e. after all processors have run and the messages have been indexed into Elasticsearch.
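To illustrate that ordering with a toy sketch (this only models the flow, it is not Graylog internals, and all function names here are made up): the pipeline rule mutates the message before it is indexed, and the alert check then runs over the indexed result, so it sees the added field.

```python
import re

# Toy model of the flow: pipeline rule -> index -> alert check.
# Names are illustrative only, not Graylog APIs.
def pipeline_extract_severity(message):
    """Mimics the 'extract error level' rule: adds a severity field."""
    m = re.match(r".* (ERROR|INFO|WARN|DEBUG) .*", message["message"])
    if m:
        message["severity"] = m.group(1)
    return message

def alert_matches(indexed_message, field, value):
    """Field content value condition: fires when the field contains value."""
    return value in str(indexed_message.get(field, ""))

msg = {"message": "09:48:16,385 ERROR [Foo] Error while execute command"}
indexed = pipeline_extract_severity(msg)  # pipeline runs before indexing
print(alert_matches(indexed, "severity", "ERROR"))  # True
```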

Hi jochen,

here is a more complete example of a typical error log entry. It was taken from an application specific stream that is derived from the “all messages” stream by filtering it for source == “/mbprocessing_application_1”

container_id: d7d51994c53d55576fa4c29f65c02e166f3b4df7a3044f7a6786841084719c18
container_name: /mbprocessing_application_1
full_message: {"container_name":"/mbprocessing_application_1","source":"stdout","log":"\u001B[0m\u001B[31m09:48:16,385 ERROR [de.newsaktuell.mb.jee.commons.AbstractQuartzScheduledMDB] (EJB default - 6) Error while execute scheduled command: de.newsaktuell.ordercore.helper.procevent.OrderCreatorExecutorCommand@5f6345e2: java.lang.RuntimeException: Error while executing ProcEvent ","container_id":"d7d51994c53d55576fa4c29f65c02e166f3b4df7a3044f7a6786841084719c18"}
message: [0me[31m09:48:16,385 ERROR [de.newsaktuell.mb.jee.commons.AbstractQuartzScheduledMDB] (EJB default - 6) Error while execute scheduled command: de.newsaktuell.ordercore.helper.procevent.OrderCreatorExecutorCommand@5f6345e2: java.lang.RuntimeException: Error while executing ProcEvent
severity: ERROR
source: /mbprocessing_application_1
tag: swarm-node5.node.dint.newsaktuell.de
timestamp: 2017-03-29T07:48:16.119Z

Please note that the severity field was extracted from the message field and added by the mentioned pipeline rule after the log message initially appeared in the application-specific stream. So to me it seems the pipeline processing is some kind of post-processing whose results are not visible to alerting. I hope you can correct me here and prove me wrong.

Thanks

That’s incorrect; here’s what I wrote before:

Please post the output of the following curl command (using your own credentials and the correct URI to the Graylog REST API):

# curl -u $USERNAME:$PASSWORD -H 'Accept: application/json' 'http://graylog.example.org:9000/api/alerts/conditions?pretty=true'

Hej @marcuslinke

Think of the alerts as a scheduled search that runs on the data that is present. So the situation you describe should not happen that way.

@jan, @jochen Thanks for the clarification!

Here is the alert configuration, as requested:

{
  "total" : 1,
  "conditions" : [ {
    "id" : "4a8bbcab-1cfb-451b-928d-59b8bf0301e1",
    "type" : "field_content_value",
    "creator_user_id" : "admin",
    "created_at" : "2017-03-20T15:12:44.218+0000",
    "parameters" : {
      "grace" : 0,
      "backlog" : 35,
      "repeat_notifications" : true,
      "field" : "severity",
      "value" : "ERROR"
    },
    "in_grace" : false,
    "title" : "mb-processing errors"
  } ]
}
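Reading that response back programmatically, just to double-check the parameters (a small sketch using Python's json module, with the response abridged to the fields checked here):

```python
import json

# Abridged copy of the API response above (only the fields used here).
response = json.loads("""
{
  "total": 1,
  "conditions": [{
    "type": "field_content_value",
    "parameters": {"grace": 0, "backlog": 35,
                   "repeat_notifications": true,
                   "field": "severity", "value": "ERROR"}
  }]
}
""")

cond = response["conditions"][0]
params = cond["parameters"]
print(cond["type"], params["field"], params["value"])
# field_content_value severity ERROR
```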

This looks correct to me.

Are you sure that there isn’t just an already triggered alert which isn’t being repeated (a new option in Graylog 2.2.x), and that there is in fact an alarm callback configured for that stream?

Everything works as expected now. The issue was a misconfigured mail server. @jan @jochen Thanks a lot for your help!