Pipeline rule for Threat Intelligence not matching

Hi there:

Using Bro/Zeek to send traffic logs via Filebeat.

A JSON extractor is applied to the input so that the message is broken down into fields.

Message Processor configured as follows:

The stream shows message fields as expected:

And a couple of pipeline rules are configured and connected to the stream to apply a threat intelligence lookup on the field "id_resp_h" shown above (the destination IP of the connection):

RULE #1:
rule "Threat Intelligence Lookups: id_resp_h"
when
  has_field("id_resp_h")
then
  set_fields(threat_intel_lookup_ip(to_string($message.id_resp_h), "id_resp_h"));
end

RULE #2:
rule "OTX Lookup: id_resp_h"
when
  has_field("id_resp_h")
then
  let intel = otx_lookup_ip(to_string($message.id_resp_h));
  set_field("threat_indicated", intel.otx_threat_indicated);
  set_field("threat_ids", intel.otx_threat_ids);
  set_field("threat_names", intel.otx_threat_names);
end

The problem I’m facing is that I see no matches for these rules:

And as a consequence the threat intel plugin doesn’t trigger.

Any ideas please?

Thanks!


Forgot to mention: Graylog version 3.0.2.

Thx


Try to check whether the rule matches using the debug function:

https://docs.graylog.org/en/3.1/pages/pipelines/functions.html#debug

let debug_message = concat("Bro response ", to_string($message.id_resp_h));
debug(debug_message);

After that, check the Graylog logs for the output to see if the rule matched:
sudo tail -f /var/log/graylog-server/server.log

You should see something like this in the log:
INFO [Function] PIPELINE DEBUG: Bro response xxx
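
For example, a minimal sketch of the first rule above with the debug call added, so the looked-up value is written to server.log when the rule fires:

rule "Threat Intelligence Lookups: id_resp_h"
when
  has_field("id_resp_h")
then
  // log the value about to be looked up, to confirm the rule fired
  let debug_message = concat("Bro response ", to_string($message.id_resp_h));
  debug(debug_message);
  set_fields(threat_intel_lookup_ip(to_string($message.id_resp_h), "id_resp_h"));
end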


Thanks for your message.

So, the only way I found to trigger the rule was changing its condition to “when true”.

Enabling the debug you proposed (thanks for that) confirms that, for whatever reason, the field id_resp_h is not found while processing the pipeline rule:

INFO o.g.p.p.a.f.Function [processbufferprocessor-1] PIPELINE DEBUG: Bro response

(No IP Address shown).
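
The temporary test rule looked roughly like this (just a sketch; the condition is forced to true so the debug runs even when has_field would not match):

rule "Debug: id_resp_h"
when
  true
then
  // writes to server.log for every message passing through this pipeline stage
  debug(concat("Bro response ", to_string($message.id_resp_h)));
end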

What's really interesting is that I have two more Graylog inputs, also using Beats with a JSON extractor and pipeline rules to apply threat intel, and those are working with no issues.

Web App Firewall:

INFO o.g.p.p.a.f.Function [processbufferprocessor-2] PIPELINE DEBUG: WAF response 144.138.x.y

OSQUERY for Open Sockets:

INFO o.g.p.p.a.f.Function [processbufferprocessor-4] PIPELINE DEBUG: OSQUERY response 134.209.x.y

I'm puzzled. :(

Alright, tricky one.

So, the problem was that the field actually sent out by Bro/Filebeat is "id.resp_h", not "id_resp_h".

The JSON extractor applied to the input was changing "id.resp_h" in the original message to "id_resp_h"; however, the pipeline rule does not match on the "_" name, but it does match on the "." name.
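
In other words, a condition written against the dotted name does match. A minimal sketch to confirm just the match (using only functions already shown in this thread):

rule "Debug: dotted field name"
when
  has_field("id.resp_h")
then
  // confirms that, at pipeline time, the field name still contains the dot
  debug("matched id.resp_h");
end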

After that, another problem appeared: the threat intel plugin itself was not finding any string to check against the database.

So, the final solution applied is: decode the JSON at the origin (Filebeat) and rename the fields starting with "id." to "id_".
Without this rename processor, the same problem would happen with the OTX plugin not finding the field (I tested that myself).
Filebeat config:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /usr/local/spool/bro/*.log
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  document_type: json
  json.message_key: log
  json.keys_under_root: true
  json.overwrite_keys: true
processors:
- rename:
    fields:
     - from: "id.resp_h"
       to: "id_resp_h"
    ignore_missing: false
    fail_on_error: true
tags: ["bro"]
fields:
  type: log
  collector_node_id: bro
output.logstash:
  hosts: ["x.y.z.w:5045"]

At this point the JSON extractor on the Graylog input is not required anymore.

Pipeline rules:

rule "Threat Intelligence Lookups: id_resp_h"
when
  has_field("id_resp_h")
then
  set_fields(threat_intel_lookup_ip(to_string($message.id_resp_h), "id_resp_h"));
end

rule "OTX Lookup: id_resp_h"
when
  has_field("id_resp_h")
then
  let intel = otx_lookup_ip(to_string($message.id_resp_h));
  set_field("threat_indicated", intel.otx_threat_indicated);
  set_field("threat_ids", intel.otx_threat_ids);
  set_field("threat_names", intel.otx_threat_names);
end

Any other field in the Bro logs starting with "id." would have to be renamed to "id_" on the Filebeat collector as well, if the pipeline rules require it.
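
For reference, a sketch of the rename processor extended to the usual Bro/Zeek connection tuple fields (the exact field list is an assumption, adjust it to whatever your rules need; ignore_missing is set to true here so logs that lack some of these fields still pass through):

processors:
- rename:
    fields:
      # assumed conn.log tuple fields; extend as required
      - from: "id.orig_h"
        to: "id_orig_h"
      - from: "id.orig_p"
        to: "id_orig_p"
      - from: "id.resp_h"
        to: "id_resp_h"
      - from: "id.resp_p"
        to: "id_resp_p"
    ignore_missing: true
    fail_on_error: false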

I can't find any explanation for why this was happening.

Hey @juaromu,

The reason is that, in the past, dots in field names caused problems in Elasticsearch. One Elasticsearch version did not work with dots in field names, while the following version allowed them again.

During that time we implemented a workaround for the dots in field names, and we had many installations in the wild, some on an Elasticsearch version that allowed dots and some on a version that did not.

That is why Graylog does not allow dots in field names.

Some inconsistent behavior remains from that time. You might want to check the GitHub issues of the server for an existing issue about this problem, or create a bug report about the inconsistent behavior.

Hi Jan:

Thanks for the explanation, that makes total sense now. :-)

For me, given that this issue can be fixed at the origin by renaming the fields with the Filebeat processor, it's all good now, but it's good to know about this and keep it in mind.

Thanks again.
Juan.
