Help with combine rule and extractor

I'm looking for advice on writing an optimal pipeline rule.
The situation is this:
The project has a set of logs. Most of them carry data in request_body and response_body. Some of that data is in JSON format, some in XML, and some in other formats.
I made extractors based on regular expressions that pull the request_body and response_body blocks out of the main log. Then, from those blocks, a JSON extractor parses the data into separate fields. As a result, everything works fine for logs in JSON format, and everything breaks for everything else.
I'm no master of pipelines, I'm just starting to learn.
So my question is: is it possible to write a pipeline rule whose condition, based on the value of some already-parsed field (in this case request_uri), would prevent JSON parsing for all non-JSON logs? Something like this:

rule "stop parse not json logs"
when
  has_field("request_uri") &&
  (contains(to_string($message.thread_name), "x") ||
   contains(to_string($message.thread_name), "y") ||
   contains(to_string($message.thread_name), "z"))
then
  // ?
end

And then I don't know what to put in the then block. Can a rule somehow say "do not execute extractor "json parser""? In other words, can Graylog rules see extractors at all? If so, what would be the right function to use?

Of course, there is an alternative way to solve this:
instead of just two extractors, make a huge number of them, one per URL, and exclude the unwanted logs from parsing that way. But that is a very hacky approach. Is there a way to solve this with pipelines?
Sorry if my post looks dumb.

Hey @Garikos

I assume all these logs are going to the same input? If so, would it be possible to separate the logs by type into different inputs? That would be easier to adjust and/or modify.

Your extractors have probably already run by the time you get to pipelines, depending on your processing order.

I would just move it all to pipelines. Create one rule for each type of data, with a when clause that somehow matches the correct messages. Have that rule parse that data in that format. Then add all the rules to stage 0 of the pipeline and away you go. Only the one rule will actually run and parse in the correct format.
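As a rough sketch of what one such rule might look like (assuming the raw payload lands in a field called request_body; the field name and the exact when condition are illustrative, not from your setup):

```
rule "parse JSON request_body"
when
  has_field("request_body") &&
  // is_json/parse_json is the usual idiom for "does this parse as JSON?"
  is_json(parse_json(to_string($message.request_body)))
then
  // flatten the parsed JSON into top-level message fields
  set_fields(to_map(parse_json(to_string($message.request_body))));
end
```

A sibling rule in the same stage would have a when clause matching your XML (or other) payloads and call the appropriate parser instead, so each message is only ever handled by the rule whose condition it satisfies.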

Pipelines can seem daunting but once you are used to them they are so powerful and really not hard.
