Yes, you can do everything (and more) in pipelines that you can do in extractors. You can also do stream routing in the pipeline based on the extraction/normalization you have done there.
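As a minimal sketch of extractor-like processing in a pipeline rule (the field name and the Grok pattern are just examples, not anything from your setup):

```
rule "extract fields like an extractor"
when
  has_field("message")
then
  // grok() gives you the same kind of field extraction as a Grok extractor;
  // set_fields() writes all named captures onto the message
  set_fields(grok(pattern: "%{IPORHOST:client_ip} %{WORD:http_method}",
                  value: to_string($message.message)));
end
```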
Your workflow would be nearly the one you wrote down:
Input > All Messages > Processing Pipelines
and in the processing pipelines you can do whatever fits your needs. For example, split by log type into separate streams and have other pipelines connected to those streams that do additional extraction/processing. The other option would be to do all the processing in one place and only save the final result to a new stream.
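A rule for the split-by-log-type option could look roughly like this (the `log_type` field and the "Firewall Logs" stream are placeholders you would replace with your own):

```
rule "route firewall logs"
when
  has_field("log_type") && to_string($message.log_type) == "firewall"
then
  // send the message to the stream; pipelines connected to it take over
  route_to_stream(name: "Firewall Logs", remove_from_default: true);
end
```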
Pipelines can route messages to streams, and other pipelines can be connected to those streams. So the routing is not from pipeline to pipeline, but from stream to stream by way of a pipeline. One important note: you need to keep your stage numbers in mind, because a pipeline connected to the target stream picks up processing at the stage after the one in which the message was routed.
Note that the built-in function `route_to_stream` causes a message to be routed to a particular stream. After the routing occurs, the pipeline engine will look up and start evaluating any pipelines connected to that stream.

So if you route a message from stage 1 on `stream_a`, the first stage that runs for the message on `stream_b` is stage 2: the pipeline processing does not start over from the first stage of the newly connected pipeline. This is done to prevent user-created loops.
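To make the stage numbering concrete, here is a hypothetical pair of pipelines (all names and rules are made up for illustration): pipeline A, connected to `stream_a`, routes in stage 1, so pipeline B, connected to `stream_b`, only starts at stage 2 for that message, and anything in its earlier stages never sees it.

```
pipeline "Pipeline A"            // connected to stream_a
stage 1 match all
    rule "route to stream_b";    // calls route_to_stream(name: "stream_b")
end

pipeline "Pipeline B"            // connected to stream_b
stage 1 match all
    rule "skipped for routed messages";  // stage 1 is already past
stage 2 match all
    rule "first rule that runs here";    // processing resumes at stage 2
end
```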