Decorators: These are not related to pipelines; they are an inline tool used on the Search page (data pulled from the Elasticsearch database based on your queries) that dynamically creates temporary fields in search results based on the decorator configurations you set up. The resulting fields are not searchable. So, for instance, you could have a search on a stream connected to a firewall and add a decorator to pick out information for clarity. Here is an example:
Palo Alto FW Message:
eastcoastFW.myco.co 1,2022/04/29 07:43:08,012001018874,TRAFFIC,end,2561,2022/04/29 07:43:08,10.20.7.19,207.211.31.113,50.207.58.82,207.211.31.113,outbound-connect,,,ssl,vsys1,internal-zone,external-zone,ethernet1/2,ethernet1/1,log forwarding,2022/04/29 07:43:08,16913,1,60612,443,53899,443,0x40041a,tcp,allow,6615,1367,5248,18,2022/04/29 07:42:51,0,not-resolved,,7037914757779042946,0x0,10.20.0.0-10.20.255.255,United States,,8,10,tcp-rst-from-client,0,0,0,0,,fwb1,from-policy,,,0,,0,,N/A,0,0,0,0,953d8a3a-3bca-4460-a56c-e2dbb76cedb1,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2022-04-29T07:43:08.073-04:00,,,encrypted-tunnel,networking,browser-based,4,"used-by-malware,able-to-transfer-file,has-known-vulnerability,tunnel-other-application,pervasive-use",,ssl,no,no,0
[NOTE: The message above has been broken out into constituent fields with a separate pipeline rule (open a new question if you want that)]
With a “Format String” decorator applied, you can arrange the information so it reads more easily.
Here is the decorator configuration (shown when editing a message table):

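As a rough sketch, the Format String decorator settings could look something like this. The field names (src_ip, dst_city, dst_country, dst_ip) are hypothetical; they depend on what your pipeline rule named the extracted fields:

```
Format String:  ${src_ip} just connected out to something in ${dst_city}, ${dst_country} with an IP of ${dst_ip}
Target field:   ThisJustHappened
```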
Which would give you a field in your search that looks like this:
ThisJustHappened
10.20.7.19 just connected out to something in Provincetown, United States with an IP of 205.139.111.12 (decorated)
Streams: With Graylog, you receive messages on an Input and store them in an Elasticsearch database. In between those two are Streams. A Stream can be attached to one or more Inputs to catch the messages and direct them to an Index. (There is a default stream that everything goes through unless you set up your own.) These Streams are where you can manipulate the data… actually, you can manipulate data even before it hits a stream, with an Extractor attached to the Input to catch and modify it. If you want to change data in the stream, you attach a processing pipeline to the stream and set up rules in the pipeline to manipulate the data, e.g. pull out fields from the original message.
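As a minimal sketch of such a pipeline rule, here is one that pulls a few fields out of a raw message with grok(). The rule name is made up and the pattern is heavily trimmed; a real PAN-OS rule would use a much fuller pattern:

```
rule "extract fields from palo alto traffic log"
when
  has_field("message")
then
  // Hypothetical, trimmed-down pattern just to show the mechanics:
  // matches "eastcoastFW.myco.co 1,2022/04/29 07:43:08,012001018874,TRAFFIC,end,..."
  let parsed = grok(
    pattern: "%{HOSTNAME:fw_host} %{INT:fw_future_use},%{DATA:fw_receive_time},%{DATA:fw_serial},%{WORD:fw_log_type},%{WORD:fw_log_subtype}",
    value: to_string($message.message),
    only_named_captures: true
  );
  // Copy every named capture onto the message as its own field.
  set_fields(parsed);
end
```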
With all that - if you have a stream of messages coming from Windows, you would only break it out into three streams if you wanted to make sure you are storing “Application”, “Security”, “System” (etc.) in different indices so that you can set different retention times for each. If they all come in on one stream going to one index, the pipeline rules can simply say in the WHEN of each rule that the winlog_channel must be the type you want to manipulate (Application, Security, System, etc.).
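A sketch of what one of those rules might look like (the winlog_channel field name comes from your Windows log shipper, and the THEN action is just a placeholder):

```
rule "only touch Security channel messages"
when
  has_field("winlog_channel") && to_string($message.winlog_channel) == "Security"
then
  // Placeholder action: tag the message so later rules or searches can key off it.
  set_field("event_class", "windows-security");
end
```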
Dropping messages: If you use drop_message() in a pipeline rule, the message will never get to the index (never be stored in Elasticsearch). So if all three streams/pipelines go to the same index and you drop “Application” messages in one stream, they will never be stored in that index.
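A minimal sketch of such a rule, again assuming the hypothetical winlog_channel field name:

```
rule "drop Application channel messages"
when
  has_field("winlog_channel") && to_string($message.winlog_channel) == "Application"
then
  // The message is discarded here and never reaches the index.
  drop_message();
end
```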
Hope all that makes sense!