Graylog in a Nutshell (Diagram)

I wouldn't put nxlog etc. inside the sidecars; sidecars just launch a specific log shipper (like in the green box you have there) based on data they receive from Graylog. Also, rules are an integral part of streams; without rules there are no streams. Indices and pipelines are both dependent on streams, but messages ultimately go to indices. Outputs are dependent on streams too, so I'd put those as equal to alerts, indices and pipelines. Ideally pipelines should sit between streams and indices, right?

The way I did the rules with the streams was sort of what I meant. It was implied that you need them, like you need connections, stages and rules for the pipelines to work.

I'm trying to keep this simple, but I could wrap each general function with its dependents to show they are necessary.

Point taken, though. I'll add a few words to clarify and wrap the streams.

Not entirely sure I follow about the event definitions. Let me redo it; tell me if this is right.

Seems like a lot of work is done within the streams.

Is this a little more accurate?


@WavedirectTel

Wow :eyes:

I could use you over here to make a couple of logical diagrams :smiley: Good job.

I think you have achieved this.


I always thought alerts work by periodically searching the indices, and that when I attach an alert to a stream it just filters for "stream: my-stream-id" in this periodic search.

Otherwise LGTM, if Graylog is configured like this:
(screenshot: Graylog processor configuration)

But if you reverse the order of these two, then Pipelines come first and Extractors and Stream Rules run after them. Pipelines will then see everything coming in on the "All messages" stream, which can look useless until someone tells you that pipelines can do route_to_stream. Then the message appears in the new stream and the pipelines connected to that stream start to run. So it's possible to do everything in pipelines too.
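For illustration, here is a minimal sketch of such a routing rule; the field check and the stream name are hypothetical, route_to_stream is the built-in pipeline function mentioned above:

```
rule "route firewall logs to their own stream"
when
  // hypothetical condition: only messages from this firewall host match
  has_field("source") && to_string($message.source) == "fw01.example.com"
then
  // move the message into the named stream; remove_from_default also
  // takes it out of the "All messages" stream
  route_to_stream(name: "Firewall Logs", remove_from_default: true);
end
```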


So is nisow95612 correct? Or is the diagram fine as it stands?

He is actually right, but this depends on how you're setting up your Graylog instance. For a basic understanding you're good. You could put a note about the settings in Processor Configuration.

I like the diagram in general a lot! I'm wondering, though, if it would be suitable to attach the Outputs to the Streams as well. From my point of view the output is configured per stream, after all the processing is finished.
I use Lookup tables mostly in pipelines - I’m not aware of any other way to use them. I’d vote to shift them from Dashboards to Pipelines.

Good catch :+1: on the Dashboard tip. After reading about Outputs:

All of these Outputs first write messages to an on-disk journal in the Graylog cluster. Messages stay in the on-disk journal until the Output is able to successfully send the data to the external receiver. Once the messages have been written to the journal, they are optionally run through a processing pipeline to modify or enrich logs with additional data, transform the message contents, or filter out some logs before sending.

So I believe you're correct, @ihe. I'm still learning something new :laughing:

I will try to revise. So it only outputs from the streams?

This is correct from what I read.

Wouldn't it be output from the Stream "Rules"? Or just Streams (All)?

It doesn't let me edit my post, so if an admin can post this to the original and replace the old one, please do. Thanks.

(added current diagram to the initial post) :smiley:


@WavedirectTel

Sorry for the delay, I was on vacation. As I understand it, this would be on Streams, and each stream could have a different output and type.


In my opinion, and based on how one configures alerts, that part of the system looks in the database / indices directly, using the period and search query that one configures.

The system then schedules the alert queries. That is how it seems to be implemented in version 4.x, though older versions seemed to work a bit differently.

Yup, alerts are effectively cronjobs that periodically search Elasticsearch. I wish there was a way to hook some of my alerts directly into the processing flow, or better, maybe allow pipeline rules to create real-time alerts?

Dunno, I haven't got that far with it yet. But I am enjoying the hell out of Graylog so far. So much we can do, but so much to learn!

I still think lookup tables are (mostly) used in pipelines. They can be used in decorators afterwards, but the main use, at least to me, is in pipelines.
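For what it's worth, here's a minimal sketch of that pattern, assuming a hypothetical lookup table named "threat_intel" and a hypothetical src_ip field (lookup_value and set_field are built-in pipeline functions):

```
rule "enrich with lookup table"
when
  has_field("src_ip")
then
  // look up the source IP in the (hypothetical) "threat_intel" lookup table,
  // falling back to "unknown" when there is no match
  let category = lookup_value("threat_intel", to_string($message.src_ip), "unknown");
  set_field("threat_category", category);
end
```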