This seems to be an obvious question but I think I’m misunderstanding how the Alert Filter timings are set.
If I want a Search to check once a day for logs that meet the filter how do I configure the timings?
At the moment I have
Search within the last = 24 hours
Execute search every = 24 hours
This seems to run a search every hour at the minute and second at which I created the Alert Definition (e.g. 13:10:22 then 14:10:22 etc.).
Additionally, if I want this Alert Definition to run at 00:01, for example, do I need to create the Alert Definition at that time? There doesn’t seem to be a way to set what time the filter runs.
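For reference, the two UI settings above are stored as milliseconds in the event definition's config, so a correctly saved 24-hour definition should carry the same value in both fields. A quick sanity check (plain Python, nothing Graylog-specific):

```python
# The UI's hour-based settings map to millisecond fields in the
# event definition config (search_within_ms / execute_every_ms).
MS_PER_HOUR = 60 * 60 * 1000

search_within_ms = 24 * MS_PER_HOUR  # "Search within the last = 24 hours"
execute_every_ms = 24 * MS_PER_HOUR  # "Execute search every = 24 hours"

print(search_within_ms)  # 86400000
print(execute_every_ms)  # 86400000
```

If the API shows anything other than 86400000 for either field, the UI saved something different from what was entered.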
I haven’t been able to confirm your observation about executing every 24 hours, but you can get more detail using the API. I would be interested specifically to see what your “search_within_ms” and “execute_every_ms” parameters look like.
The API browser makes it pretty easy to take a peek behind the curtain without setting up Postman or using curl with a JSON payload. As for scheduling at a certain time, this might be a place where the API can come in handy as well. I’ve seen the question about running at a specific time a few times but have never seen a solution, so perhaps setting up a scheduled task that creates/enables the event at the time you want it to execute every 24 hours would be better than sitting around waiting to click a button.
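To illustrate what to look for in the API response, here is a small sketch that pulls those two fields out of a GET /events/definitions payload. The sample JSON below is an assumption from memory (the `event_definitions` wrapper key, the title, and the `aggregation-v1` type are illustrative, not copied from this thread); only the two `_ms` field names come from the discussion above:

```python
import json

# Hypothetical excerpt of a GET /api/events/definitions response.
# Shape is assumed; check your own instance via the API browser.
sample = json.loads("""
{
  "event_definitions": [
    {
      "title": "Daily log check",
      "config": {
        "type": "aggregation-v1",
        "search_within_ms": 86400000,
        "execute_every_ms": 86400000
      }
    }
  ]
}
""")

for definition in sample["event_definitions"]:
    cfg = definition["config"]
    # Both values should be 86400000 for a once-a-day definition.
    print(definition["title"], cfg["search_within_ms"], cfg["execute_every_ms"])
```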
I’m not too worried about the time it’s running at, really; it’s more about getting it to run once a day rather than once an hour. The settings seem correct, so I think I must be misunderstanding how Graylog interprets the two time settings?
Your interpretation is exactly what mine would be. I did just modify one of my event definitions to execute every 24 hours and when I look at the config I see that it has the correct value in milliseconds.
I’m on 4.0.1. Are you as well? It would be interesting to see if yours also says 86400000, and then to see the scheduler context to see if it’s actually running once every 24 hours or more often.
If you can demonstrate that your config says 24 hours in milliseconds but that the “triggered_at” and “next_time” values aren’t 24 hours apart then maybe you’ve stumbled upon a bug.
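That check can be done mechanically by diffing the two scheduler timestamps. A minimal sketch in plain Python, with made-up timestamp values in the ISO-style format Graylog returns (your actual `triggered_at` and `next_time` will differ):

```python
from datetime import datetime, timedelta

# Hypothetical scheduler timestamps; substitute the real values
# from your event definition's scheduler data.
triggered_at = "2020-12-03T08:22:25.000Z"
next_time = "2020-12-03T09:22:25.000Z"

fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
delta = datetime.strptime(next_time, fmt) - datetime.strptime(triggered_at, fmt)

expected = timedelta(hours=24)
print(delta)             # 1:00:00 -> only an hour apart
print(delta == expected) # False   -> does not match execute_every_ms
```

If `delta` comes out as one hour while the config says 86400000 ms, that would be the mismatch worth reporting.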
So the schedule for the definition found in GET /events/definitions says:
@ttsandrew where would I find the scheduler context to see if it’s running more often than defined?
So the definition looks like the one below; however, it has raised a lot of repeated alerts in a single day:
2020-12-03 08:22:25 Server1
2020-12-03 09:10:43 Server2
2020-12-03 09:22:25 Server1
2020-12-03 10:10:43 Server2
I’m using 3.3.1, if that matters, and the definition is currently set to Create Events for Definition if… Filter has results. I’m wondering if I need to configure an aggregation and somehow split the aggregation by server name.
It still looks like the event definition is running more than every 24 hours though.
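Grouping the four alert timestamps above per server makes the actual interval explicit. A quick sketch using exactly those values:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# The four alerts reported above.
alerts = [
    ("2020-12-03 08:22:25", "Server1"),
    ("2020-12-03 09:10:43", "Server2"),
    ("2020-12-03 09:22:25", "Server1"),
    ("2020-12-03 10:10:43", "Server2"),
]

by_server = defaultdict(list)
for ts, server in alerts:
    by_server[server].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))

# Interval between consecutive alerts for each server:
for server, times in by_server.items():
    gaps = [b - a for a, b in zip(times, times[1:])]
    # Each server's gap is one hour, not the configured 24.
    print(server, gaps)
```

Per server, the gap is exactly one hour, which matches an hourly execution rather than the configured daily one.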
This looks like it should be running once a day. @aaronsachs, do you have a moment to review? Are we missing something?
Hi folks, this looks correct at first glance, but maybe @jan might have a bit more insight. Nothing seems to be missing.
That looks like a bug to me. Maybe it is similar to what @ttsandrew referred to, where the UI has a bug and the API would allow you to be more granular.
We have that in some places where the UI is the limiting factor, but you can express very specific settings with the API.
So, returning to this: do you mind opening a bug report on GitHub for that, @nick?
@jan I’ll open a bug for it, but unfortunately it won’t help us: we use Group Mappings, which have been removed from Graylog in v4+.
Is there any further testing I can do to ensure it is a bug and not a configuration mistake?
We’ve had this issue again over the weekend with exactly the same behavior.
The UI and API seem to match when it comes to settings, can anyone else replicate this behavior?
I’d like to figure out a temporary fix before raising a bug because we’re unlikely to be able to deploy an updated version of Graylog in 4+ for a while.
FWIW, do you have an Enterprise license?
If you did, you could set up a dashboard with the query you want and then just have it generate a report daily. The added benefit is that you could check the dashboard whenever you wanted to see if anything matched your search at that time. Just a thought.
We don’t currently have an enterprise license, we did have pre-sales calls but they didn’t exactly fill us with confidence!
Our security requirement is for alerting; it has to be push alerts, and passive time-based monitoring isn’t good enough. We also found that you can’t do mathematical operations on time-based logs within dashboards, which was one of the things we raised during pre-sales. We were told to raise a feature request, which we did, and then, like most things on GitHub, it has sat there with no action.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.