What I would like is to somehow store this value within Graylog, so that the next time the search runs I can compare the two. If the new value is, let's say, 2x or more, it would trigger an email notification indicating a possible DDoS attack.
Writing a bash/py script to get this value via the API, doing calculations, and sending it back to Graylog would probably be the easiest way, but I would like to know if there’s a workaround/hack I can do here to keep everything within Graylog.
If this isn't the best way to go about it, I'll gladly listen to suggestions.
TL;DR: I want to send an alert if there's been an increase of 100% in the message count over the last 10 minutes, compared to the 10 minutes beforehand.
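For the script route, something like the following is what I have in mind. This is only a rough sketch: the URL, token, and query string are placeholders, and it assumes your Graylog version still exposes the legacy universal search endpoints (`/api/search/universal/relative` and `/absolute`); newer releases may require the Views search API instead.

```python
#!/usr/bin/env python3
"""Sketch: compare Graylog message counts for two adjacent 10-minute windows."""
from datetime import datetime, timedelta, timezone

import requests

GRAYLOG_URL = "http://graylog.example.com:9000"   # placeholder
API_TOKEN = "REPLACE_ME"                          # Graylog API token (placeholder)
QUERY = "*"                                       # narrow to a stream/input if needed

auth = (API_TOKEN, "token")                       # the token goes in the username field
headers = {"Accept": "application/json"}


def count_last_10_minutes() -> int:
    """Message count for the last 10 minutes (relative search)."""
    r = requests.get(
        f"{GRAYLOG_URL}/api/search/universal/relative",
        params={"query": QUERY, "range": 600, "limit": 0},
        auth=auth, headers=headers,
    )
    r.raise_for_status()
    return r.json()["total_results"]


def count_previous_10_minutes() -> int:
    """Message count for the 10-minute window before that (absolute search)."""
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%S.000Z"
    r = requests.get(
        f"{GRAYLOG_URL}/api/search/universal/absolute",
        params={
            "query": QUERY,
            "from": (now - timedelta(minutes=20)).strftime(fmt),
            "to": (now - timedelta(minutes=10)).strftime(fmt),
            "limit": 0,
        },
        auth=auth, headers=headers,
    )
    r.raise_for_status()
    return r.json()["total_results"]


if __name__ == "__main__":
    current = count_last_10_minutes()
    previous = count_previous_10_minutes()
    # Alert on a 100% (2x) increase; swap the print for an email or webhook call.
    if previous > 0 and current >= 2 * previous:
        print(f"Possible DDoS: {previous} -> {current} messages in the last 10 minutes")
```

Cron it every 10 minutes and the comparison stays entirely outside Graylog, which is exactly the part I'd rather avoid.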
There have been a couple of situations where other members wanted the same thing. I don't know of a hack yet.
But I do believe the Enterprise version might be capable of doing that for you; it's free under 2 GB per day.
I did come across an idea, but it would involve installing some additional components. Not sure if you want to go that far.
I enabled Prometheus on Graylog from here
Then I installed Grafana on Graylog from here
Then I created a dashboard for Graylog, and since the metrics are there, I can monitor my Inputs and the Streams I've created. If the percentage rises above what I want, Grafana is capable of sending alerts. Below is an example of one stream that I have chosen.
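If it helps, the "double the previous 10 minutes" condition can be expressed as a PromQL comparison, either pasted into a Grafana alert rule or polled directly from Prometheus. Here is a minimal sketch against the Prometheus HTTP API; the metric name `gl_input_read_messages_total` is hypothetical, so substitute whichever message counter your Graylog Prometheus exporter actually exposes.

```python
#!/usr/bin/env python3
"""Sketch: ask Prometheus whether the last 10 minutes doubled the previous 10 minutes."""
import requests

PROMETHEUS_URL = "http://localhost:9090"  # placeholder

# increase(...) over the last 10m compared against the 10m before it (offset 10m).
# The metric name is hypothetical; the same expression works as a Grafana alert condition.
EXPR = (
    "increase(gl_input_read_messages_total[10m])"
    " > 2 * (increase(gl_input_read_messages_total[10m] offset 10m))"
)

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": EXPR})
resp.raise_for_status()
results = resp.json()["data"]["result"]

# A PromQL comparison only returns series for which the condition holds.
if results:
    print("Message rate at least doubled in the last 10 minutes:", results)
```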
Although this definitely does work, it feels like more overhead than a simple script. My reasoning is that running two additional services on one host would add resource usage I'd like to avoid. Plus, even if the installation is simple, there's more to set up compared to a script that calls an API, gets a value, and does one calculation.
For now I will try the script route, but I'll leave this discussion open in case anyone comes up with a hack that keeps everything within Graylog.