We have a use case where we need to run a stream processor over incoming data.
For example: suppose CPU utilization data is flowing into some stream in Graylog.
We want to write a rule that computes the average CPU utilization over a 15-minute window and, if it exceeds 90% (or satisfies something more complex, such as lying more than 2 standard deviations from the mean), triggers some action.
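To make the rule concrete, here is a minimal Python sketch of the kind of check we mean: a sliding 15-minute window with both a fixed-threshold test and a 2-sigma test. The class name, thresholds, and method names are illustrative only, not any Graylog API.

```python
from collections import deque
from statistics import mean, stdev

WINDOW_SECONDS = 15 * 60  # 15-minute window


class CpuWindow:
    """Keep CPU samples for the last 15 minutes and flag anomalies."""

    def __init__(self, window_seconds=WINDOW_SECONDS):
        self.window_seconds = window_seconds
        self.samples = deque()  # (timestamp, cpu_percent) pairs

    def add(self, timestamp, cpu_percent):
        self.samples.append((timestamp, cpu_percent))
        # Evict samples that have fallen out of the window.
        while self.samples and timestamp - self.samples[0][0] > self.window_seconds:
            self.samples.popleft()

    def average(self):
        return mean(v for _, v in self.samples)

    def is_anomalous(self, threshold=90.0, sigmas=2.0):
        # Rule 1: windowed average above a fixed threshold.
        if self.average() > threshold:
            return True
        # Rule 2: latest sample more than `sigmas` std devs from the mean.
        values = [v for _, v in self.samples]
        if len(values) >= 2:
            mu, sd = mean(values), stdev(values)
            if sd > 0 and abs(values[-1] - mu) > sigmas * sd:
                return True
        return False
```

In the Kafka-based setup this kind of logic runs continuously as events arrive; the question is how to express the equivalent inside Graylog rather than as a batch query.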
In another setup, where we do not have Graylog, we read the data from Kafka and run streaming rules on it.
What is the best way to achieve such use cases when we use Graylog?
We don’t want to do this as a batch job by pulling data directly from Elasticsearch.