How to change Beat-input to Kafka

Hi,

I use the following flow to collect logs:
collector-sidecar (filebeat) -> Graylog-server: Beats input
Now we have hit a performance bottleneck on the input, so we have decided to use Kafka to buffer the logs between the sidecar and the Graylog server.

But which type of input should I choose in this case:
Kafka GELF, or Raw/Plaintext Kafka?
And is there any guide for setting up a Kafka input?

Beats via Kafka is currently not supported by the collector sidecar and Graylog.

Could you please open a feature request on https://github.com/Graylog2/graylog2-server/issues for the Beats input via Kafka, and one for a Beats Kafka output at the collector at https://github.com/Graylog2/collector-sidecar?

thank you

Thanks for the reply, I'll open a feature request on GitHub.
BTW, is there any schedule for developing this feature?

Also,
is there any way to resolve the performance problem without Kafka?
We need to ingest 100,000 messages per second.

BTW, is there any schedule for developing this feature?

Scheduling a feature happens once it has been reviewed and considered something that is needed; it is not done ad hoc.

Is there any way to resolve the performance problem without Kafka?

You have multiple options to work around this: you could add more moving parts that stand in for the missing feature, or you could load-balance the input across multiple servers.
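To illustrate the load-balancing option, here is a minimal sketch of an HAProxy TCP frontend in front of two Graylog nodes, each running a Beats input on port 5044. The hostnames, ports, and backend names are assumptions for illustration, not from this thread:

```
# haproxy.cfg fragment -- hypothetical hostnames and ports
frontend beats_in
    bind *:5044
    mode tcp
    default_backend graylog_beats

backend graylog_beats
    mode tcp
    balance roundrobin
    server graylog1 graylog1.example.com:5044 check
    server graylog2 graylog2.example.com:5044 check
```

The sidecars would then point their filebeat output at the HAProxy address instead of a single Graylog node.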

For reference:

Thank you for the reply.
There is another way to collect logs, "NXLog + sidecar". I think I can use NXLog with a GELF Kafka output to resolve this problem — is that right?
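For what it's worth, a rough NXLog sketch of that idea, assuming your NXLog build ships the om_kafka module (it is not available in every edition) and that Graylog runs a Kafka GELF input consuming the same topic. The file path, broker address, and topic name are made up:

```
# nxlog.conf fragment -- hypothetical paths, broker, and topic
<Extension gelf>
    Module  xm_gelf
</Extension>

<Input file_in>
    Module  im_file
    File    "/var/log/app/*.log"
</Input>

<Output kafka_out>
    Module      om_kafka
    BrokerList  kafka1.example.com:9092
    Topic       graylog-gelf
    OutputType  GELF
</Output>

<Route r>
    Path file_in => kafka_out
</Route>
```

Whether this works for you depends on your NXLog edition; check that om_kafka is present before building around it.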

That is one option you have. Another would be to have a Logstash instance receive the messages from filebeat via Beats and place them into any kind of queue, then have another Logstash read from that queue and push the messages into a Graylog input.
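A rough sketch of that pipeline with Kafka as the queue. Broker, topic, and host names are assumptions; on the delivery side this sketch uses Logstash's gelf output into a Graylog GELF input, since Logstash has no Beats output:

```
# shipper.conf -- filebeat -> Logstash -> Kafka (hypothetical names)
input {
  beats { port => 5044 }
}
output {
  kafka {
    bootstrap_servers => "kafka1.example.com:9092"
    topic_id          => "graylog-buffer"
    codec             => json
  }
}

# indexer.conf -- Kafka -> Logstash -> Graylog GELF input
input {
  kafka {
    bootstrap_servers => "kafka1.example.com:9092"
    topics            => ["graylog-buffer"]
    codec             => json
  }
}
output {
  gelf {
    host => "graylog.example.com"
    port => 12201
  }
}
```

Kafka then absorbs ingest spikes, and you can scale the indexer side independently of the shippers.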

What you end up building just depends on your skill set, the number of moving parts you are willing to run, and the money you can spend on it.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.