Collector-sidecar support for Kafka

Hi,
Excuse me, I have an idea: use the Graylog collector-sidecar to collect logs and send them to Kafka. I have read some documentation about this, but I am still not sure whether it is feasible. My purpose is to relieve the pressure on Elasticsearch.

I did some experiments, but it didn’t work.
Here is what I have already done:

  1. Install kafka.
  2. Start ZooKeeper.
  3. Start kafka.
  4. Create a topic.
    > bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
    
  5. Create a Raw/Plaintext Kafka input in the Graylog web interface.
  6. Edit /etc/graylog/collector-sidecar/generated/filebeat.yml
filebeat:
  prospectors:
  - encoding: utf-8
    fields:
      collector_node_id: graylog-collector-sidecar
      flag: apache
      gl2_source_collector: f2056814-97f7-4a67-8b05-c9c1ab997107
      type: log
    ignore_older: 0
    paths:
    - /var/log/httpd/*
    scan_frequency: 10s
    tail_files: true
    type: log
#output:
#  logstash:
#    hosts:
#    - localhost:5044
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["localhost:9092"]

  # message topic selection + partitioning
  topic: '%{[test]}'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
path:
  data: /var/cache/graylog/collector-sidecar/filebeat/data
  logs: /var/log/graylog/collector-sidecar
tags:
- apache-log
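
One thing I am not sure about is the topic line. As far as I understand the Filebeat documentation, '%{[test]}' is a format string that looks up a field named test in each event, and my events probably do not contain such a field, so maybe a plain topic name is what I actually want (just a guess on my side):

output.kafka:
  hosts: ["localhost:9092"]
  # plain topic name instead of a field reference
  topic: 'test'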

The messages did not reach Kafka. Maybe I am missing something fundamental. Is there any other way to send messages to Kafka?
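I suppose I can at least check whether anything arrives in the topic with the console consumer that ships with Kafka (older Kafka versions take --zookeeper localhost:2181 instead of --bootstrap-server):
    > bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning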
I would appreciate it if you could help me!

The changes in the generated filebeat configuration will be gone as soon as collector-sidecar fetches a new configuration.

If you would like to have such a setup, I would recommend not using collector-sidecar for it and configuring filebeat yourself.

The Graylog server does not have a Beats-over-Kafka input, so you would need to configure a raw input and extract the data out of the messages yourself. But you might want to contribute a Beats-over-Kafka input plugin if you have the feeling that this is something more people could make use of.
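
One way to do the extraction, since Filebeat writes its events as JSON into Kafka, would be a processing pipeline rule roughly like the following. This is an untested sketch and the rule name is made up:

rule "parse filebeat json from kafka"
when
  has_field("message")
then
  // the raw Kafka input puts the whole Filebeat JSON document into the message field
  let parsed = parse_json(to_string($message.message));
  set_fields(to_map(parsed));
end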

Thanks for your response. I am not sure whether I understand it correctly. Do you mean that if I use Filebeat directly (not controlled by collector-sidecar) to collect logs and configure output.kafka in its filebeat.yml, it can send the logs to Kafka? Then Graylog can pull the messages from Kafka with a Raw/Plaintext Kafka input. Please tell me if I have made a mistake.
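For example, I imagine a standalone filebeat.yml along these lines (the paths, broker address and topic name are just from my test setup):

filebeat:
  prospectors:
  - type: log
    paths:
    - /var/log/httpd/*

output.kafka:
  # Kafka broker(s) to bootstrap from
  hosts: ["localhost:9092"]
  # fixed topic name
  topic: 'test'
  required_acks: 1
  compression: gzip
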
Thank you.

Yes, just read the Filebeat documentation on the Kafka output; that should answer your question.

Without having tested it, it should be possible to read the submitted messages with a Raw/Plaintext Kafka input.


Thank you! OK, I will give it a try.
