Handling Kubernetes logs in Graylog

Hi there!

I have a Graylog deployment following http://docs.graylog.org/en/2.4/pages/architecture.html#bigger-production-setup. The platform is working properly: I have tested it by sending syslog, Apache, Kafka, monitoring logs, etc. from different machines using Filebeat and the Graylog collector sidecar (https://hub.docker.com/r/digiapulssi/graylog-sidecar/~/dockerfile/).

Now, I'm also collecting logs from a Kubernetes cluster. I deployed a DaemonSet that runs the collector sidecar on each of the cluster's nodes, following the node-level logging agent approach, which is the most common and encouraged one according to the Kubernetes documentation (https://kubernetes.io/docs/concepts/cluster-administration/logging/).

My concern is how to handle the logs. At the moment, I have a collector that collects all Kubernetes logs under ['/var/log/*.log', '/var/log/*/*.log']. I assume those logs come from the different pods and the different application containers (surely I'm missing some).
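For concreteness, the generated Filebeat section looks roughly like this sketch (not my exact file; key names follow the Filebeat 5.x format that ships with this sidecar, and may differ in your version):

```yaml
# Sketch of the collector's Filebeat configuration: everything under the
# same globs is collected with the same tags, which is exactly the problem.
filebeat:
  prospectors:
    - input_type: log
      paths:
        - /var/log/*.log
        - /var/log/*/*.log
```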

I have logs from different sources, with different formats, all tagged with the same tags… What is the clever way to handle them in order to profit from the analytical features of Graylog?


You will need some way to identify the log files. Do this based on the log file name, and then either use a pipeline to extract all the information you want, or create a collector configuration that tags the specific files.

That is up to you and your personal workflow.
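As an illustration of the pipeline approach, a minimal rule along these lines could tag messages by file name (the `source` field name and the `k8s_log_type` field are assumptions; adjust them to whatever your collector actually sets):

```
// Hypothetical Graylog pipeline rule: identify Kubernetes container logs
// by their file path and tag them so later pipeline stages (or streams)
// can parse each application's format separately.
rule "tag kubernetes container logs"
when
  has_field("source") &&
  contains(to_string($message.source), "/var/log/containers/")
then
  set_field("k8s_log_type", "container");
end
```

Once the messages carry a distinguishing field, you can route them into separate streams and apply format-specific extractors or further pipeline rules per source.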

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.