There is a lag between Filebeat and the Graylog server: Filebeat falls behind when shipping the log files.
This is because I have a single Filebeat instance running and it has a large volume of log lines to process.
I would like to have multiple Filebeats running under a single sidecar so that I can split the logs into multiple parts, which would help mitigate the lag.
Is it possible to have multiple Filebeats running under the same sidecar in Graylog?
It might be possible to do this with different systemd unit files (roughly as sketched below) and heavily modified configurations, but I have to ask: why do you think that splitting up the lines would help? And how would you work with the data once you got it into Graylog? I’m thinking this approach would be fraught with data-consistency issues, as there’s no guarantee that multiple Filebeat processes could reliably determine which one of them read the last line and alternate reading the file.
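For what it’s worth, that systemd route would look roughly like the sketch below: a template unit that starts one Filebeat process per config file, each with its own data and log directories so the registries don’t collide. The unit name, config paths, and data paths are assumptions for illustration only; sidecar would not generate or manage any of this for you.

```ini
# /etc/systemd/system/filebeat@.service -- hypothetical template unit
# Each instance (filebeat@a, filebeat@b, ...) reads its own config file and
# keeps its own registry under a separate --path.data directory.
[Unit]
Description=Filebeat instance %i
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/share/filebeat/bin/filebeat \
  -c /etc/filebeat/filebeat-%i.yml \
  --path.data /var/lib/filebeat-%i \
  --path.logs /var/log/filebeat-%i
Restart=always

[Install]
WantedBy=multi-user.target
```

You could then `systemctl enable --now filebeat@a filebeat@b`, but nothing in sidecar would be aware of, or manage, those extra processes.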
Also, keep in mind that the sidecar is pseudo config management; I don’t think it has the ability to manage multiple Filebeat processes on a single node. It manages one config per collection agent, AFAIK.
Overall, I think you’re better off addressing why Filebeat is having trouble reading your logs rather than overengineering something that could lead to a lot more headache.
We have all application logs writing to a common NFS share under different sub-directories (hundreds of them).
Currently we have just one Filebeat running on a VM, and we observed a huge lag.
Hence we decided to split the logs across two Filebeats (roughly as sketched below), and we are able to install a sidecar on a second VM.
That is why I wanted to have multiple Filebeats on a single VM instead.
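For reference, the split itself is just a matter of giving each Filebeat a different subset of the sub-directory globs; the directory names below are placeholders, not our real layout.

```yaml
# filebeat.yml on VM 1 -- reads one half of the NFS sub-directories
filebeat.inputs:
  - type: log
    paths:
      - /mnt/nfs/app-logs/app-a/*.log
      - /mnt/nfs/app-logs/app-b/*.log

# The Filebeat on VM 2 gets the complementary set, e.g.:
#   - /mnt/nfs/app-logs/app-c/*.log
#   - /mnt/nfs/app-logs/app-d/*.log
```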
No, it’s not possible in Graylog. And from what I understand of your deployment, you have Filebeat traversing the network to read hundreds of files and then ship those logs…over the network. Is that an accurate understanding? If so, then your problem is likely not Filebeat. In fact, I’d wager that your problem is storing the logs on NFS. Why not store logs locally on each system, ensure they’re read, and then, once the logs reach a certain age, archive them to NFS?
If you’ve not already seen Filebeat overview | Filebeat Reference [7.14] | Elastic, I’d recommend going over that. You have a single Filebeat agent that’s spinning up multiple harvesters (likely hundreds, as you’ve noted there are hundreds of sub-directories), and those are all attempting to read those logs. Why not have multiple Filebeat agents on disparate systems, each responsible for reading a subset of your logs? That would be a more effective way to split the load than trying to run multiple Filebeat processes on the same node and screwing around with systemd unit files, different Filebeat configs, etc.
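If you do stay on a single agent for now, the usual knobs for keeping harvester count and NFS scanning in check live on the input itself. A rough example below; the values are placeholders you would have to tune against your own lag.

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /mnt/nfs/app-logs/*/*.log
    # Cap how many files this input harvests in parallel (default 0 = unlimited).
    harvester_limit: 64
    # Close handles on files that go quiet, freeing harvester slots sooner.
    close_inactive: 2m
    # Scan for new/changed files less often to reduce NFS metadata traffic.
    scan_frequency: 30s
```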
You got it spot on @aaronsachs. The right approach would be to have multiple Filebeat agents on different systems; I just wanted to check whether it was possible to have it all on one system, as that would be easier to maintain in one place.
Also, I’m still curious whether I could have multiple sidecars running as Docker containers on a single VM. Is that possible, or is it too much of a headache?
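Something like the untested compose sketch below is what I have in mind. The image name is only a placeholder (I don’t know whether an official sidecar image exists for this), and each container would need its own sidecar.yml with a distinct node name and API token.

```yaml
# docker-compose.yml -- untested sketch; "example/graylog-sidecar:1.1" is a
# placeholder image name, not a real image. Each container mounts its own
# sidecar.yml and keeps its own state volume.
version: "3.8"
services:
  sidecar-a:
    image: example/graylog-sidecar:1.1
    volumes:
      - ./sidecar-a.yml:/etc/graylog/sidecar/sidecar.yml:ro
      - sidecar-a-data:/var/lib/graylog-sidecar
  sidecar-b:
    image: example/graylog-sidecar:1.1
    volumes:
      - ./sidecar-b.yml:/etc/graylog/sidecar/sidecar.yml:ro
      - sidecar-b-data:/var/lib/graylog-sidecar
volumes:
  sidecar-a-data:
  sidecar-b-data:
```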
When I read this, it sounds like a huge environment, so I guess you have configuration management like Salt or similar; you could manage the Filebeat instances with that.
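For example, a bare-bones Salt state for installing and managing Filebeat might look something like this; the state and file names are just examples, assuming the Elastic package repo is already configured on the minions.

```yaml
# filebeat/init.sls -- minimal sketch
filebeat:
  pkg.installed: []
  service.running:
    - enable: True
    - watch:
      - file: /etc/filebeat/filebeat.yml

/etc/filebeat/filebeat.yml:
  file.managed:
    - source: salt://filebeat/files/filebeat.yml.jinja
    - template: jinja
```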
Hi @manoj_bhat
I think you can run more Filebeats using one sidecar.
Simply assign more Filebeat configurations to the one sidecar. Try to use separate directories for Filebeat data, logs, etc. (see the sketch below).
You need to create a second Log Collector (e.g. a clone of the first one) and then create a new Configuration for this Log Collector. Finally, select the new Log Collector under Administration and assign the newly created Configuration to it.
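As a rough illustration of the "separate directories" part, the second Filebeat configuration could look something like this; the paths and the output address are examples only.

```yaml
# Second Filebeat configuration assigned through sidecar -- example paths.
# A separate path.data keeps this instance's registry away from the first one's.
path.data: /var/lib/filebeat-2
path.logs: /var/log/filebeat-2

filebeat.inputs:
  - type: log
    paths:
      - /mnt/nfs/app-logs/app-b/*.log

# Placeholder address of the Graylog Beats input:
output.logstash:
  hosts: ["graylog.example.com:5044"]
```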
Hi @shoothub, I already tried adding more Filebeats per the steps you mentioned, using separate directories.
But all it does is replace the existing Configuration when you assign a new one.
From what I have tried, one sidecar is able to manage only one Filebeat per VM.
@shoothub, let me know if you have been successful in doing what you described.