Best practices for collecting log data

Hello,

I have explored the wonderful Graylog in order to store all the logs from my servers, and I now have a good idea of how it works, so I am trying to design the architecture for collecting my data.

What are the best practices for collecting data from many servers (more than 100)?
I will collect Apache logs, Windows event logs, file logs…

I wonder if it is better to create several inputs (one for Apache logs, one for Windows event logs…) or to have only one input.

Then, each type of data would be recorded in a specific stream/index set, for example:

  • 1 apache_log stream (with an index apache_)
  • 1 windows event log stream (with an index event_log_)

If somebody has experience with designing this kind of architecture in Graylog…

Thanks

Did you look into the getting started guide?

http://docs.graylog.org/en/2.4/pages/getting_started.html

You should choose the collector/input based on your needs, not on the number of servers you have. Files are easiest to collect with Filebeat, and Windows event logs with Winlogbeat. If you want to collect logs from network equipment, syslog is the better solution. Syslog might also be the best choice for your Linux server logs.
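
As a minimal sketch of the Filebeat side, a configuration like the one below ships Apache access logs to a Graylog Beats input (Filebeat uses its Logstash output protocol for this). The log path, host, port, and the custom `log_type` field are assumptions you would adapt to your own setup:

```yaml
# Minimal Filebeat sketch (assumed paths and endpoint; adapt to your setup).
filebeat.inputs:
  - type: log
    paths:
      - /var/log/apache2/access.log   # assumed Apache access log location
    fields:
      log_type: apache_access         # custom field you can route on in Graylog

# Ship to a Graylog "Beats" input via the Logstash output protocol.
output.logstash:
  hosts: ["graylog.example.com:5044"] # assumed Graylog host and Beats input port
```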

Personally, I would go with one input per type and use processing pipelines to process the logs. If you prefer extractors for working with the data, it is easier to use several inputs, one for each type.
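
To illustrate the pipeline approach, a rule like the following could route the Apache messages into a dedicated stream. This is only a sketch: the `log_type` field and its value come from the hypothetical Filebeat config above, and it assumes a stream named "apache_log" already exists with the pipeline connected to the stream that receives the input:

```
rule "route apache logs to apache_log stream"
when
  has_field("log_type") && to_string($message.log_type) == "apache_access"
then
  // assumes a stream named "apache_log" already exists
  route_to_stream(name: "apache_log");
end
```

You can then attach a separate index set to each stream, which gives you the apache_ / event_log_ index separation you described.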

