For us, the deployment of Graylog was about solving a very specific need at a time when another well-known SIEM solution just wasn’t going to be ready quickly enough to accommodate us. We started with a single OVA for evaluation and immediately obtained a free Enterprise license because we thought, “why not fully explore Graylog if we’re going to use it?”
The installation was quick since we started with the OVA: we booted it and it was ready to go right away. We were able to get users configured and an input, index, and stream up and running, and we were ingesting logs by the end of the first day.
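For anyone starting the same way, getting a client to send logs can be as simple as a one-line rsyslog rule pointed at a Syslog UDP input. This is only a sketch; the hostname is a placeholder and the port must match whatever your input actually listens on.

    # /etc/rsyslog.d/50-graylog.conf on a Linux client
    # Single @ = UDP; use @@ for TCP. RSYSLOG_SyslogProtocol23Format sends RFC 5424-style messages.
    *.* @graylog.example.org:1514;RSYSLOG_SyslogProtocol23Format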
For us, the challenge was in securing Graylog. We knew immediately that we wanted to configure HTTPS on the web interface, but we failed to appreciate how exacting this process would be, and in the end our problems came down to not following the documentation precisely. Once we read and re-read the procedure we got past our difficulties. I ended up writing a post to help guide others.
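For reference, the settings involved boil down to a few lines in server.conf. This is a minimal sketch for recent Graylog versions; option names differ in older releases and the paths are placeholders, so check the documentation for your version.

    # /etc/graylog/server/server.conf - enable HTTPS on the web interface and REST API
    http_bind_address = 0.0.0.0:9000
    http_enable_tls = true
    # Certificate and private key in PEM format (example paths only)
    http_tls_cert_file = /etc/graylog/server/graylog-cert.pem
    http_tls_key_file = /etc/graylog/server/graylog-key.pem
    # Only needed if the private key is encrypted
    #http_tls_key_password = changeme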
My tip to the community is to be sure to carefully follow the documentation. I have yet to find that it doesn’t have the information that I need. It is also important to be sure that the documentation being used is for the correct Graylog version.
Thanks, @ttsandrew, for the contribution. Your tip is an excellent one! Many times we as users treat the documentation as a last resort. In your case, rereading and following the documentation carefully was the key to success.
For us it was a matter of simplicity. We needed a logging server whose web interface was not complicated for common users to navigate. We needed to find out what was going on in different environments: logins, failed logon attempts on VDIs, firewalls, switches, storage servers, APs, and so on. With the correct inputs to sort out the different devices in each environment, we had the opportunity to simplify our alerts and notifications. After 5 years we are able to track users in real time: if someone tries to log on to a device that is not theirs, we can stop the intrusion within minutes. So it basically went from collecting logs in case problems occur to monitoring environments in real time.

As time passes, we have added a lot more Event Definitions to track problems; one example is when hardware drivers fail on Windows servers, whether network cards, RAID cards, etc. Now we are collecting DNS queries, detecting when event logs are deleted, and even when a Group Policy Object has been changed, deleted, or added. So we are not only collecting logs, we are set up to monitor security within the different environments.
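To make one of those Event Definitions concrete: Windows writes Security event ID 1102 when the audit log is cleared, so an event definition whose filter query looks something like the line below can raise an alert. The field names are only an assumption; they depend entirely on how your Windows logs are shipped (Winlogbeat, NXLog, etc.), so adjust them to your own schema.

    event_id:1102 AND channel:Security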
The basic package installation was easy, which is why we chose Graylog: just copy and paste a couple of steps and start the services. Even though Graylog has multiple ways to be installed, we just jumped in using a package installation. This would prove reliable for upgrading and new configuration down the road. I think the hardest problem we had was using self-signed certificates for HTTPS and securing inputs using TCP/TLS. Everyone’s environment is a little different, but once we understood what needed to happen, googled every error we found in the logs, and asked for help in the forum, it was a lot clearer what to configure. Since we started securing Graylog we have made our own documentation for different operating systems, with “what to do if this happens” scenarios. Any real problems that occur now are basically Elasticsearch, and we are in the process of reconfiguring our servers for better performance; that will be the next chapter of the Graylog experience.
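For anyone fighting the same certificate battle, the rough shape of it is to generate a certificate/key pair and then make the Graylog JVM trust it, so that TLS inputs and the web interface stop complaining. This is only a sketch; every path and hostname below is a placeholder, and the location of the bundled cacerts file varies by distribution and Java version.

    # Create a self-signed certificate and key (hostname is a placeholder)
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -subj "/CN=graylog.example.org" \
      -keyout /etc/graylog/server/graylog-key.pem \
      -out /etc/graylog/server/graylog-cert.pem

    # Copy the JVM's default trust store and import the certificate into the copy,
    # then point Graylog at it via -Djavax.net.ssl.trustStore in its Java options
    cp /usr/lib/jvm/jre/lib/security/cacerts /etc/graylog/server/cacerts.jks
    keytool -importcert -keystore /etc/graylog/server/cacerts.jks \
      -storepass changeit -alias graylog-self-signed \
      -file /etc/graylog/server/graylog-cert.pem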
It’s a must to read the documentation through. I admit there were times I was in a rush and missed a couple steps or overlooked some procedure.
Tips or Tricks: Always use the Graylog documentation first and only get ideas from third-party information. If a problem occurs and you need to go to the forum for help, it will be a lot easier to troubleshoot. Do the research first before you post; I know it helps.
My story was very simple. We needed central log management as a requirement for ISO 27000 certification. We tested one local product based on ES and Kibana, which was not very powerful because it used obsolete versions, and it was expensive for our small network. I’ve always kept up good knowledge of the open-source software available on the market, and Graylog was definitely on my list to try. So we tested Graylog via the OVA and realised that all the required functionality was there, and that it had more options and a nice web UI.
Installation - Ansible:
Testing the OVA was smooth, but in our environment we use CentOS as the base system. Because I’m a big fan of Ansible, the official Ansible role for installing Graylog on CentOS was the right choice.
Meanwhile we switched to Oracle Linux because of CentOS’s plan to no longer follow RHEL releases. The official Ansible role doesn’t support Oracle Linux, but a simple fix to line 37 of main.yml in the role’s tasks handled it:

    (ansible_distribution in ['RedHat','CentOS', 'OracleLinux'] and ansible_distribution_version is version('7', '>=')) or
Beware also that the official role doesn’t install Elasticsearch, so you need to use another role, or your own. Because we used an Ansible playbook, configuration was very quick; we only changed some variables.
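To give an idea of how little is involved, a playbook using the official role looks roughly like the sketch below. The Elasticsearch role is a placeholder you have to pick yourself, the variable values are examples, and the exact variable names should be double-checked against the role’s defaults for your version.

    # site.yml - sketch only
    - hosts: graylog
      become: true
      vars:
        graylog_version: 4.2                                                  # example value
        graylog_password_secret: "{{ vault_graylog_password_secret }}"        # e.g. pwgen -N 1 -s 96
        graylog_root_password_sha2: "{{ vault_graylog_root_password_sha2 }}"  # echo -n yourpassword | sha256sum
      roles:
        - role: elastic.elasticsearch   # placeholder - the Graylog role does not install Elasticsearch
        - role: graylog2.graylog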
Tips and tricks
Try to read the official documentation, and don’t follow third-party how-tos at the beginning. The best way is to quickly go through the complete documentation from start to end to get a good overview, then carefully read the full section covering whatever you are configuring. This way your configuration progresses more slowly, but you understand much better how things work, so it is much simpler to debug and fix problems if necessary.