Step 6: Pick First Source/Type of Data

We’re in Step 6, halfway through the contest, where one participant will win a $100 Amazon Gift Certificate. If you haven’t contributed yet, there’s still time to play. For each step you respond to, you’ll get another chance to win.

What was the first source or type of data you served up to your newly installed and configured Graylog system?

  • Describe the data source or type.
  • What was your expected output?
  • Did it give you the results you wanted, or did you need to go back and reconfigure any software?
  • What lessons did you learn from this step that you can share with the community? Any “gotchas”?

Thank you to the community members who have been playing the User Journey Game. Keep it going! Our current top participant is @shoothub, with several entries. Remember, each submission is an entry in the User Journey raffle. The winner whose name is drawn will receive a $100 Amazon Gift Certificate.

Posting a valid++ response to this question is worth 1 point.

For EACH STEP (there are 12 in all) in Graylog User’s Journey in which you post a valid++ response, you get a chance to win a $100 Amazon Gift Certificate! One lucky winner will have up to 12 chances to win. Go to “From the Graylog Book” to find the steps.

++Validity of response is subject to the community manager’s approval.

Hello,

Our first source/type was Syslog UDP on Graylog 2.0. This was the input we used for all our Windows servers and VDIs. We just wanted a “one-stop shop” for all the logs in this environment. The server ran over plain HTTP with no encryption, mainly because it was never exposed to the internet; it stayed an internal server. The installation worked right from the start with no problems; a lot of logs started rolling in, and at first we had no idea how to organize them. All we did was follow the basic package installation for CentOS 7. At the time, another tech was trying to install the ELK stack, which took him days to get configured and running, while it took me about 30 minutes from firing up a virtual machine to having Graylog installed. Graylog’s simplicity was the key reason we chose it over other software.
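For anyone wanting to sanity-check a Syslog UDP input like the one above, a quick way is to hand-craft a test message. This is a minimal sketch, assuming an RFC 3164-style payload and a hypothetical server name and default port; adjust both to your own input.

```python
import socket
from datetime import datetime

def build_syslog_message(facility, severity, hostname, tag, text):
    """Build a minimal RFC 3164 syslog payload. PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {hostname} {tag}: {text}"

def send_syslog_udp(message, server="graylog.example.local", port=514):
    """Fire-and-forget UDP send to a Graylog Syslog UDP input (host/port are assumptions)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(message.encode("utf-8"), (server, port))

# facility 1 (user-level), severity 6 (informational) -> PRI 14
msg = build_syslog_message(1, 6, "winsrv01", "myapp", "service started")
```

If the message shows up in the input’s search results, the network path and the input itself are working before you point real servers at it.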

Architectural consideration, I would say, is a must. Take some time before building and know what you want to do; brainstorm what needs to be done in the environment, both now and in the future. When we reached a production installation, questions started coming up like “user logon,” “failed logon,” etc. With Windows we needed the event IDs, and that wasn’t going well over Syslog UDP, so we ended up using GELF TCP/TLS inputs for Windows log collection. This enhanced our deep searches and alerts/notifications. Using different inputs to separate devices such as firewalls, Windows servers, Linux OSes, and switches gave us the ability to organize our logs better, and creating streams improved our search time. In the end we built a foundation that can be expanded without starting over or losing data.
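To illustrate why GELF works better than plain syslog for Windows event IDs: GELF is just JSON, so structured fields like the event ID travel as first-class fields instead of being buried in the message text. A rough sketch of building and sending a GELF 1.1 message over TCP follows; the server name and the extra field names are illustrative assumptions.

```python
import json
import socket

def build_gelf_message(host, short_message, **extra_fields):
    """Build a GELF 1.1 payload; custom fields are prefixed with an underscore per the spec."""
    payload = {"version": "1.1", "host": host, "short_message": short_message}
    for key, value in extra_fields.items():
        payload[f"_{key}"] = value
    return payload

def send_gelf_tcp(payload, server="graylog.example.local", port=12201):
    """GELF over TCP frames each JSON message with a trailing null byte."""
    data = json.dumps(payload).encode("utf-8") + b"\x00"
    with socket.create_connection((server, port)) as s:
        s.sendall(data)

# Hypothetical Windows security event with its event ID as a structured field
msg = build_gelf_message("winsrv01", "user logon", event_id=4624, channel="Security")
```

Because `_event_id` arrives as its own field, searches and alerts like “failed logon” become simple field queries rather than text matching.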

1 Like

For our POC we used the OVA installation, which was fast and easy to deploy. We put it in our VMware virtual environment, set up resources (our first mistake), and configured an input. Our first input was syslog messages from our Cisco switches and FortiGate firewalls (our second mistake).

We wanted to parse Cisco switch messages into separate fields so we could find errors in the logs and build a nice dashboard for a simple overview. It was quite easy: we used GROK extractors to extract facility, severity, mnemonic, and description from the message. After that we were able to create a nice-looking dashboard with all the required widgets, like number of port connects/disconnects, login/logout usernames, top interfaces, and so on.
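The extraction described above can be sketched with a plain regex, since a GROK pattern compiles down to one. This is a hedged approximation, not the poster’s actual extractor: it assumes the standard Cisco message shape `%FACILITY-SEVERITY-MNEMONIC: description`.

```python
import re

# Regex roughly equivalent to a GROK extractor along the lines of
#   %{DATA:facility}-%{INT:severity}-%{DATA:mnemonic}: %{GREEDYDATA:description}
CISCO_LOG = re.compile(
    r"%(?P<facility>[A-Z0-9_]+)-(?P<severity>\d)-(?P<mnemonic>[A-Z0-9_]+): "
    r"(?P<description>.*)"
)

def extract_fields(message):
    """Return facility/severity/mnemonic/description, or an empty dict if no match."""
    m = CISCO_LOG.search(message)
    return m.groupdict() if m else {}

fields = extract_fields(
    "%LINEPROTO-5-UPDOWN: Line protocol on Interface Gi0/1, changed state to down"
)
```

Once those four fields exist on every message, widgets like “top interfaces” or “port connect/disconnect counts” are just aggregations over the mnemonic and description fields.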

The result was OK for us, but while extracting fields we found that the timestamp from the Cisco switches was missing. So we had to reconfigure the switches to enable timestamps with the command service timestamps log datetime, then go back to Graylog’s extractor and update it for the timestamp.
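Once that command is enabled, the switch prefixes each message with a datetime, which the extractor then has to parse. A small sketch of what that parsing looks like, assuming the common `*Mar  1 18:46:11:` prefix format; note the year is absent from the prefix, so it has to be supplied (here as a parameter, which is an assumption about how you want to handle it).

```python
import re
from datetime import datetime

# After `service timestamps log datetime`, messages look roughly like:
#   *Mar  1 18:46:11: %SYS-5-CONFIG_I: Configured from console by console
TS_PREFIX = re.compile(
    r"^\*?(?P<month>[A-Z][a-z]{2})\s+(?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2})"
)

def parse_cisco_timestamp(message, year=2024):
    """Parse the leading datetime; the prefix has no year, so one must be assumed."""
    m = TS_PREFIX.match(message)
    if not m:
        return None
    raw = f"{m['month']} {m['day']} {year} {m['time']}"
    return datetime.strptime(raw, "%b %d %Y %H:%M:%S")

ts = parse_cisco_timestamp("*Mar  1 18:46:11: %SYS-5-CONFIG_I: Configured from console")
```

The missing year is exactly the kind of gotcha that makes it worth checking what the device actually emits before building the extractor.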

We learned after the POC that it’s good to think before sending the first logs. That’s especially true for inputs and indexes. During the POC we sent logs from the Cisco switches and the FortiGates to the same input, which produced a lot of unnecessary fields. So before extracting a big list of fields, it’s a good idea to stop for a second and ask whether each field is actually needed for something; this way you can also save a lot of space and resources. The best approach for us was to create a separate input for each type of device: one input for switches, a separate one for FortiGates, and another for Linux and Windows servers. Don’t forget to also create separate indexes, because not all logs need the same retention period. Streams are also a unique feature of Graylog, so they’re worth playing with during the POC.

2 Likes