Graylog support for auditd in syslogs


(Jake Smith) #1

Hi All,

Are there any resources for supporting auditd logging within Graylog on CentOS hosts so that the fields are parsed out correctly?

I have tried enabling auditd logging via syslog to a UDP syslog input as shown below, but the fields vary and are not parsed correctly.
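For reference, the way I enabled forwarding of audit events to syslog was via the audisp syslog plugin. This is a sketch of the stock plugin file; paths and defaults may differ between CentOS versions:

```
# /etc/audisp/plugins.d/syslog.conf
# Setting "active = yes" makes audispd forward audit records to syslog
active = yes
direction = out
path = builtin_syslog
type = builtin
args = LOG_INFO
format = string
```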

Would another option be to use something like the NXLog im_file module to read the file and then send it to a GELF input? Would this work, as the JSON should contain the fields?

Alternatively, would it be better to use a GROK filter or pipeline rules to parse out the fields?
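For the GROK option, my rough idea would be a pipeline rule along these lines (untested sketch; the capture names and the pattern for the `type=... msg=audit(...)` prefix are my own guesses, not an official pattern):

```
rule "parse auditd prefix via grok"
when
  has_field("message") && starts_with(to_string($message.message), "type=")
then
  // Pull out the record type, epoch timestamp and serial from the audit header
  let parsed = grok(
    pattern: "type=%{WORD:audit_type} msg=audit\\(%{NUMBER:audit_epoch}:%{NUMBER:audit_serial}\\): %{GREEDYDATA:audit_rest}",
    value: to_string($message.message)
  );
  set_fields(parsed);
end
```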

Cheers

Jake Smith


(Jan Doberstein) #2

Why not just create a key-value extractor (or pipeline) that gets all the information from the message field into separate fields?
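As a pipeline rule, that could look roughly like this (a minimal sketch; auditd's `key=value` body is space-separated, which is what `key_value` expects by default):

```
rule "auditd key-value"
when
  has_field("message")
then
  // Split the message body into key=value pairs and promote each pair to a field
  set_fields(key_value(value: to_string($message.message)));
end
```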


(Jake Smith) #3

Hi Jan,

I looked at the audit log format documented below:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sec-understanding_audit_log_files
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sec-Audit_Record_Types

It may not be easy to set up due to the differing message types.

Then I found this post https://slack.engineering/syscall-auditing-at-scale-e6a3ca8ac1b8

So my theory goes as follows:

Test and setup go-audit
Either log to syslog or file in say /var/log/go-audit/audit.log
Convert existing audit.d rules to go-audit format at a later stage

Can you tell me the level of JSON support in syslog messages within Graylog? I would expect that the mixture of normal syslog style and JSON syslog style will break things.
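My rough idea for the JSON case would be to let a pipeline rule do the parsing, something like this (untested sketch, assuming go-audit emits one JSON object per syslog message):

```
rule "parse go-audit json"
when
  has_field("message") && starts_with(to_string($message.message), "{")
then
  // parse_json returns a JSON tree; to_map converts it so set_fields can use it
  let json = parse_json(to_string($message.message));
  set_fields(to_map(json));
end
```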

So I will probably try reading the file with NXLog and shipping it to a GELF input, which should resolve the issue.

What do you think? Or are key-value pairs the way to go, in your opinion?

Kind Regards

Jake


(Jake Smith) #4

Hi Jan,

Actually thinking it through further, it would probably be better to do the following.

Have auditd log to its current log file /var/log/audit/audit.log and use NXLog to ship it to a GELF input.
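Something like this NXLog configuration sketch is what I have in mind (the Graylog host and port are placeholders for our environment; adjust to taste):

```
# nxlog.conf sketch -- ship the raw audit log to a Graylog GELF UDP input
<Extension gelf>
    Module  xm_gelf
</Extension>

<Input audit>
    Module  im_file
    File    "/var/log/audit/audit.log"
</Input>

<Output graylog>
    Module      om_udp
    Host        graylog.example.org   # hypothetical Graylog host
    Port        12201                 # GELF UDP input port
    OutputType  GELF
</Output>

<Route audit_to_graylog>
    Path    audit => graylog
</Route>
```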

What do you think?

Cheers

Jake


(Jan Doberstein) #5

Adding NXLog to the mix would add a new moving part that can fail …

I would do that only if really needed, for example because messages are truncated (or similar). Is your goal to group all messages that belong to a specific action into one message?

You might find this blog posting useful:

https://www.graylog.org/post/back-to-basics-working-with-linux-audit-daemon-log-file


(Jake Smith) #6

Hi Jan,

Can you explain how the messages are truncated when sent with NXLog, please? Is this what you mean: https://logmatic.io/blog/how-to-increase-your-logs-maximum-string-size-with-log-shippers/

1 MB is a pretty large message size; most messages I have seen are usually 1.5–2 KB at most, with the largest ones coming from Cisco firewalls.

I will try the method outlined in the link above, but we are using a custom auditd rules configuration, so it may need some tweaking. I will let you know.

Cheers

Jake Smith


(Jake Smith) #7

Hi Jan,

One last thing has just come to mind: our auditd logs are coming in over syslog, via rsyslog, to a syslog UDP input in Graylog.

Using a pipeline rule to match them in the initial stage as per the blog post, would it be better to tag them (as in the post) or route them to a different stream?
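For the routing variant I am picturing a rule like this (a sketch; the stream name is made up and the match condition assumes raw auditd records starting with `type=`):

```
rule "route auditd messages to stream"
when
  has_field("message") && starts_with(to_string($message.message), "type=")
then
  // "Auditd" is a hypothetical stream that would have to exist already
  route_to_stream(name: "Auditd");
end
```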

Cheers

Jake


(Jochen) #8

Just use Auditbeat and a Beats input.


(Jake Smith) #9

Hi Jochen,

For Auditbeat, do we just use the Logstash output to send the data to the Beats input on the correct IP and port?

https://www.elastic.co/guide/en/beats/auditbeat/current/configuring-output.html
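In other words, something like this auditbeat.yml fragment (host/port are placeholders pointing at the Graylog Beats input; the audit rule is just an example):

```yaml
# auditbeat.yml sketch
auditbeat.modules:
  - module: auditd
    audit_rules: |
      # Example rule: watch /etc/passwd for writes and attribute changes
      -w /etc/passwd -p wa -k identity

# Graylog's Beats input speaks the Logstash/Lumberjack protocol
output.logstash:
  hosts: ["graylog.example.org:5044"]
```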

Cheers
Jake


(Jochen) #10

Yes, exactly.

Additionally to the normal configuration of Auditbeat to fulfill your requirements. :wink:


(Jake Smith) #11

Hi Jan,

I will post this here for others.

We ran into issues with Auditbeat not working, so we installed Filebeat 5.6 as per below:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.10-x86_64.rpm
sudo rpm -vi filebeat-5.6.10-x86_64.rpm
sudo cp /etc/filebeat/*.json /usr/share/filebeat/bin/

Then we ran Filebeat in interactive mode (/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml) while debugging parts of the full configuration file. Within our config we set the Logstash output, as well as the document_type variable in the prospector to document_type: auditd.
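The relevant parts of our filebeat.yml looked roughly like this (paths and the Graylog host are placeholders; this uses the Filebeat 5.x prospector syntax):

```yaml
# filebeat.yml sketch (Filebeat 5.6)
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/audit/audit.log
    # document_type drives the matching in the blog post's pipeline rules
    document_type: auditd

# Send to the Graylog Beats input (Logstash/Lumberjack protocol)
output.logstash:
  hosts: ["graylog.example.org:5044"]
```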

This then makes the rules work as per the blog post.

Plus the obligatory dashboard


Thank you for your help

Cheers

Jake


(system) #12

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.