Rookie assistance for sidecar debugging

(Konstantinos Betsis) #1

Hi All

I have deployed Graylog 3, and everything works fine for UDP syslog messages.

I wanted to enhance the functionality by forwarding FreeIPA logs using the sidecar/Beats approach.
From the documentation it seemed like a straightforward configuration; however, I haven’t been able to send any logs to Graylog.

In addition, when I run “graylog-sidecar -debug” I get the following output:

[ConfigFile] YAML config parsing failed on /etc/graylog/sidecar/sidecar.yml: yaml: line 100: did not find expected key. Exiting.

The weird thing? The YAML file only has 87 lines…

Any assistance will be greatly appreciated.

Thank you


(Ben van Staveren) #2

See that stickied post about asking good questions :wink: But not to be too mean; can you paste your sidecar config (minus any passwords, tokens, etc.) on pastebin and link it here? Or paste it here but be sure to format it properly with the </> button on the message editor.

Because right now there isn’t enough info to answer anything :slight_smile:


(Konstantinos Betsis) #3

I found the issue after working on it for 14 hours… it was the indentation…
Fixed it, and now the IPA logs are forwarded to Graylog OK.
Next step: parsing the events, and from what I can see there is no ready-made pipeline :stuck_out_tongue:
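[Editor's note: a minimal sketch of the kind of indentation mistake that produces this YAML error. This is a hypothetical fragment for illustration, not the poster's actual sidecar.yml; the field names are examples only.]

```yaml
# Hypothetical /etc/graylog/sidecar/sidecar.yml fragment (illustrative only).
# YAML block structure is whitespace-sensitive; a single extra space breaks it:
server_url: http://127.0.0.1:9000/api/
node_name: sadserver
tags:
  - linux
   - freeipa   # indented one space too far: "did not find expected key"
```

List items under the same key must all start at the same column; mixing two- and three-space indents is enough to make the parser bail out.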


(Ben van Staveren) #4

Correct! You’ll have to build your own :smiley: (but it’s not too difficult; there are many examples around)


(Konstantinos Betsis) #5

Hi Ben

I have successfully forwarded my OpenLDAP logs to Graylog with Filebeat.
To split them up and get more visibility, I have tried the pipeline approach.
For the pipeline, a dedicated stream called “ipaAccessLogStream” has been configured.
It matches when the word “slapd” appears in the directory path of the collected log; this works OK.
Then the pipeline rules are created.
Rule 0 requires a simple match on the “message” field included in the Filebeat input.
Within the Graylog search I can see the field; however, for some weird reason I cannot get it to match.
After that comes a Grok filter, which seems OK in simple Grok testing.
How can I troubleshoot this?


(Ben van Staveren) #6

Paste a few sample log entries, along with your pipeline rules (don’t forget to format with </> button), and we can take a look and see what’s going on. Or not going on. Or maybe it’ll just fix itself :smiley:


(Konstantinos Betsis) #7

Stage 0 rule

rule "OpenLDAP extraction"
when
    has_field("message")
then
    let pattern = "(?:(?:<= (?:b|m)db_%{DATA:index_error_filter_type}_candidates: \\(%{WORD:index_error_attribute_name}\\) not indexed)|(?:policy_%{DATA:policy_op}: %{DATA:policy_data})|(?:connection_input: conn=%{INT:connection} deferring operation: %{DATA:deferring_op})|(?:connection_read\\(%{INT:fd_number}\\): no connection!)|(?:conn=%{INT:connection} (?:(?:fd=%{INT:fd_number} (?:(?:closed(?: \\(connection lost\\)|))|(?:ACCEPT from IP=%{IP:src_ip}\\:%{INT:src_port} \\(IP=%{IP:dst_ip}\\:%{INT:dst_port}\\))|(?:TLS established tls_ssf=%{INT:tls_ssf} ssf=%{INT:ssf})))|(?:op=%{INT:operation_number} (?:(?:(?:(?:SEARCH )|(?:))RESULT (?:tag=%{INT:tag}|oid=(?:%{DATA:oid}(?:))) err=%{INT:error_code}(?:(?: nentries=%{INT:nentries})|(?:)) text=(?:(?:%{DATA:error_text})|(?:)))|(?:%{WORD:operation_name}(?:(?: %{DATA:data})|(?:))))))))%{SPACE}$";
    let message_text = to_string($message.message);
    let matches = grok(pattern: pattern, value: message_text);
    set_fields(matches);
end

Filebeat message example

beats_type: filebeat
filebeat_@metadata_beat: filebeat
filebeat_@metadata_type: doc
filebeat_@metadata_version: 6.7.1
filebeat_@timestamp: 2019-04-16T12:45:35.073Z
filebeat_beat_hostname: sadserver.sadcompany.com
filebeat_beat_name: sadserver.sadcompany.com
filebeat_beat_version: 6.7.1
filebeat_collector_node_id: sadserver.sadcompany.com
filebeat_fields_server: true
filebeat_host_name: sadserver.sadcompany.com
filebeat_input_type: log
filebeat_log_file_path: /var/log/dirsrv/slapd-SADCOMPANY-COM/access
filebeat_offset: 4748205
filebeat_prospector_type: log
filebeat_source: /var/log/dirsrv/slapd-SADCOMPANY-COM/access
message: [16/Apr/2019:12:45:03.153665459 +0000] conn=379 op=0 BIND dn="uid=saduser,cn=sadgroup,dc=sadcompany,dc=com" method=128 version=3
source: sadserver.sadcompany.com
timestamp: 2019-04-16 12:45:35.073 +00:00

The thing is that the events never match Rule 0, so they never move on to the Stage 1 rules…
Simulating with a raw log seems to work just fine.


(Ben van Staveren) #8

Alrighty, I’m a bit swamped at work so it’ll take me a bit to reply to it but I think I have an idea :slight_smile:


(Konstantinos Betsis) #9

Honestly, any help is greatly appreciated.


(Ben van Staveren) #10

Okay, a few ideas:

  • Go to the System > Grok Patterns page and enter the entire pattern you use there and name it, so you can just grok("%{MYSHINYPATTERN}", ...) for ease of use
  • Also try setting the (if I recall correctly) only_named_captures parameter to true, to avoid picking up random things

For the Grok pattern, I do believe certain things need to be escaped when used in a function; I’m unsure whether that’s needed when it’s stored as a Grok pattern (via System > Grok Patterns), but… when in doubt, escape all the things anyway.

You can also try re-creating the pattern through the grok pattern editor (System > Grok Patterns, big button top right called “Create Pattern”) and see if that lets you get a working one assembled.
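[Editor's note: a sketch of the suggestion above. Assuming the long pattern has been saved under System > Grok Patterns as OPENLDAP_ACCESS (a hypothetical name), the pipeline rule shrinks to something like:]

```
// Sketch only -- assumes a stored pattern named OPENLDAP_ACCESS.
rule "OpenLDAP extraction (named pattern)"
when
    has_field("message")
then
    // only_named_captures keeps unnamed sub-patterns out of the result map
    let matches = grok(
        pattern: "%{OPENLDAP_ACCESS}",
        value: to_string($message.message),
        only_named_captures: true
    );
    set_fields(matches);
end
```

Keeping the pattern in the Grok pattern store also means you can test it against sample lines in the pattern editor without touching the rule.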


(Konstantinos Betsis) #11

So you recommend avoiding pipelines?

We took the Grok pattern approach earlier, but read that pipelines are preferable because they allow per-stream processing.

Just for your info: when I simulate the pipeline with a raw log and select the Beats client and the stream, the processing is returned as OK.

Most likely the condition

    has_field("message")

is the issue.

Anyway, I’ll play around with more match conditions and see if that fixes things; if not, I will most likely revert to simple Grok parsing instead of pipelines.

Thanks for the suggestion Ben.


(Ben van Staveren) #12

No, no, what I meant is that instead of defining the Grok pattern inside a pipeline rule, you create it in the Grok editor and save it, so you can refer to it more easily :slight_smile:

Pipelines are wonderful things, we use about 70 of them in our setup :slight_smile:

If you want to debug the pipeline rule, you can always just use true as the condition instead of has_field() - then it will trigger on everything entering that particular stream. Also make sure the pipeline is in fact connected to the stream you expect to see these events come in on :slight_smile:
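[Editor's note: a throwaway debug rule along the lines suggested above might look like this. A sketch only; debug() writes its argument to the Graylog server log, so you can confirm the rule actually fires for messages on the stream.]

```
rule "debug: log every message in this stream"
when
    true    // fires for everything entering the connected stream
then
    // check the Graylog server log (e.g. /var/log/graylog-server/server.log)
    // for these lines to verify the pipeline is seeing your messages
    debug(to_string($message.message));
end
```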


(Konstantinos Betsis) #13

Oh, OK, now I understand.

A trigger to force the pipeline to fire was exactly what I was searching for.
I will test it now with “true”.


(Konstantinos Betsis) #14

OK, honestly I cannot figure it out…

The same Grok pattern configured as an extractor on the stream works OK.
Applied as a pipeline, nothing happens.

I forced the “true” flag in “when”, but again nothing happens; it’s as if the Grok never runs.

The funny thing is that I sometimes see “Stage 2” pipeline messages, but no fields are added by “Stage 1”.

With the above configuration, shouldn’t I see the new fields when a message is processed and carries on to “Stage 2”?

Have I gotten it wrong?


(Ben van Staveren) #15

The “when true” would satisfy that rule in Stage 1, so you would see them in Stage 2, I think. I’m kind of out of ideas here :frowning:


(Konstantinos Betsis) #16

Thanks for the assistance so far; I really appreciate it.
Should I file a bug report now?
