How To Massage Syslog Input on Intake?

1. Describe your incident:

I’m trying to configure a Netgear switch to send its logs to Graylog; however, it appears that the message output from Netgear is almost, but not quite, RFC 5424 compliant. Its timestamp contains a trailing colon, which makes the message unparseable by Graylog.

An example of the message is as follows:

<15>1 2024-12-16T17:44:36.927Z: %192.168.1.91-1 discAgent-7 nal_logging.c(39): Discovery Agent SIGCHLD received.

If I change the message to omit the trailing : after the Z in the timestamp, the message is appropriately parsed by Graylog.

I realize that this is not a Graylog problem, but unfortunately Netgear doesn’t give any control over the configuration of the syslog messages that it sends.

2. What steps have you already taken to try and solve the problem?

FWIW, I tried experimenting with pipelines, but it doesn’t look like pipelines get executed early enough in the Graylog intake process.

3. How can the community help?

So my question is: is there any way to set up an input in Graylog that accepts this message and massages it into the correct format before Graylog attempts to parse it as an RFC 5424 message?

I would just use one of the raw inputs and then deal with it all in pipelines. Pipelines will make quick work of it, and the raw input won’t try to do any parsing; it will just pass on whatever it gets.
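
Assuming regex_replace is available in your Graylog version, a first rule along these lines should handle the rewrite (rough, untested sketch; swap the placeholder for your raw input’s ID):

rule "Fix Netgear RFC 5424 timestamp"
when
  from_input(id: "REPLACE-WITH-YOUR-RAW-INPUT-ID")
then
  // Drop the stray colon after the timestamp ("...Z: " becomes "...Z ")
  let fixed = regex_replace(
      pattern: "^(<[0-9]+>1 [^ ]+Z): ",
      value: to_string($message.message),
      replacement: "$1 ");
  set_field("message", fixed);
end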

Thanks for the response!

I think that I’m getting close. I set up the Raw UDP input and got messages from the switch flowing to it.

I’ve created a pipeline rule that selects that input and uses a regular expression to replace Z: with Z.

However, is there a way to get a pipeline to push the results of one input back into another input?

I guess I’m hoping that now that I’ve fixed the syslog formatting issue with the Raw Input, I could just push the output of that back into the UDP syslog input and have it parse it like normal, rather than setting up an extractor and doing all of the syslog parsing myself.

Is that an option?

Theoretically, but it would be super messy. In that same pipeline rule you could do any extraction into fields; no need to mess with extractors. How do you want it to end up looking?

I essentially want it to end up in the same index that I use for all of my other networking syslogs, and in the same format that syslog messages usually show up in.

So I guess I’m looking to extract facility, facility_num, level, log_source, and timestamp.

It would also be nice if I could use the pipeline to control which index it went into as well.
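
From what I can tell from the docs, routing the message to a stream that is attached to the target index set might cover that; here’s a rough, untested sketch (the stream name is just a placeholder):

rule "Route Netgear logs to the networking index"
when
  from_input(id: "REPLACE-WITH-YOUR-RAW-INPUT-ID")
then
  // The stream's index set determines which index the message lands in;
  // remove_from_default keeps it out of the default index set.
  route_to_stream(name: "Network Syslog", remove_from_default: true);
end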

In case anyone else comes along with a similar problem, this was the setup that I ended up using:

The input that I ended up creating was a Raw/Plaintext UDP input.

And this was the pipeline rule that I ended up building:

rule "Parse Netgear Syslog to Syslog"
when
  from_input(id:"6760917faf4e402cec831b45")
then
    let grok_result = grok(
        pattern: "<%{NONNEGINT:syslog_pri:int}>%{NONNEGINT:syslog_ver:int} %{TIMESTAMP_ISO8601:syslog_timestamp:string}: %?%{SYSLOGHOST:syslog_host:string}-[0-9]+ %{NOTSPACE:syslog_appname:string} %{GREEDYDATA:syslog_msg:string}",
        value: to_string($message.message),
        only_named_captures: true);
    let priority = expand_syslog_priority(to_long(grok_result.syslog_pri));
    let ts = flex_parse_date(to_string(grok_result.syslog_timestamp));
    set_field("facility", priority.facility);
    set_field("level", priority.level);
    set_field("facility_num", grok_result.syslog_pri);
    set_field("source", grok_result.syslog_appname);
    set_field("syslog_host", grok_result.syslog_hos);
    set_field("message", grok_result.syslog_msg);
    set_field("timestamp", ts);
end

Obviously, the input ID above would need to be changed to your input ID, but the grok pattern should work with the standard malformed Netgear logs.

Nice! For everything you capture with the grok pattern, you can use set_fields if you want, and it will just create all the fields based on the names you used for the named captures in your grok pattern.
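
For example, something along these lines (same grok pattern as above; the input ID is still a placeholder) should create syslog_pri, syslog_ver, syslog_timestamp, syslog_host, syslog_appname, and syslog_msg in one call:

rule "Parse Netgear Syslog with set_fields"
when
  from_input(id: "REPLACE-WITH-YOUR-RAW-INPUT-ID")
then
  let grok_result = grok(
      pattern: "<%{NONNEGINT:syslog_pri:int}>%{NONNEGINT:syslog_ver:int} %{TIMESTAMP_ISO8601:syslog_timestamp:string}: %?%{SYSLOGHOST:syslog_host:string}-[0-9]+ %{NOTSPACE:syslog_appname:string} %{GREEDYDATA:syslog_msg:string}",
      value: to_string($message.message),
      only_named_captures: true);
  // One call creates a message field per named capture,
  // instead of individual set_field() calls.
  set_fields(fields: grok_result);
end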
