Syslog format does not follow recommendation

Hello friends,
I'm here asking the experts for help.

I have two firewalls with syslog/UDP configured: a Sophos (SFOS 19) and a Fortinet (FortiOS 7.2).
I noticed that when the Fortinet logs arrive, I don't need to configure any extractors; the fields all come out separated in an organized way.

The Sophos firewall, however, does not behave the same way.
From the research I did, I came to understand that the Sophos syslog format does not strictly follow RFC 5424 or RFC 3164.

Comparing a log from each firewall, it seems that only the date is formatted differently.
Note that Fortinet sends separate fields and has no single timestamp: it starts with "date", followed by "time", and right after that there is "tz".
Sophos has none of that; it sends everything together in a single field.

I'd like to find a way to handle this without having to create lots of extractors.
Would pipelines be a way out? Or some other feature?

Any help pointing me in the right direction is welcome.
Thanks!

Examples:

FORTINET:

message
date=2023-09-06 time=19:57:03 devname="FGOS7" devid="FGVMEVT5TVZ5WSBC" eventtime=1694041022856164541 tz="-0300" logid="0000000015" type="traffic" subtype="forward" level="notice" vd="root" srcip=172.16.16.40 srcport=52362 srcintf="port2" srcintfrole="lan" dstip=172.217.172.110 dstport=443 dstintf="port1" dstintfrole="wan" srcuuid="c7abcb36-4d02-51ee-6e5d-332545ce9294" dstuuid="d82c4ba4-4cab-51ee-206e-d6639a494f62" srccountry="Reserved" dstcountry="United States" sessionid=32190 proto=17 action="start" policyid=1 policytype="policy" poluuid="a5bb3918-4ccf-51ee-f253-789b970c9ca5" policyname="SAIDA" service="udp/443" trandisp="snat" transip=192.168.1.29 transport=52362 duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned"

SOPHOS:

message
device_name="SFW" timestamp="2023-09-06T10:36:45-0300" device_model="SFVH" device_serial_id="C01001YG48MKX90" log_id="010101600001" log_type="Firewall" log_component="Firewall Rule" log_subtype="Allowed" log_version=1 severity="Information" duration=21 fw_rule_id="12" fw_rule_name="ALLOW_ALL" fw_rule_section="Local rule" nat_rule_id="2" nat_rule_name="Default SNAT IPv4" fw_rule_type="USER" gw_id_request=1 gw_name_request="GW_NEWLIFE" ether_type="Unknown (0x0000)" in_interface="Port1" out_interface="Port2" src_mac="00:0C:29:6E:1D:0E" dst_mac="00:0C:29:11:F7:A7" src_ip="172.16.16.203" src_country="R1" dst_ip="52.5.76.173" dst_country="USA" protocol="TCP" src_port=54770 dst_port=8347 packets_sent=5  packets_received=1 bytes_sent=260 bytes_received=40 src_trans_ip="186.192.80.130" src_zone_type="LAN" src_zone="LAN" dst_zone_type="WAN" dst_zone="WAN" con_event="Stop" con_id="2737406793" hb_status="No Heartbeat" app_resolved_by="Signature" app_is_cloud="FALSE" qualifier="New" in_display_interface="Port1" out_display_interface="Port2" log_occurrence="1"

One observation:
I found predefined extractors in the marketplace here on the forum. However, there are only a handful, and with each version the manufacturer releases, new fields appear.

A pipeline with the key_value function should make short work of that data, I would think: Key Value Parser Delivers Useful Information Fast


Here's the pipeline rule that should work; it works for my firewall, which gives the same message type.

rule "Firewall Syslog Parser"
when
  has_field("message")
then
  set_fields(
    fields:
      key_value(
        value: to_string($message.message),
        trim_value_chars: "\"",
        trim_key_chars: "",
        delimiters: " ",
        kv_delimiters: "="
      )
  );
end

Yeah, that should work. What output do you get? Does it do nothing at all? Have you looked in server.log for any errors?

Thank you @Joel_Duffield,
this was the path I needed to reach the solution.

Thank you @s0p4L1N,
the change you made to the pipeline rule helps me understand much more quickly what it does.

Yes @Joel_Duffield. Before this post I looked for error messages in the Graylog server log. I even changed the log level to "error" and "debug" to see if it would give more detail… but it didn't give me any clues.

The error that appears now, after the pipeline runs, is this:

gl2_processing_error
Replaced invalid timestamp value in message <8a6e1e80-4dc6-11ee-85bf-02420a0a0a07> with current time - Value <2023-09-07T18:36:01-0300> caused exception: Invalid format: "2023-09-07T18:36:01-0300" is malformed at "T18:36:01-0300".

And in the logs, this (though I don't think it explains anything):

2023-09-07 17:26:04,701 INFO : org.graylog2.bootstrap.ServerBootstrap - Graylog server 5.1.4+6fa2de3 starting up
2023-09-07 17:26:04,703 INFO : org.graylog2.bootstrap.ServerBootstrap - JRE: Eclipse Adoptium 17.0.8 on Linux 5.15.0-25-generic
2023-09-07 17:26:04,704 INFO : org.graylog2.bootstrap.ServerBootstrap - Deployment: docker
2023-09-07 17:26:04,705 INFO : org.graylog2.bootstrap.ServerBootstrap - OS: Ubuntu 22.04.2 LTS (jammy)
2023-09-07 17:26:04,706 INFO : org.graylog2.bootstrap.ServerBootstrap - Arch: amd64
2023-09-07 17:26:04,836 INFO : org.graylog2.bootstrap.ServerBootstrap - Running 54 migrations...
2023-09-07 17:26:05,797 INFO : org.graylog2.shared.initializers.PeriodicalsService - Starting 26 periodicals ...
2023-09-07 17:26:05,798 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ThroughputCalculator] periodical in [0s], polling every [1s].
2023-09-07 17:26:05,827 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
2023-09-07 17:26:05,845 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling every [1s].
2023-09-07 17:26:05,858 INFO : org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration - Legacy default stream has no connections, no migration needed.
2023-09-07 17:26:05,875 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [120s], polling every [20s].
2023-09-07 17:26:05,900 INFO : org.graylog2.shared.initializers.PeriodicalsService - Not starting [org.graylog2.periodical.ContentPackLoaderPeriodical] periodical. Not configured to run on this node.
2023-09-07 17:26:05,901 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2023-09-07 17:26:05,916 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexBlockCheck] periodical in [0s], polling every [30s].
2023-09-07 17:26:05,992 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexRetentionThread] periodical in [0s], polling every [300s].
2023-09-07 17:26:06,041 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexRotationThread] periodical in [0s], polling every [10s].
2023-09-07 17:26:06,060 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.NodePingThread] periodical in [0s], polling every [1s].
2023-09-07 17:26:06,075 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.VersionCheckThread] periodical in [300s], polling every [1800s].
2023-09-07 17:26:06,088 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ThrottleStateUpdaterThread] periodical in [1s], polling every [1s].
2023-09-07 17:26:06,098 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
2023-09-07 17:26:06,102 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.events.ClusterEventCleanupPeriodical] periodical in [0s], polling every [86400s].
2023-09-07 17:26:06,111 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], polling every [3600s].
2023-09-07 17:26:06,165 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.TrafficCounterCalculator] periodical in [0s], polling every [1s].
2023-09-07 17:26:06,192 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.indexer.fieldtypes.IndexFieldTypePollerPeriodical] periodical in [0s], polling every [1s].
2023-09-07 17:26:06,208 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.scheduler.periodicals.ScheduleTriggerCleanUp] periodical in [120s], polling every [86400s].
2023-09-07 17:26:06,224 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ESVersionCheckPeriodical] periodical in [0s], polling every [30s].
2023-09-07 17:26:06,258 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.UserSessionTerminationPeriodical] periodical, running forever.
2023-09-07 17:26:06,278 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.telemetry.cluster.TelemetryClusterInfoPeriodical] periodical in [0s], polling every [540s].
2023-09-07 17:26:06,280 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.sidecar.periodical.PurgeExpiredSidecarsThread] periodical in [0s], polling every [600s].
2023-09-07 17:26:06,282 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.sidecar.periodical.PurgeExpiredConfigurationUploads] periodical in [0s], polling every [600s].
2023-09-07 17:26:06,284 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.views.search.db.SearchesCleanUpJob] periodical in [3600s], polling every [28800s].
2023-09-07 17:26:06,304 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.events.periodicals.EventNotificationStatusCleanUp] periodical in [120s], polling every [86400s].
2023-09-07 17:26:06,319 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] periodical in [0s], polling every [3600s].
2023-09-07 17:26:08,376 INFO : org.glassfish.grizzly.http.server.NetworkListener - Started listener bound to [0.0.0.0:9000]
2023-09-07 17:26:08,377 INFO : org.glassfish.grizzly.http.server.HttpServer - [HttpServer] Started.
2023-09-07 17:26:08,378 INFO : org.graylog2.shared.initializers.JerseyService - Started REST API at <0.0.0.0:9000>
2023-09-07 17:26:08,379 INFO : org.graylog2.bootstrap.ServerBootstrap - Services started, startup times in ms: {LocalKafkaMessageQueueWriter [RUNNING]=0, PrometheusExporter [RUNNING]=4, UrlWhitelistService [RUNNING]=4, LocalKafkaJournal [RUNNING]=7, OutputSetupService [RUNNING]=8, ConfigurationEtagService [RUNNING]=11, FailureHandlingService [RUNNING]=11, InputSetupService [RUNNING]=12, EtagService [RUNNING]=17, LocalKafkaMessageQueueReader [RUNNING]=19, GracefulShutdownService [RUNNING]=21, UserSessionTerminationService [RUNNING]=24, BufferSynchronizerService [RUNNING]=26, MongoDBProcessingStatusRecorderService [RUNNING]=28, GeoIpDbFileChangeMonitorService [RUNNING]=48, JobSchedulerService [RUNNING]=52, StreamCacheService [RUNNING]=53, LookupTableService [RUNNING]=146, PeriodicalsService [RUNNING]=528, JerseyService [RUNNING]=2587}
2023-09-07 17:26:08,379 INFO : org.graylog2.shared.initializers.ServiceManagerListener - Services are healthy
2023-09-07 17:26:08,383 INFO : org.graylog2.bootstrap.ServerBootstrap - Graylog server up and running.
2023-09-07 17:26:08,384 INFO : org.graylog2.shared.initializers.InputSetupService - Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
2023-09-07 17:26:08,451 INFO : org.graylog2.shared.inputs.InputLauncher - Launching input [Syslog UDP/sfos19/64d658ba5c1b4b3b03d8741f] - desired state is RUNNING
2023-09-07 17:26:08,479 INFO : org.graylog2.shared.inputs.InputLauncher - Launching input [Syslog UDP/fortos7/64f8a32845cec561bdc9c4b8] - desired state is RUNNING
2023-09-07 17:26:08,483 INFO : org.graylog2.inputs.InputStateListener - Input [Syslog UDP/sfos19/64d658ba5c1b4b3b03d8741f] is now STARTING
2023-09-07 17:26:08,487 INFO : org.graylog2.inputs.InputStateListener - Input [Syslog UDP/fortos7/64f8a32845cec561bdc9c4b8] is now STARTING
2023-09-07 17:26:08,722 INFO : org.graylog2.inputs.InputStateListener - Input [Syslog UDP/fortos7/64f8a32845cec561bdc9c4b8] is now RUNNING
2023-09-07 17:26:08,728 INFO : org.graylog2.inputs.InputStateListener - Input [Syslog UDP/sfos19/64d658ba5c1b4b3b03d8741f] is now RUNNING

In fact it seems to be a problem with the Sophos syslog “timestamp” format.

Yeah, the timestamp probably isn't compliant. There MAY be a setting on the Sophos to pick between two syslog formats (some firewalls have this). If not, the timestamp is a normal field and you can set it in the pipeline, and there are several date-parsing functions to help. You could either read the timestamp field and transform it before putting it back, or read the timestamp from those key-value fields and then write it to the timestamp field as well.
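As an illustration, the "transform it before putting it back" approach could be sketched like this. This is only a sketch: the rule title is arbitrary, the field name sophos_timestamp is hypothetical (whatever field your key_value rule puts the raw value into), and the Joda-Time pattern matches the value shown in the error above.

```
rule "Normalize Sophos timestamp"
when
  // "sophos_timestamp" is a hypothetical field holding the raw
  // value, e.g. "2023-09-07T18:36:01-0300"
  has_field("sophos_timestamp")
then
  let ts = parse_date(
    value: to_string($message.sophos_timestamp),
    pattern: "yyyy-MM-dd'T'HH:mm:ssZ"
  );
  set_field("timestamp", ts);
end
```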


True @Joel_Duffield
From what I understand, this is the format Graylog expects:
2023-09-08 00:11:47.409

First situation:
Sophos sends a field called "timestamp", which tries to overwrite Graylog's own "timestamp" field.
As a result, the field is rejected and the parse does not happen.
Perhaps just renaming this field would already produce some result.

Second situation:
The value arrives like this, with a "T" instead of a space and "-0300" at the end:
2023-09-07T21:11:47-0300

In this case, I would have to create a pipeline that removes the "timestamp" field Sophos sends and separates it into fields:

date=2023-09-07
time=21:11:47
tz="-0300"

Would that be the way forward?
Thanks!

Yeah, I would read date and time into variables, use the timezone to convert that time if needed (for example, to UTC), and then use those variables in a set_field on the timestamp field.
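The steps above could look something like the rule below; it's a sketch that assumes the key_value rule has already produced the date, time and tz fields shown earlier (the rule name is arbitrary):

```
rule "Build timestamp from date/time/tz"
when
  has_field("date") && has_field("time") && has_field("tz")
then
  // assemble e.g. "2023-09-07 21:11:47 -0300" from the three fields
  let d = to_string($message.date);
  let t = to_string($message.time);
  let z = to_string($message.tz);
  let ts = parse_date(
    value: concat(concat(concat(concat(d, " "), t), " "), z),
    pattern: "yyyy-MM-dd HH:mm:ss Z"
  );
  set_field("timestamp", ts);
end
```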


Is there a way here to add a prefix to the fields?
Example:
ip_source -> prefix_ip_source

Thanks @Joel_Duffield,
I'll give it a try.

Not directly with this pipeline function, because it extracts the raw key as the field name with no way to modify it. But you could use the pipeline function rename_field:

rule "ip_source to prefix_ip_source"
when
  has_field("ip_source")
then
  rename_field("ip_source", "prefix_ip_source");
end

That works one field at a time; if you want to rename them all at once, I don't know how to do it other than repeating a rename_field call for each field.


@isotecviac2022 check this post: Rename Multiple Fields Using Pipeline Rule


If you want to rename multiple fields within the same pipeline rule:

rule "ip_source to prefix_ip_source"
when
  has_field("source") AND contains(to_string($message.source),"sophosfirewall01.lab.lan")
then
  rename_field("ip_source", "prefix_ip_source");
  rename_field("ip_dest", "prefix_ip_dest");
  rename_field("mac_source", "prefix_mac_source");
  rename_field("mac_dest", "prefix_mac_dest");
  rename_field("protocol", "prefix_protocol");
end

Hello @s0p4L1N,
thank you!
With a small modification I was able to resolve both Sophos syslog issues:

rule "Firewall Syslog Parser"
when
  has_field("message") and
  contains(to_string($message.message), "device_name=\"SFW\"") and
  from_input(id: "64d658ba5c1b4b3b03d8741f") == true
then
  set_fields(
    fields:
      key_value(
        value: to_string($message.message),
        trim_value_chars: "\"",
        trim_key_chars: "",
        delimiters: " ",
        kv_delimiters: "="
      ),
    prefix: "sophos_prefix_"
  );
end

Here I can prefix all the fields at once…
And this also solves the problem of the badly formatted date trying to overwrite Graylog's "timestamp" field.

So, the Sophos syslog parser problem is solved.

Now I believe this is the last challenge:
When I try to do the same thing with the FortiOS syslog, the pipeline does the work, but it duplicates the fields. Look:

Observation:
Note that the duplication occurs within the same log record; in other words, it is not leftover data from old tests.
When necessary, I delete the most recently created indices to avoid being misled.

So now I don't know whether it's possible to keep only the fields extracted from "message", that is, only the fields that were prefixed.

For this one, the pipeline:

rule "Test fortOS set prefix"
when
  has_field("message")
then
  set_fields(
    fields:
      key_value(
        value: to_string($message.message),
        trim_value_chars: "\"",
        trim_key_chars: "",
        delimiters: " ",
        kv_delimiters: "="
      ),
    prefix: "fortnetPrefix_"
  );
  // set_fields(fields, prefix: "ftos_");
end
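One possible workaround, since key_value cannot drop the originals by itself, would be a second rule that removes the unprefixed duplicates with remove_field. This is only a sketch: the rule name is arbitrary, and the field names below are just a few examples taken from the FortiOS message; the full list would have to be spelled out by hand.

```
rule "Drop unprefixed FortiOS duplicates"
when
  has_field("fortnetPrefix_devname")
then
  // remove duplicates that sit next to their fortnetPrefix_ copies;
  // extend this list with the remaining fields as needed
  remove_field("devname");
  remove_field("srcip");
  remove_field("srcport");
  remove_field("dstip");
  remove_field("dstport");
  remove_field("action");
end
```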

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.