The problem is that the log is very long, with many key-value pairs, and it doesn't follow any common format such as JSON. As a result, it takes a lot of processing power to parse it with one big regular expression or to pull out each key with a separate extractor.
I actually tried that and wrote a regex for every key. I ended up with 20 or so extractors, but my server couldn't handle such heavy extraction and became overloaded.
So, what is your suggestion for making an extractor for this type of log message?
I've checked the Cisco docs and found that only a handful of message IDs use key-value parameters: 430001, 430002, 430003, 430004 and 430005.
So the best approach is probably to extract the message ID first and apply the KV pipeline rule only to those messages.
Create a pipeline rule to extract the severity and message ID:
rule "Cisco FirePower grok"
when
  contains(to_string($message.message), "%FTD", true)
then
  set_fields(
    grok(
      pattern: "\\%FTD-%{DATA:ftd_severity:int}-%{DATA:ftd_messageid}: %{GREEDYDATA:cisco_msg}",
      value: to_string($message.message),
      only_named_captures: true
    )
  );
end
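To sanity-check what the grok pattern produces before wiring it into a pipeline, the same header parse can be sketched with a plain regex in Python. The sample syslog line below is an assumption for illustration, not taken from the question; real FTD events carry many more fields.

```python
import re

# Rough Python equivalent of the grok pattern above: severity and
# message ID come from the "%FTD-<sev>-<msgid>:" header, and everything
# after the colon becomes the remaining payload (cisco_msg).
FTD_HEADER = re.compile(
    r"%FTD-(?P<ftd_severity>\d+)-(?P<ftd_messageid>\d+): (?P<cisco_msg>.*)"
)

# Hypothetical sample line (assumption, shortened for the example).
sample = "%FTD-6-430002: EventPriority: Low, DeviceUUID: abc, SrcIP: 10.0.0.1"

m = FTD_HEADER.match(sample)
fields = {
    "ftd_severity": int(m.group("ftd_severity")),   # stored as int, like :int in grok
    "ftd_messageid": m.group("ftd_messageid"),
    "cisco_msg": m.group("cisco_msg"),
}
print(fields)
```

If the header matches, the message ID is now available as its own field, which is what the second rule's condition relies on.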
Create a pipeline rule that applies the KV extraction only to message IDs 430001-430005:
rule "Cisco FirePower KV"
when
  has_field("cisco_msg") AND
  to_long($message.ftd_messageid) >= 430001 AND
  to_long($message.ftd_messageid) <= 430005
then
  set_fields(
    fields: key_value(
      value: to_string($message.cisco_msg),
      delimiters: ",",
      kv_delimiters: ":",
      // trim the spaces left around keys/values by the ", " and ": " separators
      trim_key_chars: " ",
      trim_value_chars: " "
    )
  );
end
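The key_value() call splits the payload on commas, then splits each pair on the first colon. A minimal Python sketch of that behavior (the payload string below is an assumed example, not from the question):

```python
def key_value(value, delimiters=",", kv_delimiters=":"):
    # Minimal sketch of a key/value split like the one the rule performs:
    # split pairs on `delimiters`, split key from value on the first
    # `kv_delimiters`, and trim surrounding whitespace from both sides.
    fields = {}
    for pair in value.split(delimiters):
        if kv_delimiters not in pair:
            continue  # skip fragments that have no key/value separator
        key, _, val = pair.partition(kv_delimiters)
        fields[key.strip()] = val.strip()
    return fields

# Hypothetical cisco_msg payload for a 430001-430005 style event.
cisco_msg = "EventPriority: Low, DeviceUUID: abc, SrcIP: 10.0.0.1"
print(key_value(cisco_msg))
```

Note the whitespace trimming: because the payload uses ", " and ": " as separators, skipping the trim step would leave stray spaces in every key and value.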
Create a pipeline with two stages: in stage 0 attach the "Cisco FirePower grok" rule, and in stage 1 the "Cisco FirePower KV" rule.