We have two Graylog servers running: one for the production firewall with a manual installation (Graylog 2.0.2 (4da1379) on VC1LOG001, Oracle Corporation 1.8.0_77 on Linux 4.2.0-42-generic) and one newly deployed via OVF (Graylog 2.4.5+8e18e6a on VC1LOG002, Oracle Corporation 1.8.0_172 on Linux 4.4.0-127-generic).
The timestamp on the old 2.0.2 Graylog machine is fine:
timestamp = 2018-06-27T11:24:08.436Z
On the new one it's also good:
timestamp = 2018-06-27T13:25:46.000Z
But on VC1LOG001 we see the Graylog timestamp as is, i.e. 2018-06-27 11:24:08, while on VC1LOG002 it keeps adding 2 hours, so it becomes 2018-06-27 15:25:46.
graylog-ctl timezone is set to Europe/Amsterdam.
ubuntu@VC1LOG002:~$ timedatectl
Local time: Wed 2018-06-27 13:27:32 CEST
Universal time: Wed 2018-06-27 11:27:32 UTC
Timezone: Europe/Amsterdam (CEST, +0200)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2018-03-25 01:59:59 CET
Sun 2018-03-25 03:00:00 CEST
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2018-10-28 02:59:59 CEST
Sun 2018-10-28 02:00:00 CET
ubuntu@VC1LOG002:~$
How do I get the timestamp from the log message to be displayed as the Graylog timestamp? Because right now I can't use the "Past 5 minutes" search.
From the "overview" page of both servers:
VC1LOG001
User admin: 2018-06-27 13:32:08 +02:00
Your web browser: 2018-06-27 13:32:08 +02:00
Graylog server: 2018-06-27 13:32:08 +02:00
VC1LOG002:
User admin: 2018-06-27 13:33:11 +02:00
Your web browser: 2018-06-27 13:33:11 +02:00
Graylog server: 2018-06-27 13:33:11 +02:00
-edit2-
Would this get me a correct timestamp?
rule "fortigate timestamp"
when
has_field("devname")
then
let build_message_0 = concat(to_string($message.date), "T");
let build_message_1 = concat(build_message_0, to_string($message.time));
let build_message_2 = concat(build_message_1, "Z");
let new_timestamp = parse_date(build_message_2, "yyyy-MM-dd HH:mm:ssZ");
set_field("timestamp", new_timestamp);
end
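Probably not as written: the concatenation produces a string like 2018-06-27T13:25:46Z, but the pattern uses a space instead of the T, and an unquoted Z is read by Joda-Time as a numeric offset rather than a literal character. A sketch of a corrected rule, assuming the FortiGate date and time fields look like 2018-06-27 and 13:25:46 (and keeping the T separator quoted as a literal):

rule "fortigate timestamp"
when
has_field("devname")
then
// assumed field formats: date = "2018-06-27", time = "13:25:46"
let ts_string = concat(concat(to_string($message.date), "T"), to_string($message.time));
// quote the literal T; the input carries no offset, so name the source timezone instead
let new_timestamp = parse_date(value: ts_string, pattern: "yyyy-MM-dd'T'HH:mm:ss", timezone: "Europe/Amsterdam");
set_field("timestamp", new_timestamp);
end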
I tried this:
rule "fortigate timestamp"
when
has_field("devname")
then
let build_message_0 = concat(to_string($message.date), "T");
let build_message_1 = concat(build_message_0, to_string($message.time));
let build_message_2 = concat(build_message_1, "Z");
let new_timestamp = parse_date(build_message_2, "yyyy-MM-dd HH:mm:sssZEurop/Amstedam");
set_field("timestamp_test", new_timestamp);
end
But I don't see a field timestamp_test.
-edit-
Couldn't I add a static field for the timezone or something to the input?
I use the following rule to correct the timestamp on Cisco devices:
rule "cisco (3.1) correct timestamp IOS"
// we want to create ISO8601 Timestamps
// make 'Feb 15 2015 13:33:22.111 UTC' ISO8601
when
has_field("cisco_message") AND
has_field("log_date") AND
grok(pattern: "%{MONTH} %{MONTHDAY} %{YEAR} %{TIME}", value:to_string($message.log_date)).matches == true
then
let time = parse_date(value:to_string($message.log_date), pattern:"MMM dd yyyy HH:mm:ss.SSS", timezone:"UTC");
set_field("timestamp",time);
end
The timestamp is still +2, and the timestamp in the log message still has a Z behind it.
I'm using this now:
rule "fortigate timestamp"
when
has_field("devname") && has_field("date") && has_field("time")
then
let build_message_0 = concat(to_string($message.date), " ");
let build_message_1 = concat(build_message_0, to_string($message.time));
let new_timestamp = parse_date(value:to_string(build_message_1), pattern:"yyyy-MM-dd HH:mm:sss", timezone:"Europ/Amstedam");
set_field("timestamp", new_timestamp);
end
In which log files should I look?
I think you are referring to the stages? Because right now, it's in stage 1.
-edit-
Noticed a typo in the rule.
I modified it so that stage 0 checks for the field and stage 1 then tries to apply the timestamp changes. No luck so far…
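For reference, a version of the last rule with its two typos fixed (the pattern's stray third s and the misspelled timezone) might look like the sketch below; the field formats are an assumption:

rule "fortigate timestamp"
when
has_field("devname") && has_field("date") && has_field("time")
then
// assumed field formats: date = "2018-06-27", time = "13:25:46"
let ts_string = concat(concat(to_string($message.date), " "), to_string($message.time));
// "ss" (not "sss") for two-digit seconds, and "Europe/Amsterdam" spelled out fully
let new_timestamp = parse_date(value: to_string(ts_string), pattern: "yyyy-MM-dd HH:mm:ss", timezone: "Europe/Amsterdam");
set_field("timestamp", new_timestamp);
end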
I thought I'd try something else: I added a static field "timestamp_test" with value "null" to the input.
Then I changed my pipeline rule to modify that field, and this works.
I'm wondering why your pipeline works, because the docs say:
On the Configurations page, you need to enable the Pipeline Processor message processor and, if you want your pipelines to have access to static fields set on inputs and/or fields set by extractors, set the Pipeline Processor after the Message Filter Chain.
That’s my experience, too
EDIT:
Now I think I know why the pipeline works in spite of the order: you are not accessing the value of a static field.
The first timestamp is translated to the user's local timezone (set in the profile for normal users; for the root user it is whatever is written in the configuration). The second field just displays the time as it is saved in Elasticsearch, which is UTC.
That will be fixed in 3.0; the display of the timestamps will be unified.