Timestamp vs timestamp

1. Describe your incident:

Timestamps for an application log do not match the Graylog timestamp.

The idea is to have the timestamp from the full_message field shown everywhere (Timestamp [upper left corner] and timestamp [bottom]).

2. Describe your environment:

  • OS Information: Ubuntu 20.04 LTS

  • Package Version: 4.3.9 + OS 1.3.6

  • Service logs, configurations, and environment variables:

Logs are ingested by an old GL server and automatically forwarded to the new one via an output rule.
This will change in the future and only the new GL instance will prevail.

3. What steps have you already taken to try and solve the problem?

Tried with some extractors but no cigar…

4. How can the community help?

Is it possible at all?

Or is the delay caused by the forwarder?


Hey @m_mlk

What I understand is that at 16:25:06 the log/event/message was created.
At 16:25:09 it was processed, and the message display is exactly one hour ahead.

Have you checked the timestamp for the logged-in user? Basically, ensure that the Graylog server time is correct from the operating system to the user and everything in between.


Hi @gsmith

Hmm…a better example is shown below:

As you can appreciate, the timestamps are almost the same; they differ only by a few seconds.

The goal, if feasible, is to have the timestamp from “full_message” everywhere.

However, if I understand you correctly, “full_message” shows the timestamp when the message was generated, while Timestamp/timestamp show when the message was processed by Graylog… that would mean it is not possible to do what I’m aiming for, right?


Ok, a bit different.

Almost. The timestamp in the message was generated at the source, and Elasticsearch indexes that message; as you know, ES uses UTC by default. So to display the same timestamp in all three places, you may need to look at the following:

  • Graylog server System/Overview, i.e., Time configuration
  • Remote device time zone
  • Web UI user time zone

In the forum I have seen cases where these were all set correctly, but unfortunately a member still had to use a pipeline to correct the time issue.
What I do see is a 7-second delay; in a case like that, ensure you have enough resources for Elasticsearch/Graylog. It may be due to a delay in ingesting logs from the journal, which is a really long time.

To me it looks as if the “timestamp” is set by Graylog based on the arrival time of the log message. The field “full_message” contains the timestamp from the application, plus some other information (DEBUG and blurred stuff). Am I right?

My approach in this case would be:

  1. Build a grok pattern to get the timestamp from the field “full_message”.
  2. Build a pipeline to extract that value and put it into “timestamp”. Make sure to use a date type and not a string. A little magic with flex_parse_date might be necessary.
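The two steps above could combine into a single rule along the lines of the sketch below. This is only an illustration, not a tested rule: it assumes the application timestamp sits inside full_message in an ISO-8601-like format, and the capture name app_time is made up.

```
rule "Set timestamp from full_message"
when
  has_field("full_message")
then
  // Step 1: grok the application timestamp out of full_message
  // (the pattern is an assumption; adjust it to the real log format).
  let parts = grok(pattern: "%{TIMESTAMP_ISO8601:app_time}", value: to_string($message.full_message), only_named_captures: true);
  // Step 2: convert it to a real date (not a string) and overwrite "timestamp".
  // flex_parse_date is forgiving about the exact format.
  set_field("timestamp", flex_parse_date(value: to_string(parts.app_time)));
end
```

Attach the rule to a pipeline connected to the relevant stream so it runs during processing.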

Hi @gsmith

The GL GUI shows this:


so it seems like everything is in sync, even though the Graylog servers are set like this:

$ timedatectl 
               Local time: Mon 2022-12-05 18:01:51 CET
           Universal time: Mon 2022-12-05 17:01:51 UTC
                 RTC time: Mon 2022-12-05 17:01:52    
                Time zone: Europe/Zurich (CET, +0100) 
System clock synchronized: yes                        
              NTP service: active                     
          RTC in local TZ: no

…and OpenSearch servers are set like this:

$ timedatectl 
      Local time: Mon 2022-12-05 17:02:42 UTC
  Universal time: Mon 2022-12-05 17:02:42 UTC
        RTC time: Mon 2022-12-05 17:02:42
       Time zone: Etc/UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

The screenshot above was taken after I copied the timedatectl output.

Or does the time zone difference influence the way logs are handled internally?


Looks good, BUT for a 7-second delay I would concur with @ihe’s suggestion; I didn’t think about…

on the arrival time of the log message.

I was going by the ingestion time of the logs. Perhaps it’s both.

Graylog defaults to utilizing the timestamp provided within the log as the timestamp in the timestamp field, but in the case where it cannot automatically parse the timestamp from a syslog message, it will overwrite that timestamp with the time the message is processed by Graylog to avoid dropping the message (“timestamp” is one of three required fields for all messages to have in Graylog, the other two being “message” and “source”). The easiest way to rectify this is to use processing pipelines to parse the timestamp from the front of the message and replace the value of “timestamp” with the timestamp you get from the message. @chris.black might have an example rule for this lying around somewhere.
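The behavior described above could be approximated with a pipeline rule like the following sketch. It is not Graylog’s built-in logic, just an illustration: the leading-timestamp pattern and the field name original_receive_time are assumptions.

```
rule "Replace timestamp with app time"
when
  has_field("message")
then
  // Keep Graylog's processing-time timestamp around for comparison
  // (the field name is an assumption).
  set_field("original_receive_time", $message.timestamp);
  // Parse the timestamp from the front of the message and overwrite "timestamp"
  // (the anchored ISO-8601 pattern is an assumption; adjust to the real format).
  let m = grok(pattern: "^%{TIMESTAMP_ISO8601:app_time}", value: to_string($message.message), only_named_captures: true);
  set_field("timestamp", flex_parse_date(value: to_string(m.app_time)));
end
```

Comparing original_receive_time with the new timestamp would also make any transit or processing delay visible per message.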

Correct me if I’m wrong. So in this case, if it cannot parse the timestamp in the message, the 7-second delay shown above could be from when the message was processed? This is kind of new to me.

That’s correct @gsmith. If the time in the original message is seven seconds earlier than the timestamp Graylog assigns, that would indicate that there is some delay either in transit, or perhaps in processing in the Graylog server. Seven seconds is a long delay inside Graylog, but if the system is buffering, or even writing to the journal, it could easily introduce a seven second delay.

hi @chris.black

thanks for the extra clarification.
@joe.gross said that you’d have an example on how to fix this… or did I understand him wrong? :slight_smile:


He said I might have a sample pipeline rule. I thought I did too, but can’t find it now. I’ll keep digging and post it here if I can find it.


I was able to get one from a colleague. This is not universal, but should get you started.

rule "Parse - Nginx Timestamp"
when
  has_field("nginx_timestamp")
then
  let new_time = parse_date(value: to_string($message.nginx_timestamp), pattern: "yyyy-MM-dd'T'HH:mm:ssZ");
  set_field("timestamp", new_time);
end


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.