Increase timestamp precision

1. Describe your incident:

We see that several log entries for different events have the same timestamp in Graylog, for example:

2022-10-18 17:02:20.688

2022-10-18 17:02:20.688

This is a huge problem for us because we cannot tell which event actually happened first.

2. Describe your environment:

  • OS Information: Ubuntu 20.04 LTS

  • Package Version: Graylog 4.3.8 + OpenSearch 1.3.3

  • Service logs, configurations, and environment variables:

$ grep -v \# /etc/graylog/server/server.conf | uniq 
is_leader = true

node_id_file = /etc/graylog/server/node-id

password_secret = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

root_password_sha2 = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx

root_email = "root@domain.tld"

bin_dir = /usr/share/graylog-server/bin

data_dir = /var/lib/graylog-server

plugin_dir = /usr/share/graylog-server/plugin

http_bind_address = 0.0.0.0:9000

elasticsearch_version = 7

trusted_proxies = 127.0.0.1/32, 10.10.10.10/32

elasticsearch_hosts = http://admin:admin@node-1:9200,http://admin:admin@node-2:9200,http://admin:admin@node-3:9200

rotation_strategy = count

elasticsearch_max_docs_per_index = 20000000

elasticsearch_max_number_of_indices = 20

retention_strategy = delete

elasticsearch_shards = 4
elasticsearch_replicas = 0

elasticsearch_index_prefix = graylog

allow_leading_wildcard_searches = true

allow_highlighting = false

elasticsearch_analyzer = standard

elasticsearch_index_optimization_jobs = 30

output_batch_size = 5000

output_flush_interval = 1

output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

processbuffer_processors = 2
outputbuffer_processors = 2

processor_wait_strategy = blocking

ring_size = 65536

inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking

message_journal_enabled = true

message_journal_dir = /var/lib/graylog-server/journal

message_journal_max_age = 12h
message_journal_max_size = 10gb

lb_recognition_period_seconds = 3

mongodb_uri = mongodb://mongodb-server:27017/graylog

mongodb_max_connections = 1000

mongodb_threads_allowed_to_block_multiplier = 5

transport_email_web_interface_url = https://node.domain.tld

proxied_requests_thread_pool_size = 32

prometheus_exporter_enabled = true

prometheus_exporter_bind_address = 0.0.0.0:9090

3. What steps have you already taken to try and solve the problem?

I searched the forums and found some timestamp-related posts, but none of them addresses the issue at hand here.

Our developers have increased the timestamp precision in the application that ships logs to Graylog by using this format:

date:format=yyyy-MM-dd HH\\:mm\\:ss.ffffff

…but we still cannot see this new format in Graylog.

4. How can the community help?

Is there any configuration setting that allows increasing the timestamp precision?

TIA!

Best

Hey @m_mlk

Need to ask ya a couple questions.

What type of logs are sent?

What type of input are you using for those logs?

By chance, do you have other streams branching off the "All messages" stream? If so, was the tick box enabled to remove duplicates?

From what I have seen, it's best to manipulate the timestamp from within Graylog rather than try to force Graylog to use a specific timestamp configuration.

Is it possible to show a screenshot of this issue? If so, you know the routine.

EDIT:

Not sure if this will help, but here are a couple of examples I have in my arsenal.

rule "add pipeline_processingtime"
when 
    has_field("input_time")
then
    let ingestion_time = to_date($message.input_time).millis;
    let current_time = to_date(now()).millis;
    let millis_diff = to_long(to_long(current_time) - to_long(ingestion_time));

    set_field("pipeline_processingtime", millis_diff);
end
rule "add ingestion timestamp"
when 
    true
then
    let ingestion_time = now();
    let timestamp_time = parse_date(to_string($message.timestamp), "yyyy-MM-dd'T'HH:mm:ss.SSSZ");
    set_field("timestamp_text", to_string($message.timestamp,"default_text"));
    set_field("timestamp2b", $message.timestamp);
    set_field("timestamp2", timestamp_time);
    set_field("ingestion_time",ingestion_time);
    set_field("ingestion_time_millis", ingestion_time.millis);
    set_field("timestamp_millis",to_date($message.timestamp).millis); 
end

If an extractor is created, you could use the Date converter.


I just noticed something in the config file. By chance, was the Graylog root user meant to have UTC as its time zone?

# Default is UTC
#root_timezone = UTC

@m_mlk

OK, so doing some testing, I have also found the same issue. I made a quick widget, as shown below.

This was something I had not noticed or looked into yet, so I did some more research on nanoseconds, microseconds, etc. in log files.

I found this.

Which led me here.

Looking into nanoseconds, I found this.

Going down the rabbit hole, I found this.

Not sure if you have seen all these, but I would guess that somehow a pipeline would work for this type of issue.
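
For example, something along these lines might preserve the ordering. This is just a sketch: it assumes the six-digit fraction is still present in the raw message text, and timestamp_micros is a made-up field name.

rule "preserve sub-millisecond ordering"
when
    // Only fire when the raw message carries a six-digit fraction.
    regex("\\d{2}:\\d{2}:\\d{2}\\.(\\d{6})", to_string($message.message)).matches == true
then
    let m = regex("\\d{2}:\\d{2}:\\d{2}\\.(\\d{6})", to_string($message.message));
    // The stored timestamp only keeps milliseconds, so park the full
    // fraction in a separate numeric field to sort on as a tie-breaker.
    set_field("timestamp_micros", to_long(m["0"]));
end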

@tmacgbay what would you think?

I think you put a lot of work in there @gsmith!!

Looking at the stated doc for time formatting here, there is no format comparable to ffffff (FN Time!)

It appears the best you can do is milliseconds: SSS

You can either submit a bug/feature request to Graylog, which will take some time (but definitely not fff or SSS time), or, since you mentioned the developers made changes to accommodate, maybe they can make a separate differentiator or incrementor on their side…
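
If they did add something like that (say, a per-source counter; the seq field name here is purely an assumption), picking it up on the Graylog side would be trivial:

rule "copy shipper sequence number"
when
    has_field("seq")
then
    // "seq" is a hypothetical monotonic counter set by the shipping
    // application; messages sharing the same millisecond timestamp
    // can then be ordered by this field.
    set_field("sequence_number", to_long($message.seq));
end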

It could also be that the log shipping is happening in batches and the messages are being assigned their time at that point… Do you really have timestamps with no difference at all down to the millisecond? (yow!)

It’s not clear, though, what you are actually seeing come through - can you show an example message?
