Microsecond precision

Hello, I doubt I am the first one asking this, but I can't find any source that gives me a satisfying answer.

We are using Graylog to store logs from newer apps (through a GELF input directly) and from older ones by shipping their on-disk log files with Filebeat.

In both cases we ran into an ordering problem. Even with millisecond precision in the ISO 8601 timestamp and a single-threaded input collector, logs ended up in Graylog in a different order than in the physical log file on disk.
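To illustrate why millisecond precision loses ordering, here is a minimal Python sketch (not our actual app code): two events half a millisecond apart collapse to the same timestamp once microseconds are truncated.

```python
from datetime import datetime, timezone

# Two events logged within the same millisecond.
t1 = datetime(2024, 1, 2, 3, 4, 5, 123400, tzinfo=timezone.utc)
t2 = datetime(2024, 1, 2, 3, 4, 5, 123900, tzinfo=timezone.utc)  # 500 us later

def to_millis_iso(dt):
    # Truncate microseconds to milliseconds, as a date field stores them.
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}Z"

print(to_millis_iso(t1))  # 2024-01-02T03:04:05.123Z
print(to_millis_iso(t2))  # 2024-01-02T03:04:05.123Z -- identical, order is gone
```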

We decided to use microsecond precision in the apps to ensure correct order, but then we found out that Graylog / Elasticsearch detects the timestamp and truncates it to millisecond precision in order to store it as a date, which does not support microsecond precision.

So we deliberately malformed the timestamp so that Graylog/Elasticsearch does not identify it as a date. The malformed timestamp is stored simply as text/string, and we order the logs by that string. Works great.
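For illustration only (our exact format differs), a sender can put the full microsecond timestamp into an extra field in a shape the date detector should not pick up, so it gets indexed as a string; zero-padded fields make lexicographic order match chronological order:

```python
from datetime import datetime, timezone

def sortable_micro_ts(dt):
    # Deliberately "malformed": space instead of 'T', no zone designator,
    # so dynamic date detection should not coerce it to a date. Zero padding
    # via %f (always 6 digits) keeps string order == time order.
    return dt.strftime("%Y-%m-%d %H:%M:%S.%f")

a = datetime(2024, 1, 2, 3, 4, 5, 123400, tzinfo=timezone.utc)
b = datetime(2024, 1, 2, 3, 4, 5, 123900, tzinfo=timezone.utc)

print(sortable_micro_ts(a))  # 2024-01-02 03:04:05.123400
assert sortable_micro_ts(a) < sortable_micro_ts(b)  # microsecond order survives
```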

I just think we can't be the only ones having this problem, yet there is no … for me "satisfying" solution.

There is this huge ecosystem of logging apps, Graylog, the ELK stack, … and no one needs their data in order? Are we logging too fast? Are our logs in the wrong format (not simple single events, but series of interconnected events)?

How did this ecosystem grow up without a need for microsecond precision?

There are some posts on GitHub and the Graylog pages asking the same question, but again, no real solution.
For example:

Hello @Johny

I see your issue, and I also did a quick search on this subject. I found a couple of newer posts on GitHub.

Then I came across this post, where a member used a pipeline.

Hope that helps
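For what it's worth, the pipeline approach hinted at above might look roughly like this sketch (the field names `timestamp_us` and `ts_sort` are hypothetical; `has_field`, `to_string`, and `set_field` are standard pipeline functions):

```
rule "keep microsecond timestamp as a sortable string"
when
  has_field("timestamp_us")
then
  // copy the full-precision value into a plain string field
  // that the search backend will not coerce to a date
  set_field("ts_sort", to_string($message.timestamp_us));
end
```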

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.