Hello, I doubt I am the first one asking this, but I can't find any source that gives me a satisfying answer.
We are using Graylog to store logs from newer apps (directly through a GELF input) and from older ones by shipping their log files with Filebeat.
In both cases we ran into an ordering problem: even with millisecond precision in the ISO 8601 timestamp and a single-threaded input collector, logs ended up in Graylog in a different order than in the physical log file on disk.
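To make the collision concrete, here is a minimal illustration (plain Python, no Graylog involved): two events generated within the same millisecond are distinguishable at microsecond precision, but become identical once truncated to milliseconds, so their relative sort order is lost.

```python
from datetime import datetime, timezone

# Two events logged in quick succession, inside the same millisecond.
t1 = datetime(2024, 1, 1, 12, 0, 0, 123456, tzinfo=timezone.utc)
t2 = datetime(2024, 1, 1, 12, 0, 0, 123789, tzinfo=timezone.utc)

# Full microsecond ISO 8601 strings still sort in the original order...
assert t1.isoformat() < t2.isoformat()

# ...but truncated to millisecond precision (which is what a
# millisecond-resolution date field effectively stores) they collide,
# so the order between the two events becomes undefined.
ms1 = t1.isoformat(timespec="milliseconds")
ms2 = t2.isoformat(timespec="milliseconds")
assert ms1 == ms2
```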
We decided to use microsecond precision in the apps to ensure correct order, but then we found out that Graylog / Elasticsearch detects the timestamp and truncates it to millisecond precision, because the date type it is stored as does not support microseconds.
Our workaround: we deliberately malform the timestamp so that Graylog / Elasticsearch no longer identifies it as a date. The malformed timestamp is then stored simply as text/string, and we order the logs by that string. Works great.
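A sketch of one way to implement this idea, slightly varied: instead of mangling the main timestamp, carry the full-precision timestamp in an extra GELF field (leading underscore, per the GELF payload spec) so it gets indexed as a string and can be sorted lexically. The field name `_ts_us` and the Graylog host/port are my own placeholders, not anything Graylog mandates.

```python
import json
import socket
from datetime import datetime, timezone

def build_gelf_event(message, source_host=None):
    """Build a GELF 1.1 payload that carries a microsecond-precision
    ISO 8601 timestamp as an extra string field (`_ts_us` is our own
    naming convention). The standard `timestamp` field stays as
    seconds-since-epoch and will still be truncated on the server side."""
    now = datetime.now(timezone.utc)
    return {
        "version": "1.1",
        "host": source_host or socket.gethostname(),
        "short_message": message,
        # Standard GELF timestamp: seconds since epoch with fraction.
        "timestamp": now.timestamp(),
        # Extra field: full-precision timestamp as a plain string, so it
        # is not detected as a date and sorts lexically in correct order.
        "_ts_us": now.isoformat(timespec="microseconds"),
    }

def send_gelf(message, host="graylog.example.com", port=12201):
    """Fire the event at a GELF UDP input (host/port are placeholders)."""
    payload = build_gelf_event(message)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(json.dumps(payload).encode("utf-8"), (host, port))
    finally:
        sock.close()
```

Because ISO 8601 timestamps in a fixed format and a single timezone sort lexically the same as chronologically, ordering by the `_ts_us` string gives the same result as ordering by the real event time.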
I just think we can't be the only ones having this problem, but there is no solution that is, well... "satisfying" to me.
There is this huge ecosystem of logging tools (Graylog, the ELK stack, ...) and no one needs their data in order? Are we logging too fast? Are our logs in the wrong format (not simple standalone events, but series of interconnected events)?
How did this ecosystem grow up without a need for microsecond precision?
There are some posts on GitHub and on the Graylog pages asking the same question, but again, no real solution.