To set the scene: A message with timestamp “2019-01-04 14:31:22.263Z” exists in a stream. When I view this stream and select “Search in all messages” and search for field foo = B (foo:"B") it will show up in the list of messages.
Now, at 14:45 (UTC), I change the dropdown to “Search in the last 2 hours”, then click the magnifying glass to update the search, and… no messages are found.
I’m kind of stumped as to why this happens, because the message is most definitely within the last 2 hours, yet it will not show up in a relative search, nor does it in an absolute search. Only when I select “Search in all messages” does it show up.
This is currently kind-of-sort-of making our devs a little unhappy so any answers welcome.
Additional info: I’m using my admin account to do this; messages are processed through pipeline OK (debug statements verify this). Timestamp field in the message is listed as above.
If you can find it in all messages, check the timestamp on the message.
Then check the time on the Graylog web UI.
Then the time on the ES servers.
Last time I saw this it was a time difference: Graylog put a timestamp from the future on the message, so it wasn’t within the last X minutes. You can also try an absolute search (from midnight to the next (not past) midnight).
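The failure mode described above (a future timestamp falling outside a relative window) can be sketched in Python; the window length and example timestamps are made up for illustration, this is not how Graylog itself evaluates searches:

```python
from datetime import datetime, timedelta, timezone

def in_relative_window(msg_ts, now, window):
    """A relative search only matches timestamps in [now - window, now]."""
    return now - window <= msg_ts <= now

now = datetime(2019, 2, 4, 14, 45, 0, tzinfo=timezone.utc)
window = timedelta(hours=2)

ok = datetime(2019, 2, 4, 14, 31, 22, tzinfo=timezone.utc)      # ~14 minutes ago
skewed = datetime(2019, 2, 4, 15, 31, 22, tzinfo=timezone.utc)  # stored an hour ahead

print(in_relative_window(ok, now, window))      # True
print(in_relative_window(skewed, now, window))  # False: "future" timestamps never match
```

A message stamped even one second into the future is excluded from every relative window, while “Search in all messages” skips the time filter entirely, which matches the symptom.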
Could you provide a screenshot that contains the following:
The first timestamp is the timestamp field in the current user’s timezone. The second is the timestamp field with its raw value. In my case both are UTC and equal, but that might be different if you have other timezone settings in your profile (or set as root_timezone when using the hardcoded admin account).
I’m a terrible screenshotter - the first timestamp displays as 2019-02-04 15:31:22.263 - the second timestamp (as stored in ES) is 2019-02-04 14:31:22.263Z (I typo’d in my original post, it’s 02 for month not 01 :D).
My user timezone is set to Paris.
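Those two values are consistent with a UTC-stored timestamp rendered in a Paris profile (UTC+1 in February). A quick check using Python’s zoneinfo, nothing Graylog-specific:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Timestamp as stored in Elasticsearch (always UTC)
stored = datetime(2019, 2, 4, 14, 31, 22, 263000, tzinfo=timezone.utc)

# What a user with a Europe/Paris profile sees (CET, UTC+1 in winter)
displayed = stored.astimezone(ZoneInfo("Europe/Paris"))

print(stored.isoformat())     # 2019-02-04T14:31:22.263000+00:00
print(displayed.isoformat())  # 2019-02-04T15:31:22.263000+01:00
```

Both represent the same instant, so the display difference alone doesn’t explain the search miss.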
This actually only happens in one specific set of logs. We have an application that used to log in “plain text” and now logs in JSON, so there are two pipelines: one with an expensive regex to exclude the JSON logs, and one to include them. It’s only the JSON logs that get messed up. Oddly enough it used to work, and we’ve not made any changes to anything (at least that I can tell), so I’m a little stumped, still.
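For what it’s worth, routing JSON vs. plain-text lines doesn’t strictly need an expensive regex; a cheap structural check before a full parse usually suffices. A sketch of the idea in Python (the sample log lines are invented):

```python
import json

def looks_like_json(line: str) -> bool:
    """Cheap structural check first, full parse only if it might be JSON."""
    line = line.strip()
    if not (line.startswith("{") and line.endswith("}")):
        return False
    try:
        json.loads(line)
        return True
    except ValueError:
        return False

print(looks_like_json('{"level": "INFO", "msg": "started"}'))  # True
print(looks_like_json('2019-02-04 14:31:22 INFO started'))     # False
```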
I’ll take a look and see what comes out of ES directly, I have a sneaking suspicion it may have decided to do something strange.
I’m almost starting to think it may be an ES issue, I haven’t had time to really dig in and debug but I will do so sometime this week, will update the topic with what I find.
Okay, so I changed parse_date to flex_parse_date, which seems to have solved the off-by-one in the month, but it doesn’t seem to parse the microseconds properly (they always come back as 000).
For me it’s not a big deal: logs go in, and they get parsed. The devs, on the other hand, love their microseconds, so they’ve been bugging me. I’ll do some tests, because I’m absolutely 100% certain I had the right format passed to parse_date(), so why it would suddenly decide it’s January instead of February is a little beyond me.
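The symptom (February 4 silently becoming January 4) is exactly what a case-confused date pattern produces. parse_date() takes Joda-Time patterns, where MM is month and mm is minutes; whether that’s what happened in the original pattern is a guess, but Python’s strptime has the same trap with %m vs %M, which makes it easy to demonstrate:

```python
from datetime import datetime

ts = "2019-02-04"

good = datetime.strptime(ts, "%Y-%m-%d")  # %m = month
bad = datetime.strptime(ts, "%Y-%M-%d")   # %M = minutes; month silently defaults to January

print(good)  # 2019-02-04 00:00:00
print(bad)   # 2019-01-04 00:02:00
```

No error is raised in the bad case: the “02” is consumed as a minute value and the missing month falls back to 1, so the result looks plausible and only the month is off.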
I did, but it’s highly unlikely, because the only access to ES is through the Graylog API, and that can’t modify messages. Nobody has direct access to ES, so the only thing I can think of now is that I stumbled on an odd bug.
I’ll do some tests to see if I can get it to happen again, and then figure out why