Debug WARN logs (invalid "timestamp")

EDIT: Re-reading the first post, the error is happening on the input receiving the message? Then you can ignore all of the below, because that is after the fact (leaving it since I spent some good time writing it). Perhaps an extractor on the Input is treating it as a string? It should come in as a number… here is a post that is semi-relevant and also has a pipeline rule to change the epoch to something more helpful.
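
If you want to sanity-check what the Input actually receives, you can post a raw GELF message by hand. A minimal sketch, assuming a GELF HTTP Input listening on port 12201 (the host, port, and field values here are placeholders, not from your setup):

```
# Hypothetical GELF HTTP Input on port 12201 -- adjust host/port to yours.
# Per the GELF spec, "timestamp" is a bare number: seconds since the epoch,
# with optional decimals for millis. If your sender quotes it ("1661436000"),
# that string is exactly the kind of thing that trips the mapping up.
curl -s -X POST "http://graylog.example.com:12201/gelf" \
  -H 'Content-Type: application/json' \
  -d '{"version": "1.1", "host": "test-host", "short_message": "timestamp check", "timestamp": 1661436000.123}'
```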

I believe what is going on here is that Graylog receives the message and assigns it a Graylog-defined message ID like 8ef9d1c0-27c9-11ed-8e61-0ee8466ead25, but when Graylog goes to store it in Elasticsearch the type is different. Graylog sees the GELF timestamp as numeric or date (or something like that) and tries to send it to Elasticsearch as such; Elasticsearch says the field it has for that is keyword (string) and then likely rejects the whole message. Assuming your GELF-formatted messages are going to the correct Graylog GELF Input?

When you send data to Elasticsearch from Graylog, Elasticsearch makes assumptions about the type of data coming in, and from what I see it defaults to keyword. The thing is, it only makes this assumption once per index, when the index is first started… so if it guesses keyword at index start, keyword it stays, and messages that don't match will be rejected. I think newer versions (which are incompatible with Graylog) care a lot less about this. (You can always go to OpenSearch… which is the future of Graylog.)
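
You can ask Elasticsearch directly what type it locked in for the field. A quick sketch, assuming Elasticsearch on localhost:9200 and the default graylog_* index names (adjust both to your deployment):

```
# Show the stored mapping for the "timestamp" field across the Graylog indices.
# If the active write index reports "type": "keyword" instead of "date",
# properly-typed timestamps will be rejected until that index is rotated.
curl -s -X GET "http://localhost:9200/graylog_*/_mapping/field/timestamp?pretty"
```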

So how did this happen? Usually it's because, as you started pulling in data, the fields were off or the timestamp wasn't explicitly defined as numeric at first, so Elasticsearch found/guessed keyword and is sticking to it until the index is rotated. Maybe your input wasn't GELF initially, or maybe…

What can you do?

  1. Rotate the index by going to System/Indices, clicking on the index set that should be receiving the GELF messages, clicking on "Maintenance" in the upper right, and choosing "Rotate active write index". This makes Graylog tell Elasticsearch to close the current index and start a new one, at which point Elasticsearch will re-evaluate each field that comes in and "timestamp" should be typed correctly. (Be aware of what your rotation/retention strategy is when you do this.)

  2. You could create a custom mapping in Elasticsearch… you shouldn't need to do this, though; by default the timestamp should be stored correctly, GELF to GELF, etc. I did a post a while back about custom mappings and correcting historical information… here, though, if the message never gets stored, there is no historical information to correct. Still, there are curl commands in there to look at what is being stored in Elasticsearch, so you can modify them with your index names and the timestamp field to get more detail on what is going on (see the sketch after this list).

  3. You didn't mention anything about this, but it is possible that an extractor or pipeline rule is setting up the field incorrectly.
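
For the custom-mapping route in option 2, the usual mechanism is an index template that Elasticsearch applies to every new graylog_* index. A rough sketch only, assuming Elasticsearch 6.x (where Graylog uses the "message" mapping type) and the default index prefix; check the docs for the exact shape on your version:

```
# Legacy index template (ES 6.x style) matching new "graylog_*" indices.
# The date format below is how Graylog stores timestamps by default.
curl -s -X PUT "http://localhost:9200/_template/graylog-custom-mapping?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
        "template": "graylog_*",
        "mappings": {
          "message": {
            "properties": {
              "timestamp": {
                "type": "date",
                "format": "yyyy-MM-dd HH:mm:ss.SSS"
              }
            }
          }
        }
      }'
```

Note that a template only affects indices created after it is installed, so you would still rotate (option 1) for it to kick in.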

GELF is nothing new to Graylog, and neither are timestamps… it's just a matter of tracking down where the processing isn't meeting expectations… don't lose the faith! :smiley:
