I am sending GELF logs from my app to Graylog over TCP. If my GELF contains no additional fields, then the logs appear successfully in Graylog. But as soon as I add any additional fields (the ones with an underscore prefix), my logs do not appear in Graylog.
My question is: do I need to set up an additional field before sending a log that contains it? My understanding is that Elasticsearch and Graylog do this automatically when a new additional field is encountered.
For the time being, I am putting my additional fields into a JSON object that I set as the full_message instead.
If I send _some_info as an additional field, it works. However, that's probably because _some_info is already present in Graylog as a field. But when I try to send a new field, for example _some_new_field_with_a_name_that_does_not_already_exist, it does not work. I know there is no type conflict occurring, because this field does not exist in the index yet.
This is what I am trying to send via TCP:
"short_message": "this is a short message",
I can confirm that these additional fields do not currently exist in my Graylog. Perhaps I don't have permission to create new fields? I don't actually administer Graylog; that's done by someone else, and they have not allowed me to create an extractor, so perhaps that's the issue?
Graylog automatically expands fields from GELF, so it's not necessary to create extractors for additional fields. It's also not a permissions problem; GELF messages are parsed automatically regardless of your permissions. I think the problem is probably a field with the wrong type, maybe level. Some applications use level as a string (e.g. information, warning), but GELF requires a numeric value.
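If the client library is emitting string levels, mapping them to the numeric syslog severities before sending is a quick fix. A sketch (the textual names are just examples of what an app might produce; the numeric codes are the standard syslog severities):

```python
# Standard syslog severity codes, which the GELF "level" field expects.
SYSLOG_LEVELS = {
    "emergency": 0, "alert": 1, "critical": 2, "error": 3,
    "warning": 4, "notice": 5, "information": 6, "informational": 6,
    "debug": 7,
}

def to_gelf_level(name, default=6):
    """Map a textual level like 'warning' to its numeric syslog code."""
    return SYSLOG_LEVELS.get(name.lower(), default)
```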
Anyway, the best way to debug this is to check the Graylog server logs; if it can't parse your GELF message, there should be an error message there. So ask your Graylog admin to show you the logs.
Good idea, I will check the logs.