Fields with different values after extraction

1. Describe your incident:

Hello forum,

Has anyone run into this issue?
I’m ingesting Windows logs via GELF/UDP with NXLog.

But some fields look strange.
Notice that the “Direction” and “LayerName” fields are different after the GELF extraction. In “full_message” it is possible to verify the correct values of these fields:

Any help is welcome.

2. Describe your environment:

  • OS Information:

Implementation over “Docker Compose” in Ubuntu 22.04

  • Package Version:

Graylog 5.1.0-rc.1+9ad90f6 on df43d9e2a0af (Eclipse Adoptium 17.0.7 on Linux 5.15.0-25-generic)

  • Service logs, configurations, and environment variables:
snoc-grl-elastic  | {"type": "server", "timestamp": "2023-05-10T18:44:52,740-03:00", "level": "INFO", "component": "o.e.a.b.TransportShardBulkAction", "cluster.name": "docker-cluster", "node.name": "9dd982e9c3b9", "message": "[graylog_8][1] mapping update rejected by primary", "cluster.uuid": "VE7hoPqLSN-VvgvG57e2Nw", "node.id": "xTYj57i4TvG6nNvh1LSrtw" ,
snoc-grl-elastic  | "stacktrace": ["java.lang.IllegalArgumentException: Limit of total fields [1000] has been exceeded",
snoc-grl-elastic  | "at org.elasticsearch.index.mapper.MappingLookup.checkFieldLimit(MappingLookup.java:170) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.index.mapper.MappingLookup.checkLimits(MappingLookup.java:162) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.index.mapper.DocumentMapper.validate(DocumentMapper.java:297) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:476) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:421) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:361) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:292) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.action.bulk.TransportShardBulkAction$2.doRun(TransportShardBulkAction.java:175) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:220) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:126) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:85) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.action.support.replication.TransportWriteAction$1.doRun(TransportWriteAction.java:179) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:743) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic  | "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
snoc-grl-elastic  | "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
snoc-grl-elastic  | "at java.lang.Thread.run(Thread.java:832) [?:?]"] }
snoc-grl-elastic  | [… the identical "Limit of total fields [1000] has been exceeded" stack trace repeats at 18:45:44 for shards [graylog_8][3] and [graylog_8][1] …]
snoc-grl-db       | {"t":{"$date":"2023-05-10T19:32:35.736-03:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1683757955:736146][1:0x7f570dd8d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 48014, snapshot max: 48014 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 6219"}}

snoc-grl-db       | [… similar WiredTiger checkpoint messages repeat every minute …]

3. What steps have you already taken to try and solve the problem?

Just a basic review of the NXLog settings, described below.
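For reference, the GELF part of an NXLog configuration typically looks like the sketch below (host, port, and instance names are placeholders, not my exact config; im_msvistalog is the standard Windows Event Log input):

<Extension gelf>
    Module      xm_gelf
</Extension>

<Input eventlog>
    Module      im_msvistalog
</Input>

<Output graylog>
    Module      om_udp
    Host        graylog.example.org
    Port        12201
    OutputType  GELF
</Output>

<Route r>
    Path        eventlog => graylog
</Route>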

4. How can the community help?

The community has already helped me a lot; this particular problem is more of an everyday obstacle.
I hope someone else has gone through this, resolved it, and can help me with a tip.

I also find the Elasticsearch errors strange, such as:

"stacktrace": ["java.lang.IllegalArgumentException: Limit of total fields [1000] has been exceeded",

This is also an unknown problem for me… I only see it happen with GELF-ingested Windows event logs.

Could both problems be in the NXLog agent?
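From what I have read, that field limit is a dynamic index setting, so as a stopgap it can be raised on the live index. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 (adjust for the Compose network) and using the index name from the log above:

curl -X PUT "http://localhost:9200/graylog_8/_settings" \
  -H "Content-Type: application/json" \
  -d '{ "index.mapping.total_fields.limit": 2000 }'

Note that newly rotated indices fall back to the template default, so the longer-term fix is to reduce the number of unique fields being generated (or use a custom index template).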

Those %%-prefixed values come from the XML event data. See, for example, “5156(S): The Windows Filtering Platform has permitted a connection. (Windows 10)” on Microsoft Learn.

Not sure how they map to the plain text, but it’s not a decoding problem in Graylog.


Thank you, @patrickmann, for your reply.
Note that I’m using Graylog 5.1.0-rc.1+9ad90f6 in my lab.
Maybe this could be the problem.

I think those %%xxxxx values are resource IDs from a Windows DLL. There are a number of discussions about this if you do a search. Unfortunately, there doesn’t seem to be any other source for decoding those values, so I think you will need to deal with them yourself, e.g. translate them in a pipeline rule.
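A minimal sketch of such a rule, assuming a lookup table named "wfp_ids" that you would create first (e.g. backed by a CSV file data adapter mapping %%14592 to Inbound and %%14593 to Outbound):

rule "translate WFP direction IDs"
when
    has_field("Direction")
then
    // look up the %%-prefixed resource ID in the "wfp_ids" table,
    // falling back to the original value when there is no match
    let translated = lookup_value("wfp_ids", to_string($message.Direction), to_string($message.Direction));
    set_field("Direction", to_string(translated));
end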


Understood.
@patrickmann, thank you for clarifying this doubt of mine.
I’ll try to work with pipelines for this.

Hello friend,
See if you can point me in the right direction.
That is, I still have difficulties with pipelines, but I’m trying to learn.
For the question above, I discovered that the fields in question have only two possible values:
“Inbound” and “Outbound”
Looking at what others have already done, I tried to write the rule below:

rule "correct the Direction field"
when
    // check that the field exists to avoid wasting work
    has_field("Direction")
then
    // translate the %%-prefixed resource IDs to their text values
    let dir = to_string($message.Direction);
    let inbound_fixed = replace(dir, "%%14592", "Inbound");
    let fixed = replace(inbound_fixed, "%%14593", "Outbound");
    // update the value in the "Direction" field
    set_field("Direction", fixed);
end

But I don’t know if it actually works. Can you help me?
I could use a CSV lookup table, but I think that would be “killing an ant with a bazooka”.
Thanks.

A CSV lookup table would be the “official” way to do this, but I would just write two rules and put them both in the same stage of the pipeline. Each rule matches when $message.Direction == “%%blah” and then calls set_field with the corresponding value (“Inbound” or “Outbound”).

Just make sure to set the pipeline stage to match “any rule”.
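A minimal sketch of those two rules, using the resource IDs mentioned earlier in the thread:

rule "direction inbound"
when
    has_field("Direction") && to_string($message.Direction) == "%%14592"
then
    set_field("Direction", "Inbound");
end

rule "direction outbound"
when
    has_field("Direction") && to_string($message.Direction) == "%%14593"
then
    set_field("Direction", "Outbound");
end

Since a message can only match one of the two conditions, both rules can safely live in the same stage.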

