1. Describe your incident:
Hello forum,
Has anyone run into this issue?
I’m ingesting Windows logs via GELF/UDP with NXLog, but some fields arrive with strange values.
Notice that the “Direction” and “LayerName” fields are different after the GELF extraction; in “full_message” the correct values of these fields are visible.
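To illustrate the symptom, this is roughly the shape of an affected message (a sketch with made-up values, not copied from my data; in GELF, additional fields are sent with a leading underscore, which Graylog strips on extraction):

```json
{
  "version": "1.1",
  "host": "winhost01",
  "short_message": "Windows Filtering Platform permitted a connection",
  "full_message": "... Direction: Inbound ... Layer Name: Receive/Accept ...",
  "_Direction": "%%14592",
  "_LayerName": "%%14610"
}
```

In this sketch the underscore fields carry the raw event values, while “full_message” shows the rendered text.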
Any help is welcome.
2. Describe your environment:
- OS Information:
Deployed with Docker Compose on Ubuntu 22.04
- Package Version:
Graylog 5.1.0-rc.1+9ad90f6 on df43d9e2a0af (Eclipse Adoptium 17.0.7 on Linux 5.15.0-25-generic)
- Service logs, configurations, and environment variables:
snoc-grl-elastic | {"type": "server", "timestamp": "2023-05-10T18:44:52,740-03:00", "level": "INFO", "component": "o.e.a.b.TransportShardBulkAction", "cluster.name": "docker-cluster", "node.name": "9dd982e9c3b9", "message": "[graylog_8][1] mapping update rejected by primary", "cluster.uuid": "VE7hoPqLSN-VvgvG57e2Nw", "node.id": "xTYj57i4TvG6nNvh1LSrtw" ,
snoc-grl-elastic | "stacktrace": ["java.lang.IllegalArgumentException: Limit of total fields [1000] has been exceeded",
snoc-grl-elastic | "at org.elasticsearch.index.mapper.MappingLookup.checkFieldLimit(MappingLookup.java:170) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.index.mapper.MappingLookup.checkLimits(MappingLookup.java:162) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.index.mapper.DocumentMapper.validate(DocumentMapper.java:297) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:476) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:421) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:361) ~[elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:292) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.action.bulk.TransportShardBulkAction$2.doRun(TransportShardBulkAction.java:175) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:220) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:126) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:85) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.action.support.replication.TransportWriteAction$1.doRun(TransportWriteAction.java:179) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:743) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]",
snoc-grl-elastic | "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
snoc-grl-elastic | "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
snoc-grl-elastic | "at java.lang.Thread.run(Thread.java:832) [?:?]"] }
snoc-grl-elastic | (the same “Limit of total fields [1000] has been exceeded” exception and stack trace repeat at 18:45:44,748 for [graylog_8][3] and at 18:45:44,757 for [graylog_8][1]; trimmed for brevity)
snoc-grl-db | {"t":{"$date":"2023-05-10T19:32:35.736-03:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1683757955:736146][1:0x7f570dd8d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 48014, snapshot max: 48014 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 6219"}}
snoc-grl-db | (the same WiredTiger checkpoint progress message repeats every minute through 19:41:35; trimmed for brevity)
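The recurring “Limit of total fields [1000] has been exceeded” errors above suggest the graylog_8 index mapping is full, presumably because every newly extracted field adds a mapping entry. For reference, the limit in effect and a rough count of mapped fields can be inspected with something like this (a sketch; localhost:9200 stands in for the Elasticsearch container, and the jq count is approximate):

```sh
# Show the total-fields limit in effect for the index named in the log.
curl -s 'http://localhost:9200/graylog_8/_settings?include_defaults=true&filter_path=*.settings.index.mapping,*.defaults.index.mapping'

# Rough count of mapped leaf fields (requires jq; counts mapping objects that declare a "type").
curl -s 'http://localhost:9200/graylog_8/_mapping' \
  | jq '[.. | objects | select(has("type"))] | length'
```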
3. What steps have you already taken to try and solve the problem?
Just a basic review of my NXLog settings, shown below.
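The relevant part of my nxlog.conf looks roughly like this (a sketch, not the exact file; the host and port are placeholders):

```
# Provides the GELF output types.
<Extension gelf>
    Module  xm_gelf
</Extension>

# Collect Windows Event Log entries.
<Input eventlog>
    Module  im_msvistalog
</Input>

# Ship them to Graylog as GELF over UDP.
<Output graylog>
    Module      om_udp
    Host        graylog.example.org
    Port        12201
    OutputType  GELF_UDP
</Output>

<Route eventlog_to_graylog>
    Path  eventlog => graylog
</Route>
```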
4. How can the community help?
The community has already helped me a lot; this problem is more of an everyday obstacle.
I hope someone else has run into it, resolved it, and can share a tip.