IPFIX JSON template question

Hello,
I’m trying to use an IPFIX input with my Stormshield UTM. It has several proprietary fields that, according to the docs, need to be specified in a custom .json file. Here’s what those fields look like in Wireshark:

(Wireshark screenshot of the proprietary fields omitted)

So what should the resulting JSON look like? I based mine on what I found on this forum and in the docs, and came up with something like this:

 {
         "enterprise_number": 11256,
         "information_elements": [
                 {
                         "element_id": 1,
                         "name": "stormshield_1",
                         "data_type": "ipv4Address"
                 },
                 {
                         "element_id": 3,
                         "name": "stormshield_3",
                         "data_type": "ipv4Address"
                 },
                 {
                         "element_id": 4,
                         "name": "stormshield_4",
                         "data_type": "unsigned8"
                 },
                 {
                         "element_id": 5,
                         "name": "stormshield_5",
                         "data_type": "string"
                 }
         ]
 }

I still get errors in the logs. And anyway, I don’t have a clue what those fields are supposed to represent; Stormshield doesn’t provide any documentation about them.

Alternatively, is there any way to ignore those messages, or to mute the errors that result from failed message decoding? They flood my logs and I’m not comfortable with that. ;-)

EDIT: I tried this:

{
        "enterprise_number": 11256,
        "information_elements": [
                {
                        "element_id": 1,
                        "name": "stormshield_1",
                        "data_type": "unsigned32"
                },
                {
                        "element_id": 3,
                        "name": "stormshield_3",
                        "data_type": "unsigned32"
                },
                {
                        "element_id": 4,
                        "name": "stormshield_4",
                        "data_type": "unsigned8"
                },
                {
                        "element_id": 5,
                        "name": "stormshield_5",
                        "data_type": "octetArray"
                }
        ]
}

It still doesn’t work.

Graylog doesn’t use all the field types available in Elasticsearch; it boils them down to keyword, long, and date (and maybe one or two more special cases). You can see some of that indirectly in the schema standardization they have published - here.

So if you keep your custom index JSON to keyword and long for the fields you want to enforce a type on, it will be happier.

For the errors in the log, it would help to see an example message, how it is currently handled, and the error message itself. I’m not familiar with IPFIX or Stormshield, but maybe I can spot something. :smiley:
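
On your question about muting the errors in the meantime: Graylog does its logging through log4j2, so a blunt workaround would be to raise the log level for the decoding processor class. This is only a sketch, assuming the stock log4j2.xml layout; the class name is the one that shows up in the decode-failure log lines:

```xml
<!-- Inside the <Loggers> section of Graylog's log4j2.xml (assumed stock layout).
     Raising the level to FATAL hides decode-failure ERRORs from ALL inputs,
     not just the IPFIX one, so use with care. -->
<Logger name="org.graylog2.shared.buffers.processors.DecodingProcessor" level="FATAL"/>
```

It silences legitimate decode problems on every other input too, so I’d treat it as a stopgap rather than a fix.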

My post isn’t about Elasticsearch field types or a custom index template; it’s about the custom IPFIX field definition JSON described in the Graylog docs here. That’s a completely different thing.

The errors in my logs look like this, though:

2022-06-24 13:48:46,533 ERROR: org.graylog2.shared.buffers.processors.DecodingProcessor - Unable to decode raw message RawMessage{id=5e722040-f3c4-11ec-8541-0242ac120005, messageQueueId=642769718, codec=ipfix, payloadSize=388, timestamp=2022-06-24T13:48:46.532Z, remoteAddress=/192.168.0.254:5301} on input <6298a06fd0e7117ec888ad22>.
2022-06-24 13:48:46,533 ERROR: org.graylog2.shared.buffers.processors.DecodingProcessor - Error processing message RawMessage{id=5e722040-f3c4-11ec-8541-0242ac120005, messageQueueId=642769718, codec=ipfix, payloadSize=388, timestamp=2022-06-24T13:48:46.532Z, remoteAddress=/192.168.0.254:5301}
java.lang.IndexOutOfBoundsException: readerIndex(22) + length(8) exceeds writerIndex(24): UnpooledHeapByteBuf(ridx: 22, widx: 24, cap: 24/24)
at io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1442) ~[graylog.jar:?]
at io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1428) ~[graylog.jar:?]
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:895) ~[graylog.jar:?]
at org.graylog.integrations.ipfix.IpfixParser.parseDataSet(IpfixParser.java:364) ~[?:?]
at org.graylog.integrations.ipfix.codecs.IpfixCodec.lambda$decodeMessages$3(IpfixCodec.java:206) ~[?:?]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
at org.graylog.integrations.ipfix.codecs.IpfixCodec.decodeMessages(IpfixCodec.java:212) ~[?:?]
at org.graylog2.shared.buffers.processors.DecodingProcessor.processMessage(DecodingProcessor.java:154) ~[graylog.jar:?]
at org.graylog2.shared.buffers.processors.DecodingProcessor.onEvent(DecodingProcessor.java:94) [graylog.jar:?]
at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:95) [graylog.jar:?]
at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:49) [graylog.jar:?]
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:143) [graylog.jar:?]
at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66) [graylog.jar:?]
at java.lang.Thread.run(Thread.java:833) [?:?]

Ha! I’d never been in that part of the docs… yup, completely different! I poked around a bit on the readerIndex(22) + length(8) exceeds writerIndex(24) part of the error and saw a bunch of unhelpful Minecraft posts… also unrelated. :crazy_face: The only other thing I can think of is to rotate your index when you change the field types in your custom JSON, since each Elasticsearch index holds onto whatever field type it first sees.
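
Reading that exception literally: the data record was 24 bytes long (writerIndex(24)), the parser had already consumed 22 of them (readerIndex(22)), and your definition then made it try to read 8 more. Here’s a rough sanity check I’d run over a definition. This is purely a sketch, not Graylog’s actual parsing code; the fixed sizes are the RFC 7011 abstract data type encodings, and string/octetArray are skipped because their length comes from the exporter’s template, not the type:

```python
# Sketch only: fixed encoded sizes for some IPFIX abstract data types (RFC 7011).
# string/octetArray are variable-length (the template declares their size),
# so they are ignored here.
FIXED_SIZES = {
    "unsigned8": 1, "unsigned16": 2, "unsigned32": 4, "unsigned64": 8,
    "ipv4Address": 4, "ipv6Address": 16, "float32": 4, "float64": 8,
}

def fits_in_record(fields, record_len):
    """Walk (name, data_type) pairs and check the fixed-size fields
    fit inside a data record of record_len bytes."""
    offset = 0
    for name, data_type in fields:
        size = FIXED_SIZES.get(data_type)
        if size is None:
            continue  # variable-length type: size is template-defined
        if offset + size > record_len:
            return False, (f"{name}: readerIndex({offset}) + length({size}) "
                           f"exceeds writerIndex({record_len})")
        offset += size
    return True, None
```

Both of your posted definitions come to well under 24 fixed bytes, so my guess is the mismatch involves the lengths the exporter’s template declares for the variable-length fields, rather than the data types alone.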