Beats Input - bytes can be at most 32766 in length

(Paul) #1

Hi Guys

I’ve deployed Graylog as a syslog solution. Currently I’m using Sidecar to collect Windows event logs only.

It had been running for a week and I started loading some more hosts … then Pooooooof, Graylog fell over. Initially I was clueless as to what was going on.

After a bit of digging, I found the dreaded elasticsearch error which seems to be quite common ( bytes can be at most 32766 in length)

I have found a few articles where people say to update the analyser; others mention setting the index to not_analyzed or “index: no”. Another post mentioned setting ignore_above => 256.

Thing is … I have no clue where to even try setting these things. Can anybody shed some light please?

I have managed to find the actual message that is too large on the originating server which is causing the failure. It turns out to be an HP WBEM Dump Event (ID 1001).

If anyone knows how I can prevent this from happening, or define some sort of “exclude” for this message that would be a great help.

Perhaps I could instruct the Sidecar collector to ignore this message? Is that possible? Does anyone know?

PS - I have tried this with Graylog 2.1 and just tried with 2.2 as well. Both do the same thing…

Appreciate your help guys :slight_smile:



(Paul) #2

To add a little more …

When I saw the original crash (v2.1), I was all over the server trying to find the reason etc. Rebooting or restarting services does bring it back to life.

I have a test Graylog deployment (v2.2) and was able to replicate the issue on that. As this is a test machine, I just left it running. An hour later I went back to it and it had come back to life. What’s better is that it did log the messages from the host which was sending the HUGE PARCELS, and it is still logging.

I guess the issue is not as bad as I initially thought … to me it seemed like everything had died a death, but it does appear to be able to recover itself, which is awesome!


(Jason Haar) #3

Sounds to me like Graylog itself should never push data into ES that can crash it, i.e. both the input program (Sidecar) and the middleware (Graylog) have a “bug” in that they are not compliant with ES limitations? The simplest solution would be to have Graylog reject any input that is >32K.

(Jochen) #4

See the related issue on GitHub.

tl;dr: Lucene (the search library used in Elasticsearch) only supports indexing terms with a maximum size of 32766 bytes (roughly 32 KB).
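To illustrate why this bites unexpectedly: the limit is counted in bytes, not characters, so multi-byte UTF-8 text hits it well before 32766 characters. A small Python sketch (purely illustrative, not part of Graylog) of trimming a string to a maximum UTF-8 byte length without splitting a multi-byte character:

```python
def truncate_utf8(text, max_bytes=32766):
    """Trim text so its UTF-8 encoding fits in max_bytes,
    without cutting a multi-byte character in half."""
    encoded = text.encode("utf-8")
    if len(encoded) <= max_bytes:
        return text
    # Decode the byte-truncated prefix, dropping any trailing partial character.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

# The euro sign is 3 bytes in UTF-8, so 20000 of them is 60000 bytes.
big = "\u20ac" * 20000
trimmed = truncate_utf8(big)
assert len(trimmed.encode("utf-8")) <= 32766
```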

There are multiple ways to work around that, such as disabling indexing for that field with a custom index mapping, or trimming the contents of the field to fit within the limit, either in the client or in Graylog using message processing pipelines.
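As a rough sketch of the custom-mapping route (untested; the template name, index pattern, and field name are example choices for a Graylog 2.x / Elasticsearch 2.x setup), an index template that stores full_message but doesn’t index it could look something like:

```
PUT _template/graylog-custom-mapping
{
  "template": "graylog_*",
  "mappings": {
    "message": {
      "properties": {
        "full_message": {
          "type": "string",
          "index": "no"
        }
      }
    }
  }
}
```

Note that a field mapped this way is no longer searchable, which is why trimming in a pipeline is often the friendlier option.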

(Paul) #5

Appreciate your responses thanks guys!

I do agree that it shouldn’t allow the large message in the first place … and yes, I did find that issue on GitHub, but I’m still at a complete loss as to how to implement any of the suggested fixes.

I can post a copy of the crash exception if required, but it’s the same as the one linked above. Let me know if you need it :slight_smile:

At least the server heals itself, and as far as I can tell I haven’t lost any messages, bar the message bombs lol

(Mike Daoust) #6

How can trimming be done with the processing pipelines?

(Dustin Tennill) #7

I just hit this same issue while ingesting ticket data from our issue tracking system. I was thinking about setting the field in question to not be indexed, but then writing a bit of code on the sending side to extract distinct “terms” from the huge text field and store them in a different field, so the ticket data is actually searchable. It would be awesome if this could be done from the Graylog side as the message passes through the pipeline system.
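Purely as an illustration of that sender-side idea (the function name and the word-token regex are my own choices, not anything from Graylog), extracting distinct terms from an oversized field might look like:

```python
import re

def distinct_terms(text, max_terms=1000):
    """Return de-duplicated lowercase word tokens from text, in first-seen order."""
    seen = set()
    terms = []
    for token in re.findall(r"[A-Za-z0-9_]+", text.lower()):
        if token not in seen:
            seen.add(token)
            terms.append(token)
            if len(terms) >= max_terms:
                break
    return terms

# The distinct terms of a huge dump are usually tiny compared to the raw text.
print(distinct_terms("Error ERROR error: disk disk full"))  # → ['error', 'disk', 'full']
```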

Not sure how often this affects other Graylog users, but as a proponent of “log everything” this will probably come up again for us.


(Jochen) #8

You can use the substring() function for this, although restricting the size of fields in the client sending the log messages would be preferable.
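A minimal pipeline rule along those lines might look like the sketch below (untested; the rule name, the full_message field, and the 30000-character cutoff are example choices — note that substring() counts characters, not bytes, so cutting well below 32766 leaves headroom for multi-byte UTF-8 characters):

```
rule "truncate oversized full_message"
when
  has_field("full_message")
then
  // Keep only the first 30000 characters of the field.
  set_field("full_message", substring(to_string($message.full_message), 0, 30000));
end
```

The rule would need to be attached to a stage of a pipeline that is connected to the stream carrying these messages.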

(Dustin Tennill) #9

Thanks !!

I ended up doing this in logstash before sending the data in, but I will switch to this method.


(Paul) #10

Hi Jochen. Are there any instructions on how or where to set it at the client? We only have a handful of Windows hosts that produce the error, and they all use the Sidecar collector. Is it something to set in the YAML? It would be great if I could just stop it from happening, but I’m not sure what needs doing :slight_smile: Thanks

(Jochen) #11

This depends on your clients and how you’re shipping logs to Graylog.

(Paul) #12

Any clues?

We collect Windows Event Logs using the Windows Collector Sidecar. These clients are sending their logs to a Beats input configured on Graylog Server side.

(Jochen) #13

Winlogbeat and Filebeat currently don’t support limiting the event/field size, so you’ll need to cut the relevant field in Graylog:
Beats Input - bytes can be at most 32766 in length

(Paul) #14

This is like pulling teeth lol

Does that mean I need to make a pipeline? If so, is there an example pipeline I could look at?

(Jochen) #15


There’s no copy & paste ready snippet for this, no. The idea is that you’ll get proficient enough to write your own pipeline rules (and maybe share them with the community).

If you need support in setting up Graylog in your enterprise, consider buying professional support.

(Paul) #16

Awesome! At least I have direction now … before, I admit, I was clueless where I should be looking :slight_smile: Thanks Jochen.

If anyone has an example pipeline I could look at to try to figure out how it works, that would be a great help. Thanks!

(John Buchanan) #17

Could I use substring() to match when the length of the full_message field is greater than 32766, and then set_field to record the size? I’m seeing indexing failures (“immense term in field full_message”) and want to track down the source of these messages.

(Jan Doberstein) #18

We have this little plugin that can help to track the source of big messages.