Indexing stops completely when "minor" error occurs

Ok, so I've been getting these indexing errors lately:

java.lang.IllegalArgumentException: Document contains at least one immense term in field="trace" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[42, 42, 32, 80, 111, 108, 105, 99, 121, 68, 101, 116, 97, 105, 108, 32, 99, 111, 110, 118, 101, 114, 116, 101, 100, 32, 116, 111, 32, 58]...', original message: bytes can be at most 32766 in length; got 65539

First, how can I find which input that 'trace' field is coming from? I want to avoid this issue, but I don't recall which source sends messages with a 'trace' field or where it enters.

Second, once that indexing error occurs, indexing halts altogether. The process buffer fills to 100% and the journal keeps growing. So how do I clear this so that indexing continues?

EDIT: the indexing doesn't completely halt, but it's very slow after the error. Some logs only get processed 2-3 hours after they arrived.
EDIT2: basically this is the same problem described in "failed to execute bulk item. No new messages are accepted in beats input" (Graylog2/graylog2-server issue #4130 on GitHub), but I can't afford to flush the journal.

So if anyone else needs an answer to this same issue: I located the input by searching for logs containing a field labelled 'trace', which returned many entries, each containing the source input's name. I then simply modified the extractor so that that specific field isn't extracted anymore. Not ideal, but it stopped the issue, and the actual data is still in the full message field of the log.
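For anyone who'd rather do that lookup outside the search UI, here's a rough Python sketch against Graylog's legacy universal-search REST endpoint (`/api/search/universal/relative`, available in the 2.x/3.x series; newer versions may differ). The base URL and token below are placeholders, and `gl2_source_input` is the built-in field holding the ID of the input a message arrived on, so treat this as an illustration of the idea rather than a drop-in script:

```python
# Rough sketch: find which input is producing messages with a 'trace' field.
# Assumes Graylog's legacy universal-search endpoint and an API access token;
# adjust GRAYLOG_URL, the token, and the time range for your setup.
import requests

GRAYLOG_URL = "https://graylog.example.com:9000/api"  # placeholder base URL
API_TOKEN = "your-api-token-here"                     # placeholder credential

resp = requests.get(
    f"{GRAYLOG_URL}/search/universal/relative",
    params={
        "query": "_exists_:trace",           # only messages that have a 'trace' field
        "range": 86400,                      # look back one day (seconds)
        "limit": 50,
        "fields": "source,gl2_source_input", # gl2_source_input holds the input's ID
    },
    auth=(API_TOKEN, "token"),               # Graylog API tokens use "token" as the password
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# Print the distinct source / input-ID pairs so the offending input stands out.
seen = set()
for msg in resp.json().get("messages", []):
    fields = msg.get("message", {})
    pair = (fields.get("source"), fields.get("gl2_source_input"))
    if pair not in seen:
        seen.add(pair)
        print(pair)
```

The value in `gl2_source_input` is the input's ID; match it against System > Inputs in the web UI to get the human-readable input name. An alternative would be a processing-pipeline rule that drops or truncates the field before it reaches the index, but changing the extractor was enough in this case.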
