Elasticsearch custom index mapping

What I meant was… it looks like you said you were running a few regex filters on the Input where these large files are coming in. If the regex gets hung up trying to find something, which is particularly easy on a large message, you are likely to lock up the processing buffers. I posted something here a while back for someone having similar issues (Pipeline processing appears to get stuck at times - #3 by tmacgbay). They moved some things around and it seemed to solve their issue. If you are currently running regex (or anything else) via extractor or pipeline against things coming in on that Input, can you post it?
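To illustrate what I mean by "hung up" (this is a hypothetical rule, not your config): a pattern with nested quantifiers can backtrack catastrophically on a long non-matching message, while an anchored pattern with a bounded repeat gives the engine a hard ceiling and fails fast.

```
rule "example: fail-fast regex check"
when
    has_field("message")
then
    // A nested-quantifier pattern like "(.*)*FAIL" can backtrack
    // catastrophically on a multi-megabyte message and stall the buffer.
    // An anchored pattern with a bounded repeat fails fast instead:
    let m = regex("^.{0,32000}FAIL", to_string($message.message));
    set_field("early_fail_marker", m.matches);
end
```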

My completely separate suggestion to handle/truncate the large file in the pipeline with the substring() command… won’t help if you have an extractor on the Input that is using regex/GROK and causing the problem. I would think you could construct a regex that captures the first 32,000 characters or so (something like: ^.{1,32000}) and then use the result as your message (rough sketch below) - I’m not sure what is actually going on as a large message traverses to the locking point though… so maybe post up what you have? :person_shrugging:
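To make that concrete, here is a minimal sketch of the truncation rule (assuming substring() clamps an end offset that runs past the string length rather than erroring - test on a short message first):

```
rule "truncate oversized message"
when
    has_field("message")
then
    let msg = to_string($message.message);
    // Keep roughly the first 32,000 characters and drop the rest.
    // If the message is shorter, the end offset should just clamp.
    set_field("message", substring(msg, 0, 32000));
end
```

The regex version would be the same idea - capture `^(.{1,32000})` and set the field from `m["0"]` - just keep in mind that `.` won’t cross newlines unless you turn on DOTALL with a `(?s)` prefix.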