Bulk Request Sizing

Hi, I searched the forum and googled a lot, but I couldn't find any information on how Graylog chooses the size of its bulk requests. Graylog breaks down every now and then because the bulk requests exceed the heap size of Elasticsearch. I increased the heap size, and a few days later it collapsed again. I don't think increasing the heap size over and over is the right move; I would rather tell Graylog to reduce the size of its bulk requests. But how can I set this option, and how is the size determined?
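For reference, the closest knobs I've found so far are in Graylog's server.conf, but as far as I can tell they limit the number of messages per batch, not the byte size (values below are illustrative, defaults may differ per version):

output_batch_size = 500
output_flush_interval = 1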

This is the error that gets spammed when it's happening:

{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<transport_request>] would be [5976062691/5.5gb], which is larger than the limit of [5957995724/5.5gb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=445085/434.6kb, accounting=5975617606/5.5gb]","bytes_wanted":5976062691,"bytes_limit":5957995724}

Also, a related error is appearing on the search page (screenshot omitted).


Hey guys,

any ideas? We're currently running into a similar issue with bulk requests…

What would you recommend?

Cheers,
Theresa

You can solve the problem by giving Elasticsearch more heap, either by adding new nodes or by raising the Java heap on the existing Elasticsearch nodes.
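For example, on a package-based install the heap is set in /etc/elasticsearch/jvm.options; a minimal sketch with an illustrative size (keep -Xms and -Xmx at the same value):

-Xms16g
-Xmx16g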

This Java heap needs to actually be available, and it should be no more than 50% of the machine's RAM (and no more than ~31 GB per node, so the JVM can keep using compressed object pointers). The underlying problem is that the node can't hold the data you have requested in RAM.
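If you want to verify that a given heap size still benefits from compressed object pointers, one quick check (assuming the same JVM that runs Elasticsearch is on your PATH) is:

java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

Elasticsearch also reports this at startup in its log, as "compressed ordinary object pointers [true]".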
