More than 80k index failures

Hi guys,

I’m seeing a large number of index failures like:

{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [1837477][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[firepower_555][1]] containing [259] requests, target allocation id: 3GXnrmVsTh-1SyvC7u2fxQ, primary term: 1 on EsThreadPoolExecutor[name = data-node-5/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@3a422700[Running, pool size = 10, active threads = 10, queued tasks = 200, completed tasks = 832379]]"}
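The numbers buried in that message tell the whole story: pool size 10, all 10 write threads active, and the 200-slot queue full. A quick hypothetical sketch that pulls those figures out of the error string (the regex and field names are assumptions based on this one log line, not a stable, documented Elasticsearch format):

```python
import re

def parse_rejection(message: str) -> dict:
    """Extract thread-pool figures from an es_rejected_execution_exception
    message. Matches the message shape shown above only."""
    fields = {
        "queue_capacity": r"queue capacity = (\d+)",
        "pool_size": r"pool size = (\d+)",
        "active_threads": r"active threads = (\d+)",
        "queued_tasks": r"queued tasks = (\d+)",
    }
    return {name: int(m.group(1))
            for name, pat in fields.items()
            if (m := re.search(pat, message))}

msg = ("rejected execution ... on EsThreadPoolExecutor[name = data-node-5/write, "
       "queue capacity = 200, ...[Running, pool size = 10, active threads = 10, "
       "queued tasks = 200, completed tasks = 832379]]")

stats = parse_rejection(msg)
# All 10 write threads busy and all 200 queue slots taken ->
# any further bulk request is rejected on arrival.
```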


  1. Do I lose those logs, or do they still get written to Elasticsearch?
  2. Any idea what I can do to prevent this from happening?


  1. Yes, you lose those messages (IMHO), because the request is rejected.
  2. It looks like your Elasticsearch cluster has too much work (no write threads left, and the queue is full).
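For what it's worth, this rejection maps to an HTTP 429 on the bulk API, so the usual client-side mitigation is retry with exponential backoff (many shippers do this for you). A minimal sketch of the idea, with a stubbed send function standing in for a real bulk call; the function names and the flaky stub are mine, not an Elasticsearch API:

```python
import time

def bulk_with_backoff(send, payload, max_retries=5, base_delay=0.5):
    """Retry a bulk request while the cluster answers 429 (queue full).
    `send` is any callable returning an HTTP-like status code."""
    for attempt in range(max_retries + 1):
        status = send(payload)
        if status != 429:
            return status
        time.sleep(base_delay * (2 ** attempt))  # back off: 0.5s, 1s, 2s, ...
    raise RuntimeError("bulk request still rejected after retries")

# Stub: reject the first two attempts, then accept, to simulate a
# momentarily full write queue.
attempts = []
def flaky_send(payload):
    attempts.append(payload)
    return 429 if len(attempts) <= 2 else 200

status = bulk_with_backoff(flaky_send, {"docs": 259}, base_delay=0.01)
# succeeds on the third attempt instead of dropping the batch
```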

How do I make room for more threads in Elasticsearch?

Add more CPUs to Elasticsearch …

(just a quick search revealed this)

I have 1.5 /data node with 10 CPU cores, and they are barely doing anything; at most they reach 20-30%…
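Low average CPU actually fits the picture: the write pool is sized from the core count, and anything arriving beyond the active threads plus queue slots is rejected immediately, no matter how idle the node looks between bursts. A rough back-of-envelope sketch using the numbers from the error above (the burst size is made up for illustration):

```python
def rejected_in_burst(burst, pool_size=10, queue_capacity=200):
    """How many bulk requests a sudden burst loses to rejection:
    everything beyond (busy threads + queue slots) is turned away,
    regardless of the node's average CPU usage."""
    in_flight_capacity = pool_size + queue_capacity
    return max(0, burst - in_flight_capacity)

# A hypothetical burst of 1000 near-simultaneous bulk requests
# against a 10-thread pool with a 200-slot queue:
lost = rejected_in_burst(1000)  # 1000 - (10 + 200) = 790 rejected
```

So smoothing the bursts on the sender side (smaller/slower bulks, backoff) can matter more than raw CPU headroom.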

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.