I have the following error on Graylog 4 on Ubuntu 20:
There are Graylog nodes on which the garbage collector runs too long. Garbage collection runs should be as short as possible. Please check whether those nodes are healthy. (Node: 7809e611-f490-4f51-b594-3381b3928d59 , GC duration: 1233 ms , GC threshold: 1000 ms )
What does it mean, and what should I do?
Hello,
Check the memory usage of Elasticsearch; in my case the ES heap size (ES_HEAP_SIZE, or -Xms/-Xmx in jvm.options on newer versions) needed some manual fiddling.
output_batch_size = 25 — this can be something like 500.
Check max_file_descriptors for your OS; I don't know the default for Ubuntu 20.
EDIT: Perhaps also set the index refresh interval to 30 s.
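Taken together, the tweaks above might look something like this. This is a sketch, not a recipe — the paths assume a Debian/Ubuntu package install, the 4g heap value is just an example (size it to your machine), and Graylog may override ES index settings through its own index templates:

```shell
# Elasticsearch heap: on ES 7+ edit /etc/elasticsearch/jvm.options
# (older setups used the ES_HEAP_SIZE environment variable).
# Rule of thumb: about half of RAM, capped well below 32 GB.
#   -Xms4g
#   -Xmx4g

# Check the open-file limit the elasticsearch user actually gets:
sudo -u elasticsearch bash -c 'ulimit -n'

# Refresh interval at the ES level (Graylog's index templates may
# overwrite this on rotation — check your index set config too):
curl -X PUT 'localhost:9200/graylog_*/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"refresh_interval": "30s"}}'
```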
If the heap is full and the garbage collector is unable to free more memory, the application will crash with an out-of-memory error the next time it tries to allocate.
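Before that point, you can watch how close the heap is getting and how long collections take. A diagnostic sketch, assuming the JDK tools (`jstat`) are installed on the node and a single graylog-server process is running:

```shell
# Find the Graylog server JVM (assumes one graylog-server process).
GRAYLOG_PID=$(pgrep -f graylog-server)

# Print heap occupancy and cumulative GC time every 5 seconds.
# OU = old-gen utilisation %, FGC/FGCT = full-GC count/time, GCT = total GC time.
# An old gen that stays near 100% with climbing FGCT means the heap is too small.
jstat -gcutil "$GRAYLOG_PID" 5000
```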
I can't find
output_batch_size = 25 in my elasticsearch.yml or jvm.options.
Hello,
It's not in those files; you need to look in the Graylog configuration file.
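On a package install that file is typically /etc/graylog/server/server.conf; the setting is a Graylog option, not an Elasticsearch one. A fragment (500 is an example value, not a recommendation for every setup):

```shell
# /etc/graylog/server/server.conf  (Graylog's config, not elasticsearch.yml)
# Maximum number of messages Graylog sends to Elasticsearch per batch:
output_batch_size = 500
```

Restart graylog-server after changing it so the new value takes effect.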
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.