I had a problem that looked like an out-of-memory error on my Graylog master node. At the same time, the MongoDB instance on the same node crashed and the cluster more or less halted. Rebooting the master node brought the whole cluster back to life.
The log on the master node says:
2017-03-03T16:07:20.276+02:00 WARN [jvm] [es-graylog-client01] [gc][young][694129][22464] duration [1.2s], collections [1]/[6.4s], total [1.2s]/[7.4m], memory [7gb]->[520.9mb]/[11.4gb], all_pools {[young] [2gb]->[21.7mb]/[4.8gb]}{[survivor] [247.5mb]->[0b]/[614.3mb]}{[old] [4.7gb]->[499.2mb]/[6gb]}
2017-03-03T16:07:20.284+02:00 WARN [NodePingThread] Did not find meta info of this node. Re-registering.
2017-03-03T16:07:20.294+02:00 WARN [StreamFaultManager] Processing of stream <5874c510a6772a50fd97c7e4> failed to return within 2000ms.
2017-03-03T16:07:20.294+02:00 WARN [StreamFaultManager] Processing of stream <5874c510a6772a50fd97c7e4> failed to return within 2000ms.
The node has 16 GB of total memory, of which the JVM is configured to use 12 GB, and there is no swap. Does this mean I should allocate less memory to the JVM? How would that affect the performance of the cluster? What parameters other than the JVM heap size should I consider here?
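For what it's worth, this is roughly where I would shrink the heap; a minimal sketch assuming a standard package install where the heap is set via GRAYLOG_SERVER_JAVA_OPTS in /etc/default/graylog-server (the exact file path and the other flags shipped with the package may differ on your system):

# /etc/default/graylog-server -- illustrative values only
# Lower only -Xms/-Xmx (e.g. from 12g to 8g) and keep the remaining JVM flags
# as shipped, so that MongoDB, the OS page cache and other processes have
# headroom on the 16 GB box instead of competing with the Graylog heap.
GRAYLOG_SERVER_JAVA_OPTS="-Xms8g -Xmx8g"

After changing it, the graylog-server service would need a restart for the new heap size to take effect.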