Under normal conditions there are no such INFO events. But when I start 3 GELF collector agents to reach an input rate of around 15K messages per second, the GC interval becomes shorter and the GC overhead grows, with WARN output like this:
[2018-04-11T15:43:39,512][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15146] overhead, spent [1s] collecting in the last [1.8s]
[2018-04-11T15:43:47,887][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][young][15152][13246] duration [2.1s], collections [1]/[2.8s], total [2.1s]/[35m], memory [13.4gb]->[12.2gb]/[29.8gb], all_pools {[young] [1.2gb]->[1.3mb]/[1.3gb]}{[survivor] [153.9mb]->[166.3mb]/[166.3mb]}{[old] [12gb]->[12gb]/[28.3gb]}
[2018-04-11T15:43:47,887][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15152] overhead, spent [2.1s] collecting in the last [2.8s]
[2018-04-11T15:43:50,982][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15155] overhead, spent [716ms] collecting in the last [1s]
[2018-04-11T15:44:38,226][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15202] overhead, spent [518ms] collecting in the last [1s]
[2018-04-11T15:44:44,019][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][young][15207][13282] duration [1s], collections [1]/[1.7s], total [1s]/[35.1m], memory [15.6gb]->[14.9gb]/[29.8gb], all_pools {[young] [944.4mb]->[12.1mb]/[1.3gb]}{[survivor] [166.3mb]->[163.4mb]/[166.3mb]}{[old] [14.6gb]->[14.7gb]/[28.3gb]}
[2018-04-11T15:44:44,019][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15207] overhead, spent [1s] collecting in the last [1.7s]
[2018-04-11T15:44:51,914][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][young][15210][13283] duration [5.4s], collections [1]/[5.7s], total [5.4s]/[35.2m], memory [16gb]->[14.9gb]/[29.8gb], all_pools {[young] [1.1gb]->[8.6mb]/[1.3gb]}{[survivor] [163.4mb]->[166.3mb]/[166.3mb]}{[old] [14.7gb]->[14.8gb]/[28.3gb]}
[2018-04-11T15:44:51,915][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15210] overhead, spent [5.4s] collecting in the last [5.7s]
[2018-04-11T15:46:00,120][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15277] overhead, spent [528ms] collecting in the last [1s]
[2018-04-11T15:46:03,577][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15280] overhead, spent [660ms] collecting in the last [1.1s]
[2018-04-11T15:46:36,193][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15311] overhead, spent [500ms] collecting in the last [1s]
[2018-04-11T15:46:38,194][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15313] overhead, spent [543ms] collecting in the last [1s]
[2018-04-11T15:46:52,503][WARN ][o.e.m.j.JvmGcMonitorService] [es2.mylogs.com] [gc][15326] overhead, spent [601ms] collecting in the last [1.1s]
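To see how full the heap actually is while the collectors are running, the standard node stats APIs can be polled. This is only a monitoring sketch; the hostname is taken from the log above and port 9200 is assumed to be the default HTTP port:

# per-node heap usage, CPU and load (valid _cat/nodes columns)
curl -s 'http://es2.mylogs.com:9200/_cat/nodes?v&h=name,heap.percent,heap.max,cpu,load_1m'
# detailed JVM stats (heap pools, GC collection counts and times) for this node
curl -s 'http://es2.mylogs.com:9200/_nodes/es2.mylogs.com/stats/jvm?pretty'

If heap.percent stays pinned near the high watermark and the old-gen pool keeps growing between collections (as the [old] 12gb -> 14.8gb figures above suggest), the pressure is on memory rather than CPU.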
At the same time, I observed that the message journal grows very large after several hours of input at this rate, together with high CPU usage. I suspect that instead of shrinking the JVM heap size, I should add more CPU cores to the ES nodes, although each ES node already has 28 cores.
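For reference, if shrinking (or otherwise adjusting) the heap does turn out to be the right move, on Elasticsearch 5.x and later it is set in config/jvm.options on each node; the 16g value below is only an illustration, not a recommendation for this cluster:

# config/jvm.options -- set min and max heap to the same value so the heap is never resized at runtime
-Xms16g
-Xmx16g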