Graylog process buffer utilization high, too many unprocessed messages

Hi all,

I have been reading through some previous posts about the subject of this post, but none of them seems to solve my problem.
I have a Graylog appliance installation (all-in-one); my server specs are 8 vCores, 24 GB RAM, and a 280 GB HD.
Graylog 2.2.3+7adc951, codename Stiegl
JVM: Oracle Corporation 1.8.0_144 on Linux 3.19.0-25-generic
After we migrated the server to a new datacenter, performance has degraded and I don't really know the reason.
In the middle of the day there are many unprocessed messages. The only inputs are GELF TCP and Syslog UDP, and we have no outputs or pipelines.
The storage infrastructure is a black box to me; I don't even know the RAID level, the storage type (NFS, DAS, iSCSI, Fibre-attached), or the disk tier (SSD, SATA, SAS).
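From inside the VM I can at least see how the disks present themselves, although a virtual disk can hide the real backend. A couple of read-only checks (assuming the data volume is /dev/sdb, as in the df output further down):

root@graylog:~# lsblk
root@graylog:~# cat /sys/block/sdb/queue/rotational    # 1 = presented as rotational, 0 = presented as SSD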

I have already implemented some of the fine-tuning proposals: I raised the journal size to 4 GB, gave graylog-server 7 GB of heap and Elasticsearch 14 GB, raised processbuffer_processors to 4 and outputbuffer_processors to 2, and changed the ring size to 131072.
Below you can find the custom attributes; the server.conf keys they should map to are sketched right after the JSON:

{
  "timezone": "Europe/Athens",
  "smtp_server": "xxxxxxxxxxx",
  "smtp_port": 25,
  "smtp_user": "",
  "smtp_password": "",
  "smtp_from_email": "xxxxxxxxxxxxxxxxxxxxxx",
  "smtp_web_url": "http://graylog",
  "smtp_no_tls": true,
  "smtp_no_ssl": true,
  "master_node": "127.0.0.1",
  "local_connect": false,
  "current_address": "xxxxxxxxxxx",
  "last_address": "xxxxxxxxxxxxx",
  "enforce_ssl": true,
  "journal_size": 4,
  "node_id": false,
  "internal_logging": true,
  "web_listen_uri": false,
  "web_endpoint_uri": false,
  "rest_listen_uri": false,
  "rest_transport_uri": false,
  "external_rest_uri": false,
  "custom_attributes": {
    "graylog-server": {
      "memory": "7168m",
      "processbuffer_processors": "4",
      "outputbuffer_processors": "2",
      "ring_size": "131072",
      "inputbuffer_ring_size": "131072",
      "output_batch_size": "5000"
    },
    "elasticsearch": {
      "memory": "14336m"
    }
  }
}
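For reference, these custom attributes should end up as something like the following Graylog server.conf keys and JVM heap settings after a reconfigure. This is only a sketch of what I expect the appliance to generate, not copied from the box, and the exact files and variable names can differ:

# server.conf equivalents of the custom_attributes above
processbuffer_processors = 4
outputbuffer_processors = 2
ring_size = 131072
inputbuffer_ring_size = 131072
output_batch_size = 5000
message_journal_max_size = 4gb

# JVM heap ("memory" above), typically set via GRAYLOG_SERVER_JAVA_OPTS and ES_HEAP_SIZE
GRAYLOG_SERVER_JAVA_OPTS="-Xms7168m -Xmx7168m"
ES_HEAP_SIZE=14336m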

How can I solve this issue?
Thanks in advance!

root@graylog:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 12G 4.0K 12G 1% /dev
tmpfs 2.4G 708K 2.4G 1% /run
/dev/mapper/graylog--vg-root 15G 5.4G 8.8G 39% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 12G 0 12G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sdb1 296G 58G 223G 21% /var/opt/graylog/data

@kyriazisg Were there any changes to the Graylog or ES JVM heap values after the migration? Or were there any hardware configuration changes (like CPU and RAM) on the ES and Graylog server?

Dear Makarand,
First of all, thanks for answering.
I have raised both the Graylog and ES JVM heap sizes, to 7 GB and 14 GB respectively, after the problem showed up.
heap.current heap.percent heap.max
3.8gb 27 13.9gb
500mb 7 6.6gb
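That heap table is in the format of Elasticsearch's cat nodes API; a command like the following should produce it, assuming ES is listening on the default localhost:9200:

root@graylog:~# curl -s 'http://localhost:9200/_cat/nodes?v&h=heap.current,heap.percent,heap.max'
# v prints the header row, h selects only the heap columns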

I have opened a ticket with the datacenter's staff to give the VM more CPU capacity (it was 4 vCores and now has 8) as well as more RAM (it was 12 GB and now is 24 GB), but nothing changed.
At the time I'm writing this reply, Graylog has 25,000 unprocessed messages and it's 10:30 pm!
I must mention that system resources seem to be idle; the system has plenty of free resources.
I really don't know what else to try!
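One thing I can still check is whether the box is genuinely idle or stuck waiting on I/O. A rough sketch, assuming the sysstat package is (or can be) installed:

root@graylog:~# apt-get install -y sysstat
root@graylog:~# iostat -x 1 5      # watch %iowait plus the await/%util columns for sdb
root@graylog:~# vmstat 1 5         # the "wa" column is CPU time spent waiting on I/O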

Check also your disk subsystem, especially IOPS, using for example fio.
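For example, a quick random read/write benchmark against the Graylog data directory could look like this. Only a sketch: fio has to be installed first, and the job name, file size, and runtime are arbitrary values, not tuned recommendations:

root@graylog:~# apt-get install -y fio
root@graylog:~# fio --name=graylog-data-test --directory=/var/opt/graylog/data \
    --rw=randrw --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
    --size=1G --runtime=60 --time_based --group_reporting
# compare the reported read/write IOPS and latencies with the old datacenter if possible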
