I am very much a Graylog noob and have followed the instructions to the best of my ability. Graylog is working, it is just overwhelmed with logs. I am not sure what to set in the configs to make the most out of the server. What details should I post here in order to get help with the configs?
Just a cursory glance leads me to think that Elasticsearch doesn't have enough resources to chew through your messages. What do you have the heap set to in ES (/etc/default/elasticsearch on deb, /etc/sysconfig/elasticsearch on RHEL) and Graylog (/etc/default/graylog on deb, /etc/sysconfig/graylog on RHEL)?
```shell
# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65535

# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the "bootstrap.memory_lock: true" option
# in elasticsearch.yml.
# When using systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/elasticsearch.service.d/override.conf.
#MAX_LOCKED_MEMORY=unlimited

# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the "vm.max_map_count"
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144
```
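As the comments in that file note, under systemd the MAX_LOCKED_MEMORY line is ignored; if you enable `bootstrap.memory_lock: true` in elasticsearch.yml, the limit instead has to be raised via a unit override at the path the comment mentions. A minimal sketch of that override file:

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

After creating it, you'd run `systemctl daemon-reload` and restart Elasticsearch for the new limit to take effect.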
Ah! Ok. That makes sense. So, you presently have 16GB assigned to Graylog, but Elastic has nothing in ES_JAVA_OPTS, which, IIRC, means that it falls back to a default of 1GB. You can change that with the following:
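In /etc/sysconfig/elasticsearch (RHEL) or /etc/default/elasticsearch (deb), something along these lines — the exact size is illustrative:

```shell
# Give Elasticsearch an explicit heap; keep -Xms and -Xmx equal
# so the JVM doesn't resize the heap at runtime.
ES_JAVA_OPTS="-Xms4g -Xmx4g"
```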
Though since you've clearly got more RAM, I'd bump it up closer to something like 6GB for ES.
FWIW, you might consider tuning this a bit more finely and maybe end up giving 10G in heap each for Elastic and Graylog. Unless you're seeing that Graylog is having trouble keeping up with processing messages, in which case you might benefit more from scaling Graylog out than up.
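For that 10G-each split, the two heaps live in their respective environment files. A sketch, with sizes illustrative (depending on your package the Graylog file may be named graylog-server rather than graylog, and the Graylog variable usually already carries other JVM flags you should keep):

```shell
# /etc/sysconfig/elasticsearch (RHEL) or /etc/default/elasticsearch (deb)
ES_JAVA_OPTS="-Xms10g -Xmx10g"

# /etc/sysconfig/graylog-server (RHEL) or /etc/default/graylog-server (deb)
# Change only the -Xms/-Xmx values; leave the package's other flags in place.
GRAYLOG_SERVER_JAVA_OPTS="-Xms10g -Xmx10g"
```

Restart both services after editing so the new heap sizes are picked up.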