Elasticsearch dies for no apparent reason

Hi,

I’m using Graylog 2.4.6 with Elasticsearch 5.6.15 on a CentOS server with 4 GB RAM and 1 GB swap. I edited /etc/sysconfig/graylog-server to set Xmx2g, i.e. half the system RAM on a 4 GB machine. It ran fine for a while, but now every time I run a large search Elasticsearch seems to die. It doesn’t print anything when that happens; I can only see that it is no longer running when I check with systemd. I ran into this issue before I had any swap, and adding swap fixed it at the time, so I’m thinking Elasticsearch is running out of memory again now that there is more data to search. Any advice?

# systemctl status elasticsearch.service 
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: failed (Result: signal) since Mon 2019-02-25 13:33:33 UTC; 6min ago
     Docs: http://www.elastic.co
  Process: 8697 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=killed, signal=KILL)
  Process: 8695 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 8697 (code=killed, signal=KILL)

Feb 25 13:32:44 em1-san-log systemd[1]: Starting Elasticsearch...
Feb 25 13:32:44 em1-san-log systemd[1]: Started Elasticsearch.
Feb 25 13:33:33 em1-san-log systemd[1]: elasticsearch.service: main process exited, code=killed, status=9/KILL
Feb 25 13:33:33 em1-san-log systemd[1]: Unit elasticsearch.service entered failed state.
Feb 25 13:33:33 em1-san-log systemd[1]: elasticsearch.service failed.

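For reference, the heap setting I changed in /etc/sysconfig/graylog-server looks roughly like this (heap flags only; the rest of the shipped JVM options are left out here):

# /etc/sysconfig/graylog-server -- Graylog's JVM heap, raised from the default 1 GB
GRAYLOG_SERVER_JAVA_OPTS="-Xms1g -Xmx2g"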
Look at your Elasticsearch server logs …
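A signal=KILL with nothing in the Elasticsearch log usually points at the kernel OOM killer rather than a Java OutOfMemoryError. Something like the following should show what happened (assuming the default RPM paths; the log file is named after your cluster, "elasticsearch" by default):

# Elasticsearch's own log
less /var/log/elasticsearch/elasticsearch.log

# what systemd captured around the crash
journalctl -u elasticsearch.service --since "2019-02-25 13:30"

# kernel messages from the OOM killer, if it fired
dmesg | grep -iE "out of memory|killed process"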

But when you have a 4 GB RAM system, have allocated 2 GB to Graylog, and use the default 1 GB for Elasticsearch, you have hardly any RAM left for the operating system … if possible allocate more RAM to this server, or set Graylog to 1 GB and Elasticsearch to 2 GB, which would make more sense.
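Roughly where those heap sizes live on an RPM install (paths and values below are the usual package defaults, adjust for your setup):

# /etc/sysconfig/graylog-server -- keep Graylog at 1 GB (other shipped JVM flags unchanged)
GRAYLOG_SERVER_JAVA_OPTS="-Xms1g -Xmx1g"

# /etc/elasticsearch/jvm.options -- Elasticsearch 5.x reads its heap from here; keep min and max equal
-Xms2g
-Xmx2g

# then restart both services
systemctl restart elasticsearch.service graylog-server.service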

You may as well give up: 4 GB for Graylog and Elasticsearch? Good luck. If you have already allocated 2 GB to Graylog, that leaves at most 2 GB for Elasticsearch, and that’s not even taking OS memory into account; if you gave ES more than that, it’ll blow up real quick.

Get more RAM. It’s cheap.
