Heavy reads on Elasticsearch

Hi, I’m running Graylog 2.3.2 with Elasticsearch 5.6.3 on the same server (just one node) on an AWS EC2 m4.2xlarge instance. The server has a single 4 TB st1 EBS volume attached for Elasticsearch data. It receives logs from around 50 collector-sidecar agents, and there is a dashboard with 10 or more widgets (query results cached for between 300 seconds and 1 day).

Currently the server is being choked by Elasticsearch: it reads heavily at ~100 MB/s and nothing can be done in Graylog’s web interface, while write throughput is only ~25 MB/s (my EBS volume is capped at 125 MB/s). I can’t figure out which function is generating reads this heavy. Can somebody point me to what to look at and what I could do to improve my server’s performance?
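One way to narrow this down is to ask Elasticsearch directly what it is busy with. A rough sketch, assuming ES 5.6 is listening on http://localhost:9200 without authentication (the default for a single-node setup); adjust the URL if yours differs:

```python
# Sketch: see what Elasticsearch is doing while the read load is high.
# Assumes ES 5.6 on http://localhost:9200 with no auth (default single node).
import requests

ES_URL = "http://localhost:9200"

# Hot threads show which thread pools (search, merge, refresh, ...) burn CPU/IO.
print("=== hot threads ===")
print(requests.get(f"{ES_URL}/_nodes/hot_threads").text)

# Currently running search tasks reveal the queries behind the read traffic.
tasks = requests.get(
    f"{ES_URL}/_tasks", params={"detailed": "true", "actions": "*search*"}
).json()
print("=== running search tasks ===")
for node in tasks.get("nodes", {}).values():
    for task_id, task in node.get("tasks", {}).items():
        runtime_s = task.get("running_time_in_nanos", 0) / 1e9
        print(f"{task_id}: {runtime_s:.1f}s  {task.get('description', '')}")
```

If long-running search tasks keep showing up, their descriptions usually point back to the dashboard or alert queries that trigger them.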

Just some ideas: Alerting, Dashboards, other users, unsecured Elasticsearch clusters…
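If dashboards or alert searches are the suspects, per-index search counters can show where the query load actually lands. A small sketch under the same localhost:9200 assumption:

```python
# Sketch: rank indices by how often they are searched.
# Assumes ES reachable at http://localhost:9200 without authentication.
import requests

stats = requests.get("http://localhost:9200/_stats/search").json()
rows = []
for index, data in stats.get("indices", {}).items():
    search = data["total"]["search"]
    rows.append((search.get("query_total", 0),
                 search.get("query_time_in_millis", 0), index))

# The most-queried indices are usually the ones dashboards and alerts hit.
for query_total, query_time_ms, index in sorted(rows, reverse=True)[:10]:
    print(f"{index}: {query_total} queries, {query_time_ms} ms total query time")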

You could activate the access log and check for other users in Graylog:
http://docs.graylog.org/en/2.3/pages/securing.html#logging-user-activity
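Once the access log is enabled, something like this can tally requests per user to spot unexpected API clients. This is only a sketch: the log path and the position of the username field depend on your log4j2 appender configuration, so both values below are assumptions to adjust:

```python
# Sketch: count Graylog REST access-log lines per user.
# ACCESS_LOG and USER_FIELD are assumptions -- match them to your log4j2 setup.
from collections import Counter

ACCESS_LOG = "/var/log/graylog-server/restaccess.log"  # assumed path
USER_FIELD = 4  # assumed index of the username in a whitespace-split line

counts = Counter()
with open(ACCESS_LOG) as fh:
    for line in fh:
        fields = line.split()
        if len(fields) > USER_FIELD:
            counts[fields[USER_FIELD]] += 1

for user, hits in counts.most_common():
    print(f"{user}: {hits} requests")
```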

I only created one user, so it’s probably the alerts and the dashboard. I have 5 alert conditions, 2 of which are unresolved alerts caused by misconfiguration, and I can’t remove them.
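For the stuck conditions, the Graylog REST API can list and delete alert conditions per stream when the web interface refuses. A sketch against the Graylog 2.x API; the base URL, credentials, and exact response keys are assumptions to check in your own API browser:

```python
# Sketch: enumerate alert conditions per stream via the Graylog 2.x REST API,
# so misconfigured ones can be identified and (optionally) deleted.
# API base URL, credentials and response field names are assumptions.
import requests

API = "http://127.0.0.1:9000/api"    # assumed rest_listen_uri
AUTH = ("admin", "admin-password")   # assumed credentials

streams = requests.get(f"{API}/streams", auth=AUTH).json().get("streams", [])
for stream in streams:
    sid = stream["id"]
    conds = requests.get(f"{API}/streams/{sid}/alerts/conditions", auth=AUTH).json()
    for cond in conds.get("conditions", []):
        print(stream["title"], cond["id"], cond.get("type"), cond.get("parameters"))
        # To remove a broken condition, uncomment the next line:
        # requests.delete(f"{API}/streams/{sid}/alerts/conditions/{cond['id']}", auth=AUTH)
```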
