Hi, I have Graylog running without any problems. It has been running for three months under a medium load. Some basic stats: 60 indices, 304,261,633 documents, 121.2 GB (2 Graylog servers and 3 Elasticsearch servers, 3 shards and 1 replica).
But our Elasticsearch is unsecured… Anyone can connect and do whatever they want. I could set up some iptables rules, but I think that is too simplistic and still insecure. I have been reading about X-Pack and Search Guard, but the documentation isn’t clear to me. I just want to set up a local user with a password (no LDAP or AD) and tell Graylog to connect to Elasticsearch using those credentials. What is the best approach? I would perhaps also configure Elasticsearch to use SSL with self-signed certificates. How would Graylog behave with self-signed certificates?
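For the Graylog side, a minimal sketch of what this could look like, assuming HTTP basic auth is enabled on Elasticsearch (the hostnames, user name, and paths below are examples, not anything from this thread): credentials can be embedded in the `elasticsearch_hosts` URIs in Graylog’s `server.conf`, and for self-signed certificates the CA has to be added to the trust store of the JVM running Graylog.

```ini
# /etc/graylog/server/server.conf -- example values
# Credentials go directly into the Elasticsearch URIs:
elasticsearch_hosts = https://graylog:secretpassword@es1.example.org:9200,https://graylog:secretpassword@es2.example.org:9200

# With self-signed certificates, Graylog's JVM must trust the signing CA,
# e.g. by importing it into the Java trust store (path varies by distribution):
#   keytool -importcert -alias es-ca -file /etc/ssl/es-ca.crt \
#           -keystore $JAVA_HOME/lib/security/cacerts
```

Note that the password ends up in plain text in `server.conf`, so the file permissions should be restricted accordingly.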
One more question: how would authentication and SSL affect Elasticsearch replication?
Hi @jochen, iptables is a simple solution that could work in some environments. We don’t want to do this because we would then have to track every possible Graylog and Elasticsearch IP and modify all of the nodes whenever a node is added or removed, so we couldn’t auto-scale (at least for Graylog; for Elasticsearch, I think auto-scaling without discovery could be even harder).
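To illustrate the management burden described above, a sketch of what the iptables approach would look like (the addresses and ports are examples; 9200 is the default Elasticsearch HTTP port and 9300 the default transport port). Every rule like the first one has to be kept in sync with the current set of nodes, which is what makes auto-scaling painful:

```shell
# Allow the Elasticsearch HTTP and transport ports only from known node IPs
# (hypothetical subnet -- one such rule per trusted source is needed):
iptables -A INPUT -p tcp -m multiport --dports 9200,9300 -s 10.0.0.0/24 -j ACCEPT

# Drop everything else hitting those ports:
iptables -A INPUT -p tcp -m multiport --dports 9200,9300 -j DROP
```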
Using some kind of authentication and SSL, and auto-provisioning the servers with the credentials, shouldn’t be a problem.
Thanks @jan, this could be a simple solution without adding the complexity of X-Pack or Search Guard, and adding SSL certificates should also be easy.
On the Elasticsearch side, I don’t know much about it. How should we configure each Elasticsearch node to connect to the others using credentials?
If you don’t secure the transport ports of your Elasticsearch nodes, any “rogue” node could join the cluster as a client node and provide an unsecured HTTP API.
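To secure the transport layer between nodes, a minimal sketch assuming X-Pack security is used (with Search Guard the setting names differ; the certificate paths below are examples). Node-to-node traffic is authenticated with TLS certificates rather than user credentials, so each node needs a certificate signed by a CA the other nodes trust:

```yaml
# elasticsearch.yml -- example X-Pack security settings on every node
xpack.security.enabled: true

# TLS on the transport port (9300), used for node-to-node traffic
# including replication:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

# TLS on the HTTP port (9200), which Graylog connects to:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
```

With this in place, replication continues to work transparently between the trusted nodes; a rogue node without a valid certificate simply cannot join the cluster. Local users (for Graylog’s basic-auth credentials) are then managed with Elasticsearch’s built-in tools rather than LDAP or AD.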