I am trying to install one Graylog node and several Elasticsearch nodes on Ubuntu 16.04 using Ansible. Even though the playbook runs through without errors, when I open the web URI I get a 502 response from nginx. I also checked the Graylog logs; here is the output:
2017-08-03T10:28:20.221+02:00 INFO [CmdLineTool] Loaded plugin: Elastic Beats Input 2.2.3 [org.graylog.plugins.beats.BeatsInputPlugin]
2017-08-03T10:28:20.223+02:00 INFO [CmdLineTool] Loaded plugin: Collector 2.2.3 [org.graylog.plugins.collector.CollectorPlugin]
2017-08-03T10:28:20.223+02:00 INFO [CmdLineTool] Loaded plugin: Enterprise Integration Plugin 2.2.3 [org.graylog.plugins.enterprise_integration.EnterpriseIntegrationPlugin]
2017-08-03T10:28:20.224+02:00 INFO [CmdLineTool] Loaded plugin: MapWidgetPlugin 2.2.3 [org.graylog.plugins.map.MapWidgetPlugin]
2017-08-03T10:28:20.232+02:00 INFO [CmdLineTool] Loaded plugin: Pipeline Processor Plugin 2.2.3 [org.graylog.plugins.pipelineprocessor.ProcessorPlugin]
2017-08-03T10:28:20.233+02:00 INFO [CmdLineTool] Loaded plugin: Anonymous Usage Statistics 2.2.3 [org.graylog.plugins.usagestatistics.UsageStatsPlugin]
2017-08-03T10:28:20.567+02:00 INFO [CmdLineTool] Running with JVM arguments: -Djava.net.preferIPv4Stack=true -Xms1500m -Xmx1500m -XX:NewRatio=1 -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb
2017-08-03T10:28:20.789+02:00 INFO [Version] HV000001: Hibernate Validator null
2017-08-03T10:28:26.438+02:00 INFO [InputBufferImpl] Message journal is enabled.
2017-08-03T10:28:26.590+02:00 INFO [NodeId] Node ID: e591a5c0-4d64-47d7-99be-64a96550ad5c
2017-08-03T10:28:27.611+02:00 INFO [LogManager] Loading logs.
2017-08-03T10:28:27.618+02:00 INFO [LogManager] Logs loading complete.
2017-08-03T10:28:27.658+02:00 INFO [LogManager] Created log for partition [messagejournal,0] in /var/lib/graylog-server/journal with properties {file.delete.delay.ms -> 60000, compact -> false, max.message.bytes -> 104857600, min.insync.replicas -> 1, segment.jitter.ms -> 0, index.interval.bytes -> 4096, min.cleanable.dirty.ratio -> 0.5, unclean.leader.election.enable -> true, retention.bytes -> 5368709120, delete.retention.ms -> 86400000, flush.ms -> 60000, segment.bytes -> 104857600, segment.ms -> 3600000, retention.ms -> 43200000, flush.messages -> 1000000, segment.index.bytes -> 1048576}.
2017-08-03T10:28:27.658+02:00 INFO [KafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal
2017-08-03T10:28:27.772+02:00 INFO [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy <BlockingWaitStrategy>, running 2 parallel message handlers.
2017-08-03T10:28:27.831+02:00 INFO [cluster] Cluster created with settings {hosts=[127.0.0.1:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2017-08-03T10:28:27.929+02:00 INFO [cluster] Exception in monitor thread while connecting to server 127.0.0.1:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
    at com.mongodb.connection.SocketStream.open(SocketStream.java:63) ~[graylog.jar:?]
    at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115) ~[graylog.jar:?]
    at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:113) [graylog.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_144]
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_144]
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_144]
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_144]
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_144]
    at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_144]
    at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:57) ~[graylog.jar:?]
    at com.mongodb.connection.SocketStream.open(SocketStream.java:58) ~[graylog.jar:?]
    ... 3 more
2017-08-03T10:28:27.972+02:00 INFO [cluster] No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2017-08-03T10:28:46.038+02:00 INFO [CmdLineTool] Loaded plugin: Elastic Beats Input 2.2.3 [org.graylog.plugins.beats.BeatsInputPlugin]
2017-08-03T10:28:46.040+02:00 INFO [CmdLineTool] Loaded plugin: Collector 2.2.3 [org.graylog.plugins.collector.CollectorPlugin]
2017-08-03T10:28:46.041+02:00 INFO [CmdLineTool] Loaded plugin: Enterprise Integration Plugin 2.2.3 [org.graylog.plugins.enterprise_integration.EnterpriseIntegrationPlugin]
2017-08-03T10:28:46.041+02:00 INFO [CmdLineTool] Loaded plugin: MapWidgetPlugin 2.2.3 [org.graylog.plugins.map.MapWidgetPlugin]
2017-08-03T10:28:46.049+02:00 INFO [CmdLineTool] Loaded plugin: Pipeline Processor Plugin 2.2.3 [org.graylog.plugins.pipelineprocessor.ProcessorPlugin]
2017-08-03T10:28:46.050+02:00 INFO [CmdLineTool] Loaded plugin: Anonymous Usage Statistics 2.2.3 [org.graylog.plugins.usagestatistics.UsageStatsPlugin]
2017-08-03T10:28:46.305+02:00 INFO [CmdLineTool] Running with JVM arguments: -Djava.net.preferIPv4Stack=true -Xms1500m -Xmx1500m -XX:NewRatio=1 -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb
2017-08-03T10:28:46.547+02:00 INFO [Version] HV000001: Hibernate Validator null
2017-08-03T10:28:48.693+02:00 INFO [InputBufferImpl] Message journal is enabled.
2017-08-03T10:28:48.716+02:00 INFO [NodeId] Node ID: e591a5c0-4d64-47d7-99be-64a96550ad5c
2017-08-03T10:28:48.920+02:00 INFO [LogManager] Loading logs.
2017-08-03T10:28:48.950+02:00 WARN [Log] Found a corrupted index file, /var/lib/graylog-server/journal/messagejournal-0/00000000000000000000.index, deleting and rebuilding index...
2017-08-03T10:28:48.976+02:00 INFO [LogManager] Logs loading complete.
2017-08-03T10:28:48.976+02:00 INFO [KafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal
2017-08-03T10:28:48.995+02:00 INFO [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy <BlockingWaitStrategy>, running 2 parallel message handlers.
2017-08-03T10:28:49.016+02:00 INFO [cluster] Cluster created with settings {hosts=[127.0.0.1:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2017-08-03T10:28:49.073+02:00 INFO [cluster] No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2017-08-03T10:28:49.081+02:00 INFO [cluster] Exception in monitor thread while connecting to server 127.0.0.1:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
    at com.mongodb.connection.SocketStream.open(SocketStream.java:63) ~[graylog.jar:?]
    at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115) ~[graylog.jar:?]
    at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:113) [graylog.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_144]
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_144]
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_144]
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_144]
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_144]
    at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_144]
    at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:57) ~[graylog.jar:?]
    at com.mongodb.connection.SocketStream.open(SocketStream.java:58) ~[graylog.jar:?]
    ... 3 more
2017-08-03T10:29:19.074+02:00 ERROR [MongoConnectionProvider] Error connecting to MongoDB: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
2017-08-03T10:29:19.324+02:00 INFO [cluster] No server chosen by WritableServerSelector from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]}. Waiting for 30000 ms before timing out
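
In the log, every attempt to reach MongoDB ends in java.net.ConnectException: Connection refused against 127.0.0.1:27017, so Graylog never finishes starting, and nginx answers 502 because nothing comes up on its upstream port 9000. So my first suspicion is that MongoDB is not installed, not running, or not listening locally on the Graylog host. A minimal pre-check sketch for that (my assumptions: MongoDB is meant to run on the same host, and the service is named mongod, which may be mongodb depending on the package) would be:

- name: Make sure MongoDB is up before Graylog starts
  hosts: graylog-primary,graylog-nodes
  become: yes
  tasks:
    - name: Ensure the MongoDB service is started and enabled
      service:
        name: mongod        # assumption: the Ubuntu repo package names the service "mongodb"
        state: started
        enabled: yes

    - name: Wait until something listens on 127.0.0.1:27017
      wait_for:
        host: 127.0.0.1
        port: 27017
        timeout: 30
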
Here is my playbook:
- name: Dummy play to force gathering facts on all hosts
  hosts: all
  tasks: []

- name: Deploy elasticsearch primary node
  hosts: elasticsearch-primary
  become: yes
  roles:
    - role: elastic.elasticsearch
      es_instance_name: "{{inventory_hostname}}"
      es_config:
        node.name: "{{inventory_hostname}}"
        cluster.name: graylog
        discovery.zen.ping.multicast.enabled: false
        discovery.zen.ping.unicast.hosts: "{{ lookup('template', 'elasticsearch-hosts.json.j2') }}"
        http.port: 9200
        transport.tcp.port: 9300
        network.host: "{{ansible_default_ipv4.address}}"
        node.data: true
        node.master: true
        bootstrap.mlockall: true
  vars:
    es_major_version: "2.x"
    es_scripts: false
    es_templates: false
    es_version_lock: false
    es_heap_size: 512m
  tasks:
    - name: Open necessary ports for elasticsearch (master node)
      ufw: from="{{ item.0 }}" port="{{ item.1 }}" rule=allow
      with_nested:
        - "{{ lookup('template', 'elasticsearch-nodes.yaml.j2', convert_data=True) | from_yaml }}"
        - [9300, 9200]

- name: Deploy Elasticsearch data nodes
  hosts: elasticsearch-nodes
  become: yes
  roles:
    - role: elastic.elasticsearch
      es_instance_name: "{{inventory_hostname}}"
      es_config:
        node.name: "{{inventory_hostname}}"
        cluster.name: graylog
        discovery.zen.ping.unicast.hosts: ["{{hostvars['elasticsearch-primary'].ansible_default_ipv4.address}}:9300"]
        http.port: 9201
        transport.tcp.port: 9301
        network.host: "{{ansible_default_ipv4.address}}"
        node.data: true
        node.master: false
        bootstrap.mlockall: true
        discovery.zen.ping.multicast.enabled: false
  vars:
    es_major_version: "2.x"
    es_scripts: false
    es_templates: false
    es_version_lock: false
    es_heap_size: 1g
  tasks:
    - name: Open necessary ports for elasticsearch (data nodes)
      ufw: from="{{hostvars['elasticsearch-primary'].ansible_default_ipv4.address}}" port="{{ item }}" rule=allow
      with_items:
        - 9301
        - 9201

- name: Set up SSL certificates
  hosts: graylog-primary,graylog-nodes
  vars_files:
    - "vars/main.yml"
    - "vars/secret.yml"
  tasks:
    - name: Ensure OpenSSL is installed
      package: name=openssl state=present
      become: yes

    - name: Ensure ssl folder exists
      file:
        path: "{{ ssl_certs_path }}"
        state: directory
        owner: "{{ ssl_certs_path_owner }}"
        group: "{{ ssl_certs_path_group }}"
        mode: "{{ ssl_certs_dir_mode }}"
      become: yes

    - name: Copy SSL certificate data
      copy:
        content: "{{ item.content }}"
        dest: "{{ item.dest }}"
        owner: "{{ ssl_certs_path_owner }}"
        group: "{{ ssl_certs_path_group }}"
        mode: "{{ ssl_certs_file_mode }}"
      with_items:
        - { content: "{{ ssl_certs_local_certificate_data|default }}", dest: "{{ ssl_certs_certificate_path }}" }
        - { content: "{{ ssl_certs_local_privkey_data|default }}", dest: "{{ ssl_certs_privkey_path }}" }
        # - { content: "{{ ssl_certs_password|default }}", dest: "{{ ssl_certs_password_file_path }}" }
      no_log: true
      become: yes

    - name: Generate strong DHE parameter - https://weakdh.org/
      command: openssl dhparam -out {{ssl_certs_dhparam_path}} {{ssl_certs_dhparam_size}} creates={{ssl_certs_dhparam_path}}
      become: yes

- name: Deploy graylog2 nodes
  hosts: graylog-primary,graylog-nodes
  vars_files:
    - "vars/main.yml"
    - "vars/secret.yml"
  become: yes
  vars:
    # We installed elasticsearch ourselves
    graylog_install_elasticsearch: false
    # Basic server settings
    # Optional: Specify which Graylog version should be installed. Defaults to latest
    # graylog_server_version: '2.2.2-1'
    # Used for certain jobs that should only run on one server
    graylog_is_master: "{{ hostvars[inventory_hostname]['graylog_master'] | default('false') }}"
    # A secret that graylog uses for salting its password hashes. Generate with `pwgen -s 96 1`
    # graylog_password_secret: "{{graylog_password_secret}}"
    # A SHA2 hash of a password you will use for your initial login.
    # Create a password hash using: echo -n yourpassword | shasum -a 256
    # graylog_root_password_sha2: "{{graylog_root_password_sha2}}"
    graylog_elasticsearch_network_host: "{{ansible_default_ipv4.address}}"
    graylog_elasticsearch_discovery_zen_ping_unicast_hosts: "{{hostvars['elasticsearch-primary'].ansible_default_ipv4.address}}:9300"
    graylog_elasticsearch_cluster_name: 'graylog'
    # The web interface and the REST API should be accessible from the web browser
    graylog_web_listen_uri: "http://{{ansible_default_ipv4.address}}:9000"
    # This tells the browser how to connect to the Graylog API
    graylog_web_endpoint_uri: "http://{{ansible_default_ipv4.address}}:12900"
    # graylog_web_enable_tls: true
    # graylog_web_tls_key_file: "{{ssl_certs_privkey_path}}"
    # graylog_web_tls_cert_file: "{{ssl_certs_certificate_path}}"
    # graylog_web_tls_key_password: "{{ssl_certs_password}}"
    # REST API config
    graylog_rest_listen_uri: "http://{{ansible_default_ipv4.address}}:12900"
    graylog_rest_transport_uri: "http://{{ansible_default_ipv4.address}}:12900"
    # graylog_rest_enable_tls: true
    # graylog_rest_tls_key_file: "{{ssl_certs_privkey_path}}"
    # graylog_rest_tls_cert_file: "{{ssl_certs_certificate_path}}"
    # graylog_rest_tls_key_password: "{{ssl_certs_password}}"
    nginx_sites:
      default:
        - listen 80
        - listen 443 ssl default_server
        - server_name server.example.org
        - sub_filter_once off
        - sub_filter_types *
        - sub_filter 'http://{{ansible_default_ipv4.address}}:12900' 'https://server.example.org/api'
        - sub_filter 'http://{{ansible_default_ipv4.address}}:9000' 'https://server.example.org'
        - location /api/ {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Accept-Encoding "";
            rewrite ^/api(.*)$ $1 break;
            proxy_redirect http://{{ansible_default_ipv4.address}}:12900 https://server.example.org/api;
            proxy_pass http://{{ansible_default_ipv4.address}}:12900; }
        - location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Accept-Encoding "";
            proxy_redirect http://{{ansible_default_ipv4.address}}:9000 https://server.example.org;
            proxy_pass http://{{ansible_default_ipv4.address}}:9000; }
      graylog: []
    nginx_configs:
      ssl:
        - "ssl_certificate {{ssl_certs_certificate_path}}"
        - "ssl_certificate_key {{ssl_certs_privkey_path}}"
        # - "ssl_password_file {{ssl_certs_password_file_path}}"
        - "ssl_dhparam {{ssl_certs_dhparam_path}}"
        - "ssl_session_cache shared:SSL:10m"
        - "ssl_session_timeout 5m"
        - "ssl_protocols TLSv1 TLSv1.1 TLSv1.2"
        - "ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'"
        - "ssl_prefer_server_ciphers on"
  roles:
    - role: 'Graylog2.graylog-ansible-role'
      tags:
        - graylog
        - graylog2_servers
    - role: geerlingguy.java
      when: ansible_distribution_release == 'xenial'
      java_packages:
        - openjdk-8-jdk
      tags:
        - elasticsearch
        - graylog
        - graylog2_servers
  tasks:
    - name: Create filter.d directory
      file: path=/etc/fail2ban/filter.d/ state=directory mode=0755

    - name: Create jail.d directory
      file: path=/etc/fail2ban/jail.d/ state=directory mode=0755

    - name: Deploy fail2ban config for nginx/Graylog proxy_pass
      template: src="{{ item.src }}" dest="{{ item.dest }}"
      notify: Restart fail2ban
      with_items:
        - { src: 'fail2ban-filter.conf.j2', dest: '/etc/fail2ban/filter.d/nginx-graylog-auth-fail.conf' }
        - { src: 'fail2ban-jail.conf.j2', dest: '/etc/fail2ban/jail.d/nginx-graylog-auth-fail.conf' }

    - name: Open necessary ports for graylog
      ufw: from="{{ item.0 }}" port="{{ item.1 }}" rule=allow
      with_nested:
        - "{{ lookup('template', 'elasticsearch-ips.yaml.j2', convert_data=True) | from_yaml }}"
        - [9350]

    - name: Open port for graylog from the office & server IP
      ufw: from="{{ item.0 }}" port="{{ item.1 }}" rule=allow
      with_nested:
        - "{{ lookup('template', 'elasticsearch-ips.yaml.j2', convert_data=True) | from_yaml }}"
        - [9350]

    - name: Open port for graylog from the office & server IP
      ufw: from="{{ item.0 }}" port="{{ item.1 }}" rule=allow
      with_nested:
        - ["ip1", "ip2"]
        - ["9000", "12900"]

    - name: Open ports for graylog inputs
      ufw: from="{{ item.0 }}" port="{{ item.1 }}" rule=allow
      with_nested:
        - "{{ lookup('template', 'graylog-source-list.yaml.j2', convert_data=True) | from_yaml }}"
        - [12201, 12202, 12203]

    - name: Open ports for logging into Graylog from the office IP
      ufw: from="ip1" port="{{item}}" rule=allow
      with_items:
        - [12201, 12202, 12203]

    - name: Open HTTPS port from any IP
      ufw: from="any" port="443" rule=allow

    - name: Open port for mongodb
      ufw: from="127.0.0.1" port="27017" rule=allow

  handlers:
    - name: Restart fail2ban
      service: name=fail2ban state=restarted
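
Since nginx only proxies / to port 9000 and /api/ to port 12900 on the same host, the 502 is what I would expect while Graylog itself is down (which matches the MongoDB failure above). To make the playbook fail early instead of leaving a dead upstream behind nginx, I am thinking of appending a verification play like the following; it is only a sketch under my assumptions (the ports mirror graylog_web_listen_uri and graylog_rest_listen_uri above, and /system/lbstatus is the Graylog 2.x load-balancer status endpoint):

- name: Verify the Graylog listeners that nginx proxies to
  hosts: graylog-primary,graylog-nodes
  tasks:
    - name: Wait for the web interface (nginx location /)
      wait_for:
        host: "{{ansible_default_ipv4.address}}"
        port: 9000
        timeout: 120

    - name: Wait for the REST API (nginx location /api/)
      wait_for:
        host: "{{ansible_default_ipv4.address}}"
        port: 12900
        timeout: 120

    - name: Ask the REST API for its load-balancer status
      uri:
        url: "http://{{ansible_default_ipv4.address}}:12900/system/lbstatus"
        return_content: yes
        status_code: 200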