Graylog Server unavailable after installation via Ansible


(Andy) #1

Hi,

I installed Graylog via the Ansible playbook from here: https://galaxy.ansible.com/Graylog2/graylog-ansible-role
It is the default configuration from the playbook.
Everything went fine until I tried opening Graylog in a web browser.

I took a look at the server logs (/var/log/graylog-server/server.log), which told me the following:

    "Incorrect HTTP method for uri [/graylog_*/_aliases] and method [GET], allowed: [PUT]"
    at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:95) ~[graylog.jar:?]
    at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:57) ~[graylog.jar:?]
    at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:62) ~[graylog.jar:?]
    at org.graylog2.indexer.indices.Indices.getIndexNamesAndAliases(Indices.java:308) ~[graylog.jar:?]
    at org.graylog2.indexer.MongoIndexSet.getNewestIndexNumber(MongoIndexSet.java:151) ~[graylog.jar:?]
    at org.graylog2.indexer.MongoIndexSet.getNewestIndex(MongoIndexSet.java:146) ~[graylog.jar:?]
    at org.graylog2.indexer.MongoIndexSet.setUp(MongoIndexSet.java:252) ~[graylog.jar:?]
    at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:138) ~[graylog.jar:?]
    at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
    at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_181]
    at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
    at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_181]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_181]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_181]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
    2018-09-20T11:24:37.554+02:00 INFO  [MongoIndexSet] Did not find a deflector alias. Setting one up now.
    2018-09-20T11:24:37.555+02:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
    org.graylog2.indexer.ElasticsearchException: Couldn't collect aliases for index pattern graylog_*

The Elasticsearch log doesn’t show anything relevant.
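For anyone hitting the same error, it can help to query the aliases endpoint directly and see what Elasticsearch actually returns. This is only a diagnostic sketch; the host and port are assumptions based on the default configuration above:

```shell
# Check that Elasticsearch itself responds (default port assumed)
curl -s http://127.0.0.1:9200/

# Query the same aliases endpoint Graylog uses; a GET here should
# normally return JSON, not an "Incorrect HTTP method" error
curl -s -X GET 'http://127.0.0.1:9200/graylog_*/_aliases'
```

If the second command fails or returns something unexpected, the problem is between Graylog and Elasticsearch rather than in the web interface.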

Any help would be very appreciated.

Thanks in advance


(Jan Doberstein) #2

Without any information that would help us identify the issue, everyone can only guess.

What does your configuration look like (Graylog and Elasticsearch)? Did you run everything on one host? What are the available resources? How did you call the playbook? Did you try to access the Graylog UI from the same system, or is this remote?


(Andy) #3

I used the HowTo to install Graylog via Ansible. I did not change any configuration afterwards. I just changed the playbook to skip the X-Pack installation. I called the playbook via ansible-playbook linux/setup_graylog.yml -l graylog --ask-become-pass
The playbook looks like this:

- hosts: "{{ hosts | default('all')}}"
  become: True
  vars:
    # Graylog is compatible with elasticsearch 5.x since version 2.3.0, so ensure to use the right combination for your installation
    # Also use the right branch of the Elasticsearch Ansible role, master supports 5.x
    es_api_basic_auth_username: 'admin'
    es_api_basic_auth_password: 'admin'
    es_major_version: "5.x"
    es_instance_name: 'graylog'
    es_scripts: False
    es_templates: False
    es_version_lock: False
    es_heap_size: 1g
    es_config: {
      node.name: "graylog",
      cluster.name: "graylog",
      http.port: 9200,
      transport.tcp.port: 9300,
      network.host: 0.0.0.0,
      node.data: true,
      node.master: true,
    }

    # Elasticsearch role already installed Java
    graylog_java_install: False

    graylog_install_mongodb: True

    # For Vagrant installations make sure port 9000 is forwarded
    graylog_web_endpoint_uri: 'http://localhost:9000/api/'
    # For other setups, use the external IP of the Graylog server
    # graylog_web_endpoint_uri: 'http://{{ ansible_host }}:9000/api/'

    nginx_sites:
      graylog:
        - listen 80
        - server_name graylog
        - location / {
          proxy_pass http://localhost:9000/;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_pass_request_headers on;
          proxy_connect_timeout 150;
          proxy_send_timeout 100;
          proxy_read_timeout 100;
          proxy_buffers 4 32k;
          client_max_body_size 8m;
          client_body_buffer_size 128k; }

  roles:
    - role: 'Graylog2.graylog-ansible-role'
      tags: graylog

The graylog.conf looks like this:

is_master = True
node_id_file = /etc/graylog/server/node-id
password_secret = 2jueVqZpwLLjaWxV
root_username = admin
root_password_sha2 = 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
root_email =
root_timezone = UTC
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://0.0.0.0:9000/api/
rest_enable_cors = True
rest_enable_gzip = True
rest_enable_tls = False
rest_tls_cert_file = /path/to/graylog.crt
rest_tls_key_file = /path/to/graylog.key
rest_tls_key_password = secret
rest_max_header_size = 8192
rest_max_initial_line_length = 4096
rest_thread_pool_size = 16
web_enable = True
web_listen_uri = http://0.0.0.0:9000/
web_endpoint_uri = http://localhost:9000/api/
web_enable_cors = True
web_enable_gzip = True
web_enable_tls = False
web_tls_cert_file =
web_tls_key_file =
web_tls_key_password =
web_max_header_size = 8192
web_max_initial_line_length = 4096
web_thread_pool_size = 16
elasticsearch_disable_version_check = True
no_retention = False
allow_leading_wildcard_searches = False
allow_highlighting = False
elasticsearch_hosts = http://127.0.0.1:9200
elasticsearch_analyzer = standard
elasticsearch_request_timeout = 1m
index_ranges_cleanup_interval = 1h
output_batch_size = 25
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
outputbuffer_processor_keep_alive_time = 5000
outputbuffer_processor_threads_core_pool_size = 3
outputbuffer_processor_threads_max_pool_size = 30
udp_recvbuffer_sizes = 1048576
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = True
message_journal_dir = /var/lib/graylog-server/journal
message_journal_max_age = 12h
message_journal_max_size = 5gb
message_journal_flush_age = 1m
message_journal_flush_interval = 1000000
message_journal_segment_age = 1h
message_journal_segment_size = 100mb
async_eventbus_processors = 2
lb_recognition_period_seconds = 3
lb_throttle_threshold_percentage = 95
stream_processing_timeout = 2000
stream_processing_max_faults = 3
alert_check_interval = 60
output_module_timeout = 10000
stale_master_timeout = 2000
shutdown_timeout = 30000
mongodb_uri = mongodb://127.0.0.1:27017/graylog
mongodb_max_connections = 100
mongodb_threads_allowed_to_block_multiplier = 5
rules_file =
transport_email_enabled = False
transport_email_hostname =
transport_email_port = 587
transport_email_use_auth = True
transport_email_use_tls = True
transport_email_use_ssl = True
transport_email_auth_username =
transport_email_auth_password =
transport_email_subject_prefix = [graylog]
transport_email_from_email =
transport_email_web_interface_url =
http_connect_timeout = 5s
http_read_timeout = 10s
http_write_timeout = 10s
disable_index_optimization = True
index_optimization_max_num_segments = 1
gc_warning_threshold = 1s
ldap_connection_timeout = 2000
disable_sigar = False
dashboard_widget_default_cache_time = 10s
content_packs_loader_enabled = True
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load =
proxied_requests_thread_pool_size = 32

The Elasticsearch configuration looks like this:

cluster.name: graylog
http.port: 9200
network.host: 0.0.0.0
node.data: true
node.master: true
node.name: graylog
transport.tcp.port: 9300

#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):

path.data: /var/lib/elasticsearch/graylog.fio.intern-graylog
path.logs: /var/log/elasticsearch/graylog.fio.intern-graylog

action.auto_create_index: true

Everything runs on one host. It has 8 GB of RAM, four cores at 2.20 GHz, and currently 10 GB of storage for testing.
I tried accessing the UI from a remote system.


(Jan Doberstein) #4

And you have bound your interface to localhost on the server where it runs …

web_endpoint_uri = http://localhost:9000/api/

That should be the hostname or IP of the system where Graylog is running.

In addition, your Elasticsearch is listening on all interfaces (which might be a security issue), but you use localhost in the Graylog configuration.
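A minimal sketch of the two corrected settings, assuming the Graylog server’s address is 192.0.2.10 (a placeholder — substitute your real hostname or IP):

```
# graylog.conf — make the API endpoint reachable from remote browsers
web_endpoint_uri = http://192.0.2.10:9000/api/

# elasticsearch.yml — bind Elasticsearch to localhost only, since
# Graylog connects via http://127.0.0.1:9200 anyway
network.host: 127.0.0.1
```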

Please check the Graylog documentation for the different settings: http://docs.graylog.org/en/2.4/pages/configuration/server.conf.html


(Andy) #5

Looks better now after changing the web_endpoint_uri.
I’ll take care of the rest after I’ve finished my testing. Thanks a lot.


(system) #6

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.