Issue with Docker and SSL

I am attempting to run the Graylog 2.4 Docker container using docker-compose, following the details from here:

http://docs.graylog.org/en/2.4/pages/installation/docker.html

I am also working to get SSL going, and I mostly have it working. I can get to the website and log in fine, and it even appears to be working (I haven't yet tried shipping logs into Graylog to confirm). I used the details here to set up SSL, and they worked with full success on a previous install that wasn't using Docker:

http://docs.graylog.org/en/2.4/pages/configuration/https.html

However, I'm getting ripped to shreds on the console by the messages below, and I'm sure it's because Graylog is trying to make an SSL connection to the machine's Docker network IP, which isn't covered by my SSL certificate. My certificate, by the way, is a wildcard cert from GoDaddy that we use across the company for a myriad of different services.

graylog_1        | 2018-07-30 19:53:22,063 WARN : org.graylog2.shared.rest.resources.ProxiedResource - Unable to call https://192.168.0.4:9000/api/system/inputstates on node <ad4f0d14-1500-4b25-92c3-31b5765f3c10>
graylog_1        | javax.net.ssl.SSLPeerUnverifiedException: Hostname 192.168.0.4 not verified:
<truncated rest of error>

graylog_1        | 2018-07-30 19:53:22,444 WARN : org.graylog2.shared.rest.resources.ProxiedResource - Unable to call https://192.168.0.4:9000/api/system/metrics/multiple on node <ad4f0d14-1500-4b25-92c3-31b5765f3c10>
graylog_1        | javax.net.ssl.SSLPeerUnverifiedException: Hostname 192.168.0.4 not verified:
<truncated rest of error>
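For what it's worth, one way to confirm the mismatch is to list the names the certificate actually covers. This is just a quick check (graylog.crt is the cert file referenced in my config below, and the grep only pulls out the SAN entries); in my case there is a DNS wildcard entry but no IP entry for 192.168.0.4:

openssl x509 -in graylog.crt -noout -text | grep -A 1 "Subject Alternative Name"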

Below are my graylog.conf and my docker-compose.yml for review.

is_master = true
node_id_file = /usr/share/graylog/data/config/node-id
password_secret = secret_here
root_password_sha2 = root_pass_here
root_timezone = America/Chicago
plugin_dir = /usr/share/graylog/plugin
rest_listen_uri = https://0.0.0.0:9000/api
rest_enable_tls = true
rest_tls_cert_file = /usr/share/graylog/data/config/ssl/graylog.crt
rest_tls_key_file = /usr/share/graylog/data/config/ssl/graylog.key
web_listen_uri = https://0.0.0.0:9000/
web_endpoint_uri = https://logs.example.com:9000/api
web_enable_tls = true
web_tls_cert_file = /usr/share/graylog/data/config/ssl/graylog.crt
web_tls_key_file = /usr/share/graylog/data/config/ssl/graylog.key
elasticsearch_hosts = http://elasticsearch:9200
allow_leading_wildcard_searches = false
allow_highlighting = false
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /usr/share/graylog/data/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://mongo/graylog
mongodb_max_connections = 100
mongodb_threads_allowed_to_block_multiplier = 5
transport_email_enabled = true
transport_email_hostname = mail.example.com
transport_email_port = 587
transport_email_use_auth = true
transport_email_use_tls = true
transport_email_use_ssl = false
transport_email_auth_username = user
transport_email_auth_password = abcd1234
transport_email_subject_prefix = [Graylog]
transport_email_from_email = user@example.com
content_packs_loader_enabled = true
content_packs_dir = /usr/share/graylog/data/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32

And the compose file:

version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.10
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:2.4
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - ./config:/usr/share/graylog/data/config
      - ./config/ssl:/usr/share/graylog/data/config/ssl
    environment:
      - "GRAYLOG_SERVER_JAVA_OPTS=-Djavax.net.ssl.trustStore=/usr/share/graylog/data/config/ssl/cacerts.jks"
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
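For context, the cacerts.jks that the GRAYLOG_SERVER_JAVA_OPTS line points at is a Java truststore along the lines described in the HTTPS docs linked above. A rough sketch only, not necessarily the exact commands (the source cacerts path depends on your JVM, and changeit is just the stock Java truststore password):

# Copy the JVM's default truststore as a starting point (path varies by JVM/OS)
cp "$JAVA_HOME/jre/lib/security/cacerts" config/ssl/cacerts.jks
# Import the certificate so the Graylog JVM trusts its own API endpoint
keytool -importcert -keystore config/ssl/cacerts.jks -storepass changeit \
  -alias graylog-wildcard -noprompt -file config/ssl/graylog.crt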

I can't add some backend Docker network IP to my certificate, though I'm sure that would stop this from happening. I'm not the most savvy with Docker, so I'm not sure if there is some other mechanism I'm missing to handle this. I know I'd have no issue with a straight non-Docker install.

Thanks.

I think I was able to get it figured out in the end using details from the final response by py.taczynski in the post below, namely leaving all of the URI definitions at their defaults and only customizing rest_transport_uri:

I had fooled around too much with the graylog.conf file, so I reverted it to the stock Docker configuration (at least for the various URI definitions) and then explicitly defined rest_transport_uri, leaving all the others at their defaults (a rough sketch of the end result is below):
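In other words, the customization boils down to something like this (a minimal sketch; logs.example.com is the same placeholder FQDN as in my original graylog.conf, and the TLS cert/key settings from that config stayed in place):

# Advertise the public FQDN covered by the wildcard cert instead of the
# container's internal IP, so node-to-node API calls pass hostname verification
rest_transport_uri = https://logs.example.com:9000/api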

I'm no longer getting errors on the backend, and the frontend is accessible via the public FQDN using my wildcard certificate with no issues. I consider the issue resolved now.
