Graylog SSL termination with HAProxy

1. Describe your incident:
I am using HAProxy in front of my Graylog server, and all SSL for my sites is terminated by HAProxy.

2. Describe your environment:
Graylog is running as an LXD container on Proxmox.
Internet → Proxmox → HAProxy → Graylog, Elasticsearch, MongoDB, other sites.
LXD container:
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal

### HAProxy config

frontend http-in
    mode http
    bind *:80
    bind *:443 ssl crt /etc/haproxy/ssl/ ssl verify none

    # For ssl
    acl is_https ssl_fc
    http-request set-header Upgrade h2c if { ssl_fc }
    http-request set-header HTTP2-Settings base64,(your-settings) if { ssl_fc }
    http-request set-header X-Forwarded-Proto https if is_https

    ### ACL ###
    acl graylog hdr(host) -i graylog.my-domain.se
    use_backend graylog-backend if graylog


### Backend ###
backend graylog-backend
    mode http
    option http-server-close
    option forwardfor
    server graylog01.my-domain.se 192.168.0.15:9000 check
	
	
### Graylog config
is_leader = true
node_id_file = /etc/graylog/server/node-id

bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 192.168.0.15:9000
http_publish_uri = https://192.168.0.15:9000/
stream_aware_field_types=false
trusted_proxies = 127.0.0.1/32, 192.168.0.0/24
elasticsearch_hosts = http://elastic01.my-domain.se:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = true
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://mongodb01.my-domain.se:27017/graylog
mongodb_max_connections = 1000
integrations_scripts_dir = /usr/share/graylog-server/scripts
  • Package Version:
    graylog-enterprise 5.0.6-1
    haproxy 2.2.9-2+deb11u5

  • Service logs, configurations, and environment variables:
HAProxy shows that it sends the traffic to the right container.
The Graylog log gives me:

[ProxiedResource] Unable to call https://192.168.0.15:9000/api/system/metrics/multiple on node <2bf787ee-1e45-4be9-abcd-2a6323e6fd0c>: Read timed out

But that is when running over HTTP; when connecting over HTTPS nothing loads at all.
Using openssl I get the correct answer and the certificate matches.
Firefox inspection shows status=blocked and NS_BINDING_ABORTED in the network tab.

Hey @landychev,

How many Graylog nodes are in your cluster?

http_publish_uri is used for communication between nodes. I'm assuming this doesn't go via the LB, and since the LB is handling SSL, would it be better to use the default of http?

Otherwise, try adding the certs to whatever keystore Graylog is using.
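
Something like this in server.conf is what I mean. It is only a sketch based on the config you posted, so adjust the addresses to your setup:

# Graylog listens on plain HTTP behind the proxy
http_bind_address = 192.168.0.15:9000

# Leave http_publish_uri unset so it falls back to the default,
# plain http derived from http_bind_address, and node-to-node calls skip the LB
#http_publish_uri = http://192.168.0.15:9000/

# Proxies that are allowed to set X-Forwarded-* headers
trusted_proxies = 127.0.0.1/32, 192.168.0.0/24

# Optional idea, not verified for your case: advertise the public HTTPS URL to the browser
#http_external_uri = https://graylog.my-domain.se/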

I did some adjustments and set http_publish_uri back to its default.
There is only one Graylog server right now, but there may be more later.

I found this documentation about HAProxy, but it does not cover SSL there, only for Apache and nginx.
https://go2docs.graylog.org/5-0/setting_up_graylog/web_interface.htm#nginx

I do not see how adding the cert to a keystore on Graylog would help when SSL termination happens on the HAProxy side.
I am thinking about trying the Apache conf example to see if it works.
I tried adding the following to my conf, but it did nothing.

http-request add-header X-Forwarded-Host %[req.hdr(host)]   
http-request add-header X-Forwarded-Server %[req.hdr(host)]    
http-request add-header X-Forwarded-Port %[dst_port]
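
For reference, this is a sketch of the frontend I am testing now, with the forwarding headers set in one place. The domain and cert path are the ones from my config above; whether Graylog needs every one of these headers is my assumption:

frontend http-in
    mode http
    bind *:80
    bind *:443 ssl crt /etc/haproxy/ssl/

    # Pass the original scheme, host and port to Graylog;
    # X-Forwarded-For is already added by "option forwardfor" in the backend
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http unless { ssl_fc }
    http-request set-header X-Forwarded-Host %[req.hdr(host)]
    http-request set-header X-Forwarded-Port %[dst_port]

    acl graylog hdr(host) -i graylog.my-domain.se
    use_backend graylog-backend if graylog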

Hey @landychev,

Have you seen this?
