Graylog 5.0 with Wazuh Indexer

This is on Debian 11

I am trying to get Wazuh-Indexer working with Graylog, using this… as a general guide.

These instructions were created using previous versions of everything, so I have been trying to use the more updated packages.

Wazuh-Indexer 4.4.1-1
MongoDB 6.0.5
Graylog 5.0.7-1

Wazuh-Indexer is supposed to be a fork of Elasticsearch, and Wazuh 4.x in particular is using OpenSearch 2.6, so I figured the installation guide for Graylog 5.0 should work… and it has, to a point. However, Graylog is not yet accessible. systemctl shows it is running, but /var/log/graylog-server/server.log shows:

2023-05-05T16:03:24.036-03:00 INFO [VersionProbe] Elasticsearch is not available. Retry #147
2023-05-05T16:03:29.039-03:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: unexpected end of stream on http://xx.xx.xx.xx:9200/… - \n not found: limit=0 content=….

I found a tidbit that suggested commenting out the “compatibility.override_main_response_version: true” line in /etc/wazuh-indexer/opensearch.yml but then the log just shows:

2023-05-05T16:07:15.187-03:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: unexpected end of stream on http://xx.xx.xx.xx:9200/… - \n not found: limit=0 content=….
2023-05-05T16:07:15.187-03:00 INFO [VersionProbe] Elasticsearch is not available. Retry #2

In either case the Graylog web interface is not accessible.
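(For what it's worth, probing port 9200 directly is a quick way to see whether the indexer is only answering over TLS, which is what that "\n not found" error usually points at; admin:admin is just the Wazuh default, so substitute your own credentials.)

# what Graylog's VersionProbe does: a plain-HTTP request
curl -v http://xx.xx.xx.xx:9200/
# the same endpoint over HTTPS, ignoring the self-signed certificate
curl -k -u admin:admin https://xx.xx.xx.xx:9200/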

Here is /etc/graylog/server/server.conf:

is_leader = true
node_id_file = /etc/graylog/server/node-id
password_secret = Randomized
root_username = admin
root_password_sha2 = Randomized
root_timezone = America/Halifax
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = xx.xx.xx.xx:9000
http_publish_uri = http://xx.xx.xx.xx:9000/
stream_aware_field_types=false
elasticsearch_hosts = http://graylog:####@xx.xx.xx.xx:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000

and here is /etc/wazuh-indexer/opensearch.yml:

network.host: "xx.xx.xx.xx"
node.name: "host.fakedom.jk"
cluster.initial_master_nodes:
- "host.fakedom.jk"
discovery.seed_hosts:
- "xx.xx.xx.xx"
node.max_local_storage_nodes: "1"
path.data: /var/lib/wazuh-indexer
path.logs: /var/log/wazuh-indexer
bootstrap.memory_lock: true
plugins.security.ssl.http.pemcert_filepath: /etc/wazuh-indexer/certs/certificate.pem
plugins.security.ssl.http.pemkey_filepath: /etc/wazuh-indexer/certs/certificate-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.transport.pemcert_filepath: /etc/wazuh-indexer/certs/certificate.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/wazuh-indexer/certs/certificate-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.http.enabled: true
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.resolve_hostname: false
plugins.security.authcz.admin_dn:
- "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.nodes_dn:
- "CN=host.fakedom.jk,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.restapi.roles_enabled:
- "all_access"
- "security_rest_api_access"
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
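(Given plugins.security.ssl.http.enabled: true above, port 9200 should only be speaking TLS; if it helps, this shows the certificate chain the indexer actually presents:)

# inspect the TLS handshake and certificate chain on the indexer's HTTP port
openssl s_client -connect xx.xx.xx.xx:9200 -showcerts </dev/null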

Anyone with any thoughts? Any other information needed to help?

BTW, this is the first time I have tried anything like this… not a Linux newbie, but far from an expert either. I'm also a little more comfortable with Ubuntu, but the stack walkthrough specified Debian, and when I started having trouble one of the things I did was start over on Debian 11 instead.

Hey @AKASupport

After looking over your config…

/etc/wazuh-indexer/opensearch.yml

The Graylog docs show plugins.security.disabled: true (from here). This may be your issue. I believe you can only use a username and password for OpenSearch.
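If you go that route, a minimal sketch of the change (assuming the stock file layout), then restart the indexer:

# /etc/wazuh-indexer/opensearch.yml
# turns the OpenSearch security plugin off entirely, so 9200 answers plain HTTP with no auth
plugins.security.disabled: true

systemctl restart wazuh-indexer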

Thanks gsmith… I tried this but I was never able to get them to peacefully co-exist… I started over and used the older versions from the walkthrough… I've gotten farther than I had before at least… not finished yet though.

Hey @AKASupport

Was I talking to you on Discord last weekend about this?

@gsmith, no you weren’t… I don’t use discord at all.

And for the record, I'm not completely giving up on getting all this to work together with the new versions… but I do want to get the stack operational according to the walkthrough so I understand how it all integrates better.

Hello fellow person(s). Just following up on this, as I have been having the same issue, but for me it is communication between the wazuh.indexer and the graylog container. It seems to be a TLS/SSL issue. When I inspected the container I could not find the Java keystore, so I created one. This is not good practice, though, as the config change will destroy itself if the container comes down. I went from "can't find a cert" to "cert unknown", so I know it can see where I am putting the certs. I did notice that there is an openssl-graylog.cnf file that can be utilized to create the certs with the root-ca from Wazuh, and that Graylog only accepts certain types of certs ("When you are configuring TLS, you need to make sure that your certificate/key files are in the right format, which is X.509 for certificates and PKCS#8 for the private keys. Both must be stored in PEM format."). I am not sure what the solution is. Have you tried the above approaches? Any insights will be greatly appreciated from anyone trying to get a working containerized build of Graylog and this damn Wazuh platform lol
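(On the format requirement quoted above: converting an existing private key to PKCS#8 PEM is a single openssl step; key.pem and key-pkcs8.pem are just placeholder names here:)

# convert an existing private key to an unencrypted PKCS#8 key in PEM format
openssl pkcs8 -topk8 -nocrypt -in key.pem -out key-pkcs8.pem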

Hey @gHost
Yeah, with containers I did exactly what you did. As for the Java default keystore, "cacerts" was what I used.
Example:

environment:
  GRAYLOG_JAVA_OPTS: '... -Djavax.net.ssl.trustStore=/srv/custom_keystore/cacerts -Djavax.net.ssl.trustStorePassword=changeit'
  GRAYLOG_HEAP_SIZE: '8g'

volumes:
  - /path/to/keystore/cacerts:/srv/custom_keystore/cacerts
  - graylog-shared:/data/shared

A couple of tricks I have found using self-signed certs: I used the Java default keystore, and I placed the certs in the Graylog home directory because Graylog owns that directory, which made access easier (i.e., chown graylog:graylog -R /etc/graylog) and ensured permissions were set. As for a container, see the sketch below.
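Roughly, building that custom cacerts on the host looks like this (the default keystore location varies by JVM, and the paths and alias are just placeholders):

# copy the JVM's default trust store so the original stays untouched
cp "$JAVA_HOME/lib/security/cacerts" /path/to/keystore/cacerts
# import the Wazuh root CA so the Graylog JVM trusts the indexer's certificate
keytool -importcert -keystore /path/to/keystore/cacerts -storepass changeit -alias wazuh-root-ca -file root-ca.pem -noprompt

That is the file the volume line above mounts into the container at /srv/custom_keystore/cacerts.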

Thank you for such a quick response!

OK, first thing I notice is that I do not have GRAYLOG_HEAP_SIZE: '8g'. I will have to research what this is - is it needed for a proper deployment?

Also, I know the first declaration is my local directory for the certs and the next is for the container's. Does it specifically need to be in '/srv/custom_keystore/cacerts' because of Graylog, or is this personal practice?

volumes:
  - /this/is/the/wazuh/location/of/certs:/srv/custom_keystore/cacerts

As for signing the certs, that is still a tad bit confusing - I have been utilizing: Using HTTPS

For trying to sign the cert/key {still confused about that}, my approach was to try to use the commands in the documentation against the root-ca.pem & root-ca.key {still learning a lot about certs}.
Is there a way to adapt the commands given to get the certs/keys into the proper format?

The Wazuh build that I am going off of is based on this document: Wazuh Docker deployment - Deployment on Docker · Wazuh documentation

I also did see that they have a scripting tool that generates all the certs; I tried to go off that code and still no joy.

Attached is a pastebin: # Wazuh App Copyright (C) 2017, Wazuh Inc. (License GPLv2) - Pastebin.com

It is a basic configuration of the wazuh-graylog-mongo stack.

Again, thank you very much for your timely response and information. :slight_smile:

EDIT: You will see the comments in the code where my brain went to mush - my apologies lol. As for the section "A couple of tricks I have found using self-signed certs… (i.e., chown graylog:graylog -R /etc/graylog)… As for a container" - would this be accomplished by using "command" in the docker-compose, so I can make sure that happens every time the container comes down?

Hey @gHost

Damn, that's a huge config :laughing: I haven't used Wazuh, but Graylog can connect to your indexer with username/password:

elasticsearch_hosts = https://node1:9200,https://user:password@node2:19200

So using certs with Wazuh/OpenSearch is probably not going to work. Hence why the documentation says to set plugins.security.disabled: true.
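For reference, the two shapes the Graylog side can take (addresses and passwords below are placeholders):

# security plugin disabled on the indexer: plain HTTP, no credentials needed
elasticsearch_hosts = http://xx.xx.xx.xx:9200
# security plugin left on: HTTPS plus credentials, and the Graylog JVM also has to trust the indexer's CA
# (e.g. a trust store passed via GRAYLOG_SERVER_JAVA_OPTS in /etc/default/graylog-server on a package install)
elasticsearch_hosts = https://graylog:yourpassword@xx.xx.xx.xx:9200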

That was copied by mistake, sorry.

Here is an example of mine, and yeah, it's an old one HAAHA. You will also see my Graylog volume is using the configuration file, in case I need to reboot, etc… all I did was copy the original Graylog config and place it in the home directory for docker-compose to pick it up.

version: '2'
services:
   # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:4
    network_mode: bridge
   # DB in share for persistence
    volumes:
      - mongo_data:/data/db
   # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    network_mode: bridge
    #data folder in share for persistence
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
   # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:4.2-jre11
    network_mode: bridge
    dns:
      - 192.168.2.15
      - 192.168.2.16
   # journal and config directories in local NFS share for persistence
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - graylog_bin:/usr/share/graylog/bin
      - graylog_data:/usr/share/graylog/data
      - graylog_server.conf:/usr/share/graylog/graylog.conf
    environment:
      # Container time Zone
      - TZ=America/Chicago
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=pJod1TRZAckHmqM2oQPqX1qnLVJS99jHm2DuCux2Bpiuu2XLTZuyb2YW9eHiKLTifjy7cLpeWI
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f
      - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
      - GRAYLOG_HTTP_EXTERNAL_URI=http://192.168.1.28:9000/
      - GRAYLOG_ROOT_TIMEZONE=America/Chicago
      - GRAYLOG_ROOT_EMAIL=greg.smith@domain.com
      - GRAYLOG_HTTP_PUBLISH_URI=https://192.168.1.28:9000/
      - GRAYLOG_TRANSPORT_EMAIL_PROTOCOL=smtp
      - GRAYLOG_HTTP_ENABLE_CORS=true
      - GRAYLOG_TRANSPORT_EMAIL_WEB_INTERFACE_URL=http://192.168.1.28:9000/
      - GRAYLOG_TRANSPORT_EMAIL_HOSTNAME=192.168.1.28
      - GRAYLOG_TRANSPORT_EMAIL_ENABLED=true
      - GRAYLOG_TRANSPORT_EMAIL_PORT=25
      - GRAYLOG_TRANSPORT_EMAIL_USE_AUTH=false
      - GRAYLOG_TRANSPORT_EMAIL_USE_TLS=false
      - GRAYLOG_TRANSPORT_EMAIL_USE_SSL=false
      - GRAYLOG_TRANSPORT_FROM_EMAIL=root@localhost
      - GRAYLOG_TRANSPORT_SUBJECT_PREFIX=[graylog]      
      
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 8514:8514
      # Syslog UDP
      - 8514:8514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
      # Reports
      - 9515:9515
      - 9515:9515/udp
      # email
      - 25:25
      - 25:25/udp     
       
#Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
  graylog_bin:
    driver: local
  graylog_data:
    driver: local

By using the GL config file you may use fewer env variables in the compose file.
As for certificates, I know the documentation has a lot of info and most blogs/docs may not help everyone; you need to fine-tune it to your environment. Here is also an example of what I use; it may or may not help.
This sums up the documentation as I did it years ago; TBH it might not work for you.

Create a file named openssl-graylog.cnf with the following content:

[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
# Details about the issuer of the certificate
[req_distinguished_name]
C = US
ST = iowa
L = cedar rapids
O = enseva
OU = admin
CN = graylog.domain.com
[v3_req]
keyUsage = keyEncipherment, dataEncipherment,nonRepudiation
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
# IP addresses and DNS names the certificate should include
# Use IP.### for IP addresses and DNS.### for DNS names,
# with "###" being a consecutive number.
[alt_names]
IP.1 = my_ip_address
DNS.1 = graylog.domain.com

mkdir /etc/ssl/certs/graylog/ && cd /etc/ssl/certs/graylog/

# Self-signed certificate plus a PKCS#5 private key
openssl req -x509 -days 1095 -nodes -newkey rsa:2048 -config openssl-graylog.cnf -keyout pkcs5-plain.pem -out cert.pem
# Convert the key to PKCS#8 (unencrypted and encrypted variants)
openssl pkcs8 -in pkcs5-plain.pem -topk8 -nocrypt -out pkcs8-plain.pem
openssl pkcs8 -in pkcs5-plain.pem -topk8 -out pkcs8-encrypted.pem -passout pass:secret
# Alternative route: generate a key and CSR, then a self-signed cert
openssl req -config openssl-graylog.cnf -out graylog.csr -new -newkey rsa:2048 -nodes -keyout graylog.key
openssl req -x509 -sha512 -nodes -days 1095 -newkey rsa:2048 -config openssl-graylog.cnf -keyout graylog.key -out graylog.crt
openssl req -config openssl-graylog.cnf -out graylog.csr -key graylog.key -new
openssl x509 -x509toreq -in graylog.crt -out graylog.csr -signkey graylog.key
# Bundle cert and key into PKCS#12, then extract a PEM certificate and a PKCS#8 key from it
openssl pkcs12 -export -in graylog.crt -inkey graylog.key -out keystore.pfx
openssl pkcs12 -in keystore.pfx -nokeys -out graylog-certificate.pem
openssl pkcs12 -in keystore.pfx -nocerts -out graylog-pkcs5.pem
openssl pkcs8 -in graylog-pkcs5.pem -topk8 -out graylog-key.pem
# Import the certificate into a Java keystore, verify it, and convert the keystore to PKCS#12
keytool -import -trustcacerts -file graylog.crt -alias graylog.domain.com -keystore graylog_keystore.jks -storepass secret

keytool -list -v -keystore graylog_keystore.jks -alias graylog.domain.com
keytool -importkeystore -srckeystore graylog_keystore.jks -destkeystore keystore.p12 -deststoretype PKCS12

openssl pkcs12 -in keystore.p12 -nokeys -out graylog-certificate.pem
openssl pkcs8 -in graylog-pkcs5.pem -topk8 -out graylog-key.pem

# Copy the JVM's default cacerts and import the self-signed cert so Java trusts it
# (the default cacerts password is "changeit"; use "secret" or your own value if you changed it)
cp -a "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.el7.x86_64/jre/lib/security/cacerts" /etc/ssl/certs/graylog/graylog.jks

keytool -importcert -keystore graylog.jks -storepass changeit -alias graylog.domain.com -file cert.pem

The Graylog Docker container runs as UID 1100.
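So anything you mount in for it, like a custom keystore, needs to be readable by that user; on the host that is something like this (the path is just an example):

# give the container's graylog user (UID/GID 1100) ownership of the mounted keystore
chown -R 1100:1100 /path/to/keystore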

That's all I have for ya.

Sorry I can't be more help.

Maybe this YouTube video can help (posted today on YouTube):

It's from the same guy that made the video originally mentioned in this post.
It's for Wazuh 4.4 and Graylog 5.

Regards,
Alejandro


Thank you hella, man, and what timing - I just did a diff comparison against the Wazuh certs tool that he tweaked and am trying to get this thing spun up. @aguida79, have you been able to successfully deploy Graylog with Wazuh via a Docker deployment with SSL? Thank you again man :slight_smile:

Sorry @gHost, I haven't tried the installation yet; I only saw that new video, remembered your post, and added the information here, that's all. But I'm confident about the videos that the SOCFortress guy does; they are always good stuff.

Regards,
Alejandro
