Configuring MetricBeat Input Error


1. Describe your incident:
Just installed Metricbeat and I'm trying to set up a Beats input. When I run .\metricbeat.exe -e, it won't connect to my localhost instance of Graylog (Docker).
Here’s an error from the running logs:
{"log.level":"error","@timestamp":"2022-08-01T13:57:07.873-0700","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get \"http://localhost:9200\": dial tcp [::1]:9200: connectex: No connection could be made because the target machine actively refused it.","service.name":"metricbeat","ecs.version":"1.6.0"}

2. Describe your environment:

  • OS Information:
    Windows 10 running Docker

  • Package Version:
    Graylog 4.3.3
Elasticsearch (oss) 7.10.2-amd64
    mongo latest

  • Service logs, configurations, and environment variables:
metricbeat.yml:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "admin"

My global Beats input configuration for Metricbeat:

  • bind_address: 0.0.0.0
  • no_beats_prefix: false
  • number_worker_threads: 8
  • override_source:
  • port: 5044
  • recv_buffer_size: 1048576
  • tcp_keepalive: false
  • tls_cert_file:
  • tls_client_auth: disabled
  • tls_client_auth_cert_file:
  • tls_enable: false
  • tls_key_file:
  • tls_key_password:

Hello && welcome @iamstubar

If you could use the Markdown when posting configurations and/or log files that would be appreciated.

On that note, I’m not sure what you have going on.

Well, it seems this is a network configuration issue. Here are a couple of suggestions.

If Metricbeat is on the same server, try using either the container IP address or 127.0.0.1 instead of localhost.

If Metricbeat is NOT on the same server as Docker, try using the IP address of the host (e.g., 192.168.1.100).

If you've already tried those, then try troubleshooting these configurations.

Graylog Configuration

http_bind_address = <ip address>:9000
http_publish_uri = http://graylog.domain.com:9000/

Elasticsearch configuration.

network.host: <IP Address>
http.port: 9200
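
Since everything here runs under docker-compose, those settings are usually passed as environment variables rather than edited inside the containers (the GRAYLOG_ prefix maps onto graylog.conf keys). A rough sketch, with the host IP left as a placeholder:

graylog:
  environment:
    - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
    - GRAYLOG_HTTP_PUBLISH_URI=http://<host-ip>:9000/
elasticsearch:
  environment:
    - network.host=0.0.0.0
    - http.port=9200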

EDIT: I forgot to mention, ensure you have the correct ports open on the Docker container.

Hi Gsmith,

Thanks for responding. Metricbeat is installed locally on my workstation, and Graylog, Elasticsearch, and MongoDB are running in a docker-compose configuration.
Graylog's IP: 172.27.0.4
Elasticsearch IP: 172.27.0.3
MongoDB IP: 172.27.0.2

Here’s my docker-compose.yml:

version: '2'
services:
  mongodb:
    image: "mongo:latest"
    volumes:
      - mongo_data:/data/db

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2-amd64
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g

  graylog:
    image: graylog/graylog:4.3.3
    volumes:
      - graylog_data:/usr/share/graylog/data
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=
      - GRAYLOG_ROOT_PASSWORD_SHA2=
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
    links:
      - mongodb:mongo
      - elasticsearch
    restart: always
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Metric Beat
      - 5044:5044
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp

volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_data:
    driver: local

I modified my metricbeat.yml:
output.elasticsearch:
  hosts: ["172.27.0.4:5044"]

The error changed to:
Error dialing dial tcp 172.27.0.4:5044: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.","service.name":"metricbeat","network":"tcp","address":"172.27.0.4:5044","ecs.version":"1.6.0"}
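
For what it's worth, output.elasticsearch speaks HTTP to an Elasticsearch REST API, while port 5044 on the Graylog side is a Beats (Lumberjack) input, so pointing the Elasticsearch output at it can't work. The working configuration at the end of this thread sends to Graylog through the Logstash output instead; a minimal sketch, assuming the input's port is published to the host:

# metricbeat.yml - ship to a Graylog Beats input via the Logstash output
#output.elasticsearch:          # leave the Elasticsearch output disabled
output.logstash:
  hosts: ["127.0.0.1:5044"]     # host-mapped port of the Graylog Beats input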

Hello
Thx for the additional info.

This looks odd; I'm not sure if you configured your Graylog configuration file or forgot to add the environment variables.

Example:

environment:
     # Container time Zone
     - TZ=America/Chicago
     # CHANGE ME (must be at least 16 characters)!
     - GRAYLOG_PASSWORD_SECRET=pJod1TRZAckHmqM2oQPqX1qnLVJS99jHm2DuCux2Bpiuu2XLTZuyb2YW9eHiKLTifjy7cLpeWIjWgMtnwZf6Q79HW2nonDhN
     # Password: admin
     - GRAYLOG_ROOT_PASSWORD_SHA2=ef9259911881f383d4473e94f
     - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
     - GRAYLOG_HTTP_EXTERNAL_URI=http://10.11.11.11:9000/ 
     - GRAYLOG_ROOT_TIMEZONE=America/Chicago
     - GRAYLOG_ROOT_EMAIL=greg.smith@domain.com
     - GRAYLOG_HTTP_PUBLISH_URI=http://10.11.11.11:9000/
     - GRAYLOG_TRANSPORT_EMAIL_PROTOCOL=smtp
     - GRAYLOG_HTTP_ENABLE_CORS=true

What this did for me was direct my log shippers to my GL Docker.

Example of remote device using FileBeat, port 5044

After looking at your YAML file, I think it's a configuration/network issue.

I’ve added the fields to my docker-compose.yml:
HTTP_BIND_ADDRESS=0.0.0.0:9000
EXTERNAL_URI=HTTP://172.27.0.4:9000/
PUBLISH_URI=HTTP://172.27.0.4:9000/

It boots fine, but the Graylog logs show this repeating:

graylog-graylog-1 | 2022-08-03 23:43:38,337 WARN : org.graylog2.shared.rest.resources.ProxiedResource - Unable to call http://172.27.0.4:9000/api/system/metrics/multiple on node : connect timed out
graylog-graylog-1 | 2022-08-03 23:43:38,459 WARN : org.graylog2.shared.rest.resources.ProxiedResource - Unable to call http://172.27.0.4:9000/api/system/inputstates on node : connect timed out

So to your point, it definitely seems like a networking issue. The problem is, I’m not sure exactly how these are supposed to be configured and interact with each other to make the proper changes.

Again - thanks for the assistance.

@iamstubar

For those two, try not to use your container IP address but the host address instead. Since you have a network link & bridge(?) in the compose file, you should be able to use something like this:

EXAMPLE:

EXTERNAL_URI=HTTP://192.168.1.100:9000/
PUBLISH_URI=HTTP://192.168.1.100:9000/
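
In the compose file those would be the full GRAYLOG_-prefixed variable names; a hedged example with a placeholder host address:

environment:
  - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
  - GRAYLOG_HTTP_EXTERNAL_URI=http://192.168.1.100:9000/
  - GRAYLOG_HTTP_PUBLISH_URI=http://192.168.1.100:9000/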

See if that works. Also, please use the markdown when posting configs/logs/files; it makes it easier to read and find issues.

EDIT: Look at mine, it may help; if not, post your docker-compose file here and use the markdown.

Some additional info:

services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:4.4
    network_mode: bridge

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2-amd64
    # image: opensearchproject/opensearch:1.3.2
    network_mode: bridge

  graylog:
    image: graylog/graylog-enterprise:4.3.3-jre11
    network_mode: bridge

It should look something like this. When you use "bridge", the containers connect to the default Docker network called "docker0"; this is your bridge.

docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:a1:12:58:d6 brd ff:ff:ff:ff:ff:ff
   inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

When you start Docker, a default bridge network (also called bridge) is created automatically, and newly-started containers connect to it unless otherwise specified.

Regardless of what I enter for EXTERNAL_URI and PUBLISH_URI, the same problem arises, so I've removed those two from the docker-compose file and it seems to work to a degree. Additionally, when I run 'docker network ls', I can see a newly created "graylog_default" bridge. I've made it a bit further and now run into two new repeating errors (Unknown beats protocol version: 69 and 71) in the Graylog logs when Metricbeat is running.

(channel [id: 0xf31b85eb, L:/172.25.0.4:5044 ! R:/172.25.0.1:45466]) (cause io.netty.handler.codec.DecoderException: java.lang.IllegalStateException: Unknown beats protocol version: 69)

My current docker-compose.yml:

-----------------------
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: "mongo:latest"
    volumes:
      - mongo_data:/data/db
   # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2-amd64
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:4.3.3
    volumes:
      - graylog_data:/usr/share/graylog/data
    environment:
      - GRAYLOG_PASSWORD_SECRET=
      - GRAYLOG_ROOT_PASSWORD_SHA2=
      - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 --  /docker-entrypoint.sh
    links:
        - mongodb:mongo
        - elasticsearch
    restart: always
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Logstash
      - 5044:5044
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_data:
    driver: local
-----------------------

In the image below, you can see how the Beats input is configured, as well as the numbers on the far right actually incrementing when I run metricbeat.exe -e.

Hello,
Thx for the added info,

I'm running Graylog 4.3 in Docker with no problems. I see some environment variables I have that you don't.

Here is my lab Docker-compose, maybe it will help you.

root@ansible:/usr/local/bin# cat docker-compose.yaml
version: '3'
services:
   # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:4.4
    network_mode: bridge
   # DB in share for persistence
    volumes:
      - mongo_data:/data/db
   # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.6/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2-amd64
    # image: opensearchproject/opensearch:1.3.2
    network_mode: bridge
    #data folder in share for persistence
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      #- network.publish_host=10.200.6.28
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
   # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog-enterprise:4.3.3-jre11
    network_mode: bridge
    dns:
      - 10.11.11.15
      - 10.11.11.16
   # journal and config directories in local NFS share for persistence
    volumes:       
       - graylog_bin:/usr/share/graylog/bin
       - graylog_data:/usr/share/graylog/data/config
       - graylog_log:/usr/share/graylog/data/log
       - graylog_plugin:/usr/share/graylog/data/plugin
       - graylog_content:/usr/share/graylog/data/contentpacks
      
    environment:
      # Container time Zone
      - TZ=America/Chicago     
      - GRAYLOG_PASSWORD_SECRET=pJod1TRZAckHZuyb2YWLpeWIjWnwZf6Q79HW2nonDhN       
      - GRAYLOG_ROOT_PASSWORD_SHA2=ef92b778bafe77
      - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
      - GRAYLOG_HTTP_EXTERNAL_URI=http://10.11.11.11:9000/
      - GRAYLOG_ROOT_TIMEZONE=America/Chicago
      - GRAYLOG_ROOT_EMAIL=greg.smith@enseva.com
      - GRAYLOG_HTTP_PUBLISH_URI=http://10.11.11.11:9000/
      - GRAYLOG_TRANSPORT_EMAIL_PROTOCOL=smtp
      - GRAYLOG_HTTP_ENABLE_CORS=true
      - GRAYLOG_TRANSPORT_EMAIL_WEB_INTERFACE_URL=http://10.11.11.11:9000/
      - GRAYLOG_TRANSPORT_EMAIL_HOSTNAME=10.11.11.11
      - GRAYLOG_TRANSPORT_EMAIL_ENABLED=true
      - GRAYLOG_TRANSPORT_EMAIL_PORT=25
      - GRAYLOG_TRANSPORT_EMAIL_USE_AUTH=false
      - GRAYLOG_TRANSPORT_EMAIL_USE_TLS=false
      - GRAYLOG_TRANSPORT_EMAIL_USE_SSL=false
      - GRAYLOG_TRANSPORT_FROM_EMAIL=root@localhost
      - GRAYLOG_TRANSPORT_SUBJECT_PREFIX=[graylog]
      - GRAYLOG_REPORT_DISABLE_SANDBOX=true
      - GRAYLOG_REPORT_RENDER_URI=http://10.11.11.11:9000
      # - GRAYLOG_REPORT_USER=graylog-report
      - GRAYLOG_REPORT_RENDER_ENGINE_PORT=9515
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 8514:8514
      # Elasticsearch
      - 9200:9200
      - 9300:9300
      # Syslog UDP
      - 8514:8514/udp
      # GELF TCP
      #- 12201:12201
      # GELF UDP
      - 12201:12201/udp
      # Reports
      - 9515:9515
      - 9515:9515/udp
      # beats
      - 5044:5044
      # email
      - 25:25
      - 25:25/udp
      # web
      - 80:80
      - 443:443
      - 21:21
      # Forwarder
      - 13302:13302
      - 13301:13301
      # keycloak
      - 8443:8443
      # packetbeat
      - 5055:5055
      # CEF Messages
      - 5555:5555
#Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
  graylog_bin:
    driver: local
  graylog_data:
    driver: local
  graylog_log:
    driver: local
  graylog_plugin:
    driver: local
  graylog_content:
    driver: local

I'm feeling this is definitely a network configuration issue; I think all the other containers are fine, maybe look into that.

Have you looked here?

@gsmith

I took your docker-compose and used it as my new template. What is 10.11.11.11 the IP address of? Are you pointing it at the Docker IP of Graylog or one of the other instances?

Seems like regardless of my docker-compose configuration, the outcome is pretty similar. I do get increments on the total I/O counter for my Beats input. Active connections increment as well, but still no readable logs.

In the Docker logs, I get this repeating:

graylog-graylog-1        | 2022-08-10 18:21:18,129 ERROR: org.graylog2.plugin.inputs.transports.AbstractTcpTransport - Error in Input [Beats/62eaf8506364b35681b91e47] (channel [id: 0xe9194073, L:/172.17.0.4:5044 ! R:/172.17.0.1:58782]) (cause io.netty.handler.codec.DecoderException: java.lang.IllegalStateException: Unknown beats protocol version: 71)
graylog-graylog-1        | 2022-08-10 18:21:18,130 ERROR: org.graylog2.plugin.inputs.transports.AbstractTcpTransport - Error in Input [Beats/62eaf8506364b35681b91e47] (channel [id: 0xe9194073, L:/172.17.0.4:5044 ! R:/172.17.0.1:58782]) (cause io.netty.handler.codec.DecoderException: java.lang.IllegalStateException: Unknown beats protocol version: 69)

Metricbeat output:
{"log.level":"info","@timestamp":"2022-08-10T11:22:52.979-0700","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":141},"message":"Attempting to reconnect to backoff(elasticsearch(http://127.0.0.1:5044)) with 8 reconnect attempt(s)","service.name":"metricbeat","ecs.version":"1.6.0"}

Hello,

I am not. That is the IP address of my server, not my container. Since I have a bridge, I don't need to use my container's IP address.

Ummm, you have the Elasticsearch output trying to connect to port 5044???

Seems like the beat is running with TLS enabled, while your input is configured without.
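
If it were a TLS mismatch, both ends would need matching settings; a hedged sketch (hostname and certificate paths are placeholders):

# Graylog side: set tls_enable, tls_cert_file and tls_key_file on the Beats input.
# Metricbeat side (metricbeat.yml), Logstash output with TLS toward that input:
output.logstash:
  hosts: ["graylog.example.org:5044"]
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]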

What does your new docker-compose configuration look like?

Well, I got it working! Unfortunately, I'm not sure exactly what it was, but both the docker-compose containers and Metricbeat are now running and communicating with each other. I appreciate the help; I know I didn't give you all the necessary info from the start because there were so many different iterations I was playing with.

I’ll post my final configuration so people in the future might find a helpful answer:

docker-compose.yml:

version: '2'
services:
   # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: "mongo:latest"
    network_mode: bridge
   # DB in share for persistence
    volumes:
      - mongo_data:/data/db
   # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.6/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2-amd64
    # image: opensearchproject/opensearch:1.3.2
    network_mode: bridge
    #data folder in share for persistence
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
   # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:4.3.3
    network_mode: bridge
    volumes:       
       - graylog_data:/usr/share/graylog/data
    environment:
      - GRAYLOG_PASSWORD_SECRET=
      - GRAYLOG_ROOT_PASSWORD_SHA2=
      - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
# There will be errors in the Graylog log where 0.0.0.0 resolves to 172.17.0.4:9000 (expected behavior, but it repeats a lot, so just an FYI).
# Strangely, it didn't work when I explicitly used http://172.17.0.4:9000, but I wonder if that's because running docker-compose down and up may have created a new network and bridge.
      - GRAYLOG_HTTP_EXTERNAL_URI=http://0.0.0.0:9000/
      - GRAYLOG_HTTP_PUBLISH_URI=http://0.0.0.0:9000/
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 --  /docker-entrypoint.sh
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 8514:8514
      # Elasticsearch
      - 9200:9200
      - 9300:9300
      # Syslog UDP
      - 8514:8514/udp
      # GELF TCP
      #- 12201:12201
      # GELF UDP
      - 12201:12201/udp
      # Reports
      - 9515:9515
      - 9515:9515/udp
      # beats
      - 5044:5044
      # email
      - 25:25
      - 25:25/udp
      # web
      - 80:80
      - 443:443
      - 21:21
      # Forwarder
      - 13302:13302
      - 13301:13301
      # keycloak
      - 8443:8443
      # packetbeat
      - 5055:5055
      # CEF Messages
      - 5555:5555
#Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
  graylog_bin:
    driver: local
  graylog_data:
    driver: local
  graylog_log:
    driver: local
  graylog_plugin:
    driver: local
  graylog_content:
    driver: local

metricbeat.yml:

###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

# =========================== Modules configuration ============================

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Metricbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: 127.0.0.1:5044

  # Protocol - either `http` (default) or `https`.
  #protocol: "http"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "admin"
  #password: "admin"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["127.0.0.1:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~


# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Metricbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Metricbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the metricbeat.
#instrumentation:
    # Set to true to enable instrumentation of metricbeat.
    #enabled: false

    # Environment in which metricbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true


Awesome, glad it was resolved.

No worries, it happens, but I would really like to know what happened.
