Beats Input - Show received messages is empty - Graylog-Appliance


I am testing the Graylog appliance “graylog-3.3.14-1.ova” and I am also a Linux newbie.
The “Syslog UDP” input (fed directly from the Graylog server itself) displays messages correctly.
The firewall is inactive.

Messages from the Beats input on a Windows server are not displayed, even though you can see the network counter counting up.

There is data coming in here:

Throughput / Metrics
1 minute average rate: 0 msg/s
Network IO: 0B 0B (total: 1.1MiB 0B )
Active connections: 0 (9,444 total)
Empty messages discarded: 0
 0b63ce27 / graylog
Network IO: 0B 0B (total: 1.1MiB 0B )
Active connections: 0 (9,444 total)
Empty messages discarded: 0

This is what the beats input looks like.

no_beats_prefix: true
number_worker_threads: 2
override_source: <empty>
port: 5044
recv_buffer_size: 1048576
tcp_keepalive: false
tls_cert_file: <empty>
tls_client_auth: disabled
tls_client_auth_cert_file: <empty>
tls_enable: false
tls_key_file: <empty>

The time zones are set like this:

User admin:       2021-11-17 14:42:42 +01:00
Your web browser: 2021-11-17 14:42:42 +01:00
Graylog server:   2021-11-17 14:42:42 +01:00

What else do I need to set for displaying the received messages?


Hi, merida

First, check whether the traffic (messages) from the Windows server reaches the Graylog server.
Is port 5044 open?
sudo tcpdump -i any '(udp or tcp) port 5044'
sudo netstat -peanut | grep ":5044"
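On systems where `netstat` is not installed, `ss` from iproute2 gives the same listener check. A sketch (the fallback message is only illustrative; run with sudo to see the owning process name):

```shell
# Look for a listener on the Beats port; print a note if none is found.
ss -tlnp 2>/dev/null | grep ':5044' || echo "nothing listening on 5044"
```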

Hi bahram,
Traffic from the Windows server arrives.

ubuntu@graylog:~$ sudo tcpdump -i ens160 -q tcp port 5044
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes
15:14:34.064431 IP > graylog.5044: tcp 0
15:14:34.064473 IP graylog.5044 > tcp 0
15:14:34.064614 IP > graylog.5044: tcp 0
15:14:34.065603 IP > graylog.5044: tcp 125
15:14:34.065614 IP graylog.5044 > tcp 0
15:14:34.066454 IP graylog.5044 > tcp 0

ubuntu@graylog:~$ sudo netstat -peanut | grep ":5044"
tcp6       0      0 :::5044                 :::*                    LISTEN      111        19517      1087/java
tcp6       0      0    TIME_WAIT   0          0          -
tcp6       0      0    TIME_WAIT   0          0          -

In the Throughput / Metrics section, the data rate also increases.

Throughput / Metrics
Network IO: 0B 0B (total: 1.2MiB 0B )

Did you configure Graylog Sidecar?
Does `systemctl status elasticsearch` show it as running?

No, sidecar is not installed.


As @bahram suggested

What is the output of that command?

If you're using Winlogbeat/Filebeat, can you show us your configuration?


Here is the result of `systemctl status elasticsearch`:

ubuntu@graylog:~$ systemctl status elasticsearch
   elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-11-17 14:02:07 CET; 5 days ago
 Main PID: 488 (java)
    Tasks: 47 (limit: 4676)
   CGroup: /system.slice/elasticsearch.service
           └─488 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headle

Nov 17 14:01:57 graylog systemd[1]: Starting Elasticsearch...
Nov 17 14:02:07 graylog systemd[1]: Started Elasticsearch.

and the winlogbeat-config:
(I have configured only the Logstash Output.)

###################### Winlogbeat Configuration Example ########################

# This file is an example configuration file highlighting only the most common
# options. The winlogbeat.reference.yml file from the same directory contains
# all the supported options with more comments. You can use it as a reference.
# You can find the full configuration reference in the Winlogbeat documentation.

# ======================== Winlogbeat specific options =========================

# event_logs specifies a list of event logs to monitor as well as any
# accompanying options. The YAML data type of event_logs is a list of
# dictionaries.
# The supported keys are name (required), tags, fields, fields_under_root,
# forwarded, ignore_older, level, event_id, provider, and include_xml. Please
# visit the documentation for the complete details of each option.

winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h

  - name: System

  - name: Security
    processors:
      - script:
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js

  - name: Microsoft-Windows-Sysmon/Operational
    processors:
      - script:
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js

  - name: Windows PowerShell
    event_id: 400, 403, 600, 800
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: Microsoft-Windows-PowerShell/Operational
    event_id: 4103, 4104, 4105, 4106
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: ForwardedEvents
    tags: [forwarded]
    processors:
      - script:
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js
      - script:
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

# ====================== Elasticsearch template settings =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the
# website.

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.

setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Winlogbeat with the Elastic Cloud.

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: [""]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Winlogbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Winlogbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the winlogbeat.
#instrumentation:
    # Set to true to enable instrumentation of winlogbeat.
    #enabled: false

    # Environment in which winlogbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:
# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Belt and braces, I would first test the input by running the following command on the Graylog server itself:

echo "Hello Beats Input, please work locally. Regards, Merida" | nc -w 1 -u [Graylog host IP or name] 5044

And then run the same command from the server on which your Filebeat instance is installed:

echo "Hello Beats Input, please work from remote machine. Regards, Merida" | nc -w 1 -u [Graylog host IP or name] 5044

If both of those work, you know network connectivity is not the issue. The next most likely issue is that Filebeat does not actually have permission to read the logs you are trying to ingest.

Adjust the permissions (chmod) of those log files appropriately and restart Filebeat.
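As a sketch of that permission fix, demonstrated on a throwaway file (the path is a placeholder; on a real system you would use your actual log path and follow up with `sudo systemctl restart filebeat`):

```shell
# Grant world-read on a log file so Filebeat can open it.
# /tmp/perm-demo.log is a placeholder; substitute your real log file.
logfile=/tmp/perm-demo.log
echo "sample line" > "$logfile"
chmod o+r "$logfile"          # grant read to "others"
stat -c '%A' "$logfile"       # the final permission triplet should now include r
```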

Filebeat also writes its own log file; it might be worth looking at that.

Graylog's log file might also be useful.



when running the command on the Graylog server:

echo "Hello Beats Input, please work locally. Regards, Merida" | nc -w 1 -u 5044

Only the character “>” comes up.

I think this is not the right output, right? (I am a Linux beginner!)
Did I write the command wrong here?



A couple of things you can comment out in your configuration file, unless you need them, are the following:

  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index.number_of_shards: 1

I have a basic setting that works in my lab.

output.logstash:
  hosts: [""]

path:
  data: C:\Program Files\Graylog\sidecar\cache\winlogbeat\data
  logs: C:\Program Files\Graylog\sidecar\logs

winlogbeat.event_logs:
  - name: Application
  - name: System
  - name: Security

This section I had to add is for my logging, so if there is a problem I can grab the logs to find out what going wrong or to fix an issue. I did create the directory before starting winlogbeat service and make sure WinLogBeat service has access to the directory.

logging.level: debug   # <-- once you're all set, change this to info
logging.to_files: true
logging.files:
  path: C:\winlogbeat\logs
  name: winlogbeat.log

Since you're a beginner, make sure the indents in your winlogbeat.yml file are correct. YAML files are touchy about indents and spaces. If you get logging to work for winlogbeat, there might be something in the log that can help. Here are some more steps you can take.

NOTE: run PowerShell as administrator; it will help.
PS C:\Program Files\Winlogbeat> .\winlogbeat.exe test config -c .\winlogbeat.yml -e

If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example:

PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-winlogbeat.ps1


You won't see any message in the terminal after you send that command, but a message should reach Graylog on that port. The `>` you saw is the shell's continuation prompt: it means the shell thinks the command is unfinished, typically because of copy-pasted curly quotes or a missing argument. Retype the command with straight quotes and put the host before the port, e.g. `nc -w 1 -u localhost 5044` when running it directly on the Graylog server.


I have commented out everything in the winlogbeat configuration file except the Logstash output and the event logs.

After that, it worked.
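For reference, the working setup reduces to something like the following minimal winlogbeat.yml (a sketch; "graylog.example.org" is a placeholder for the actual Graylog host):

```yaml
# Minimal sketch: only event logs plus the Logstash/Beats output enabled.
winlogbeat.event_logs:
  - name: Application
  - name: System
  - name: Security

output.logstash:
  hosts: ["graylog.example.org:5044"]   # placeholder host, Beats input port
```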

Thanks to all for the help.

Hello @merida
Nice, I'm glad you resolved your issue. If you could mark this topic as resolved, it would help others in future searches. 🙂

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.