Filebeat configuration

Hi,

Please, how can I configure Filebeat to send logs to Graylog?


You configure Filebeat just as you normally would (see https://www.elastic.co/guide/en/beats/filebeat/6.3/filebeat-getting-started.html) with a Logstash output and create a Beats input in Graylog.
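For reference, a minimal sketch of the Logstash output section in filebeat.yml, assuming the Beats input in Graylog listens on port 5044 (the host is a placeholder for your Graylog node):

output.logstash:
  # Address of the Graylog node running the Beats input (placeholder host)
  hosts: ["<graylog-host>:5044"]

The Beats input itself is created in the Graylog web interface under System / Inputs and has to listen on the same port that Filebeat sends to.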

It doesn’t work:

This is the Graylog input:

And this is my output:

output.logstash:
  # The Logstash hosts
  hosts: ["172.16.250.30:5044", "172.16.250.29:5044"]

I could see logs on Kibana but not on Graylog.

Please help me, I want to use Graylog!

I deleted one output and kept just the one pointing to the Graylog server, because the documentation says:

“The list of known Logstash servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly”

But it still doesn’t work!

I have been waiting for more than an hour for logs to show up, but there is still nothing!

Heyo @asalma,

There can’t be anything to see, because Graylog is not receiving anything (look at the stats on the right of the input settings).

Is there any firewall between the sender and Graylog blocking the port? Is the local firewall of the server preventing data from being sent? Is there any error message in the Filebeat or Graylog logs that could relate to this?

Greetings,
Philipp

If there is a firewall, why did I receive logs in Kibana?

These are the Filebeat logs:

2018-06-15T14:54:51.998+0200    INFO    instance/beat.go:492    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/lo$
2018-06-15T14:54:51.999+0200    INFO    instance/beat.go:499    Beat UUID: c2d4f63c-b36a-42f3-8932-d733b54408f3
2018-06-15T14:54:51.999+0200    INFO    [beat]  instance/beat.go:716    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat$
2018-06-15T14:54:51.999+0200    INFO    [beat]  instance/beat.go:725    Build info      {"system_info": {"build": {"commit": "a04cb664d5fbd4b1aab485d1766f3979c138fd38", "libbea$
2018-06-15T14:54:51.999+0200    INFO    [beat]  instance/beat.go:728    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.9.4"}}}
2018-06-15T14:54:52.001+0200    INFO    [beat]  instance/beat.go:732    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2018-04-19T18:13:50+02:00$
2018-06-15T14:54:52.002+0200    INFO    [beat]  instance/beat.go:761    Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","d$
2018-06-15T14:54:52.002+0200    INFO    instance/beat.go:225    Setup Beat: filebeat; Version: 6.3.0
2018-06-15T14:54:52.003+0200    INFO    pipeline/module.go:81   Beat name: frghcslnetv03
2018-06-15T14:54:52.102+0200    INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-06-15T14:54:52.102+0200    INFO    instance/beat.go:315    filebeat start running.
2018-06-15T14:54:52.103+0200    INFO    registrar/registrar.go:112      Loading registrar data from /var/lib/filebeat/registry
2018-06-15T14:54:52.116+0200    INFO    registrar/registrar.go:123      States Loaded from registrar: 0
2018-06-15T14:54:52.117+0200    WARN    beater/filebeat.go:354  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output$
2018-06-15T14:54:52.117+0200    INFO    crawler/crawler.go:48   Loading Inputs: 3
2018-06-15T14:54:52.117+0200    INFO    log/input.go:111        Configured paths: [/var/log/network/FWEquin_WAN_R03-01M.log]
2018-06-15T14:54:52.117+0200    INFO    input/input.go:87       Starting input of type: log; ID: 5206743805792494089
2018-06-15T14:54:52.118+0200    INFO    crawler/crawler.go:109  Stopping Crawler
2018-06-15T14:54:52.118+0200    INFO    crawler/crawler.go:119  Stopping 1 inputs
2018-06-15T14:54:52.119+0200    INFO    log/input.go:411        Scan aborted because input stopped.
2018-06-15T14:54:52.119+0200    INFO    input/input.go:121      input ticker stopped
2018-06-15T14:54:52.119+0200    INFO    input/input.go:138      Stopping Input: 5206743805792494089
2018-06-15T14:54:52.119+0200    INFO    crawler/crawler.go:135  Crawler stopped
2018-06-15T14:54:52.119+0200    INFO    registrar/registrar.go:243      Stopping Registrar
2018-06-15T14:54:52.119+0200    INFO    registrar/registrar.go:169      Ending Registrar
2018-06-15T14:54:52.132+0200    INFO    [monitoring]    log/log.go:132  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":110,"time":{"ms":11$
2018-06-15T14:54:52.132+0200    INFO    [monitoring]    log/log.go:133  Uptime: 150.231045ms
2018-06-15T14:54:52.132+0200    INFO    [monitoring]    log/log.go:110  Stopping metrics logging.
2018-06-15T14:54:52.132+0200    INFO    instance/beat.go:321    filebeat stopped.
2018-06-15T14:54:52.135+0200    ERROR   instance/beat.go:691    Exiting: Error in initing input: can not convert 'string' into 'bool' accessing 'filebeat.inputs.1.enabled' (sou$
Exiting: Error in initing input: can not convert 'string' into 'bool' accessing 'filebeat.inputs.1.enabled' (source:'/etc/filebeat/filebeat.yml')

Where can I find the Graylog logs (the path, please)?

Was Kibana on the same server?

Have a look at the log; the following line states your problem:

2018-06-15T14:54:52.135+0200    ERROR   instance/beat.go:691    Exiting: Error in initing input: can not convert 'string' into 'bool' accessing 'filebeat.inputs.1.enabled' (sou$
Exiting: Error in initing input: can not convert 'string' into 'bool' accessing 'filebeat.inputs.1.enabled' (source:'/etc/filebeat/filebeat.yml')

Filebeat is not even starting; that’s why Graylog is not receiving anything.

Logs are commonly found at /var/log/graylog-server/, but have a look at http://docs.graylog.org/en/2.4/pages/configuration/file_location.html for more info :slight_smile:

[root@frghcslnetv03 filebeat]# service filebeat start
Starting filebeat: 2018-06-15T15:16:33.632+0200 INFO    instance/beat.go:492    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-06-15T15:16:33.633+0200    INFO    instance/beat.go:499    Beat UUID: c2d4f63c-b36a-42f3-8932-d733b54408f3
2018-06-15T15:16:33.633+0200    INFO    [beat]  instance/beat.go:716    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "c2d4f63c-b36a-42f3-8932-d733b54408f3"}}}
2018-06-15T15:16:33.633+0200    INFO    [beat]  instance/beat.go:725    Build info      {"system_info": {"build": {"commit": "a04cb664d5fbd4b1aab485d1766f3979c138fd38", "libbeat": "6.3.0", "time": "2018-06-11T22:34:44.000Z", "version": "6.3.0"}}}
2018-06-15T15:16:33.633+0200    INFO    [beat]  instance/beat.go:728    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.9.4"}}}
2018-06-15T15:16:33.637+0200    INFO    [beat]  instance/beat.go:732    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2018-04-19T18:13:50+02:00","containerized":true,"hostname":"frghcslnetv03","ips":["127.0.0.1/8","::1/128","172.16.250.10/24","fe80::250:56ff:fe8e:53c9/64","10.153.1.250/24","fe80::250:56ff:fe8e:53ca/64"],"kernel_version":"2.6.32-573.22.1.el6.centos.plus.x86_64","mac_addresses":["00:50:56:8e:53:c9","00:50:56:8e:53:ca"],"os":{"family":"redhat","platform":"centos","name":"CentOS","version":"6.9 (Final)","major":6,"minor":9,"patch":0,"codename":"Final"},"timezone":"CEST","timezone_offset_sec":7200}}}
2018-06-15T15:16:33.638+0200    INFO    [beat]  instance/beat.go:761    Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40","41","42","43","44","45","46","47","48","49","50","51","52","53","54","55","56","57","58","59","60","61","62","63"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40","41","42","43","44","45","46","47","48","49","50","51","52","53","54","55","56","57","58","59","60","61","62","63"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40","41","42","43","44","45","46","47","48","49","50","51","52","53","54","55","56","57","58","59","60","61","62","63"],"ambient":null}, "cwd": "/", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 10990, "ppid": 10989, "seccomp": {"mode":""}, "start_time": "2018-06-15T15:16:32.600+0200"}}}
2018-06-15T15:16:33.639+0200    INFO    instance/beat.go:225    Setup Beat: filebeat; Version: 6.3.0
2018-06-15T15:16:33.641+0200    INFO    pipeline/module.go:81   Beat name: frghcslnetv03
Config OK
                                                           [  OK  ]

It did start, because I received logs in Kibana…

Kibana, Filebeat, and Graylog aren’t on the same server!

My /var/log/graylog-server/server.log is empty!

I need help to solve this, please!

I get this even when Filebeat is started:

[root@frghcslnetv03 filebeat]# service filebeat status
filebeat-god is stopped

Without the complete configuration of Graylog, Elasticsearch, and Filebeat, as well as the complete logs of these components we won’t be able to help you.

Also, what’s the complete output of the following command on the machine running Filebeat?

# service filebeat status ; ps -ef | grep filebeat
[root@frghcslnetv03 filebeat]#  service filebeat status ; ps -ef | grep filebeat
filebeat-god is stopped
root     16075  7388  0 16:40 pts/0    00:00:00 grep filebeat

Do I have to send you elasticsearch.yml, filebeat.yml, and server.conf?

So, Filebeat is not running, and the output of the service filebeat status command is correct.

Yes.

server.conf:


############################
# GRAYLOG CONFIGURATION FILE
############################
#
# This is the Graylog configuration file. The file has to use ISO 8859-1/Latin-1 character encoding.
# Characters that cannot be directly represented in this encoding can be written using Unicode escapes
# as defined in https://docs.oracle.com/javase/specs/jls/se8/html/jls-3.html#jls-3.3, using the \u prefix.
# For example, \u002c.
#
# * Entries are generally expected to be a single line of the form, one of the following:
#
# propertyName=propertyValue
# propertyName:propertyValue
#
# * White space that appears between the property name and property value is ignored,
#   so the following are equivalent:
#
# name=Stephen
# name = Stephen
#
# * White space at the beginning of the line is also ignored.
#
# * Lines that start with the comment characters ! or # are ignored. Blank lines are also ignored.
#
# * The property value is generally terminated by the end of the line. White space following the
#   property value is not ignored, and is treated as part of the property value.
#
# * A property value can span several lines if each line is terminated by a backslash (‘\’) character.
#   For example:
#
# targetCities=\
#         Detroit,\
#         Chicago,\
#         Los Angeles
#
#   This is equivalent to targetCities=Detroit,Chicago,Los Angeles (white space at the beginning of lines is ignored).
#
# * The characters newline, carriage return, and tab can be inserted with characters \n, \r, and \t, respectively.
#
# * The backslash character must be escaped as a double backslash. For example:
#
# path=c:\\docs\\doc1
#

# If you are running more than one instances of Graylog server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true
# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting Graylog server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
password_secret = 0pD0cotPKQ6qtzjA7cmh2Eknuu4fxLxVrGJq7TomcybKaykdhg9Qt3JlMW1ZVNalEAmeaxesstakAvMwACVum5t5UP8hU667

# The default root user is named 'admin'
root_username = admin

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = e3c652f0ba0b4801205814f8b6bc49672c4c74e25b497770bb89b22cdeb4e951

# The email address of the root user.
# Default is empty
#root_email = ""

# The time zone setting of the root user. See http://www.joda.org/joda-time/timezones.html for a list of valid time zones.
# Default is UTC
#root_timezone = UTC

# Set plugin directory here (relative or absolute)
plugin_dir = /usr/share/graylog-server/plugin

# REST API listen URI. Must be reachable by other Graylog server nodes if you run a cluster.
# When using Graylog Collectors, this URI will be used to receive heartbeat messages and must be accessible for all collectors.
rest_listen_uri = http://172.16.250.30:9000/api/

# REST API transport address. Defaults to the value of rest_listen_uri. Exception: If rest_listen_uri
# is set to a wildcard IP address (0.0.0.0) the first non-loopback IPv4 system address is used.
# If set, this will be promoted in the cluster discovery APIs, so other nodes may try to connect on
# this address and it is used to generate URLs addressing entities in the REST API. (see rest_listen_uri)
# You will need to define this, if your Graylog server is running behind a HTTP proxy that is rewriting
# the scheme, host name or URI.
# This must not contain a wildcard address (0.0.0.0).
#rest_transport_uri = http://192.168.1.1:9000/api/

# Enable CORS headers for REST API. This is necessary for JS-clients accessing the server directly.
# If these are disabled, modern browsers will not be able to retrieve resources from the server.
# This is enabled by default. Uncomment the next line to disable it.
#rest_enable_cors = false
# Enable GZIP support for REST API. This compresses API responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.
#rest_enable_gzip = false

# Enable HTTPS support for the REST API. This secures the communication with the REST API with
# TLS to prevent request forgery and eavesdropping. This is disabled by default. Uncomment the
# next line to enable it.
#rest_enable_tls = true

# The X.509 certificate chain file in PEM format to use for securing the REST API.
#rest_tls_cert_file = /path/to/graylog.crt

# The PKCS#8 private key file in PEM format to use for securing the REST API.
#rest_tls_key_file = /path/to/graylog.key

# The password to unlock the private key used for securing the REST API.
#rest_tls_key_password = secret

# The maximum size of the HTTP request headers in bytes.
#rest_max_header_size = 8192

# The size of the thread pool used exclusively for serving the REST API.
#rest_thread_pool_size = 16

# Comma separated list of trusted proxies that are allowed to set the client address with X-Forwarded-For
# header. May be subnets, or hosts.
#trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128

# Enable the embedded Graylog web interface.
# Default: true
#web_enable = false

# Web interface listen URI.
# Configuring a path for the URI here effectively prefixes all URIs in the web interface. This is a replacement
# for the application.context configuration parameter in pre-2.0 versions of the Graylog web interface.
web_listen_uri = http://172.16.250.30:9000/

# Web interface endpoint URI. This setting can be overriden on a per-request basis with the X-Graylog-Server-URL header.
# Default: $rest_transport_uri
#web_endpoint_uri = http://172.16.250.30

# Enable CORS headers for the web interface. This is necessary for JS-clients accessing the server directly.
# If these are disabled, modern browsers will not be able to retrieve resources from the server.
#web_enable_cors = false

# Enable/disable GZIP support for the web interface. This compresses HTTP responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.

filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log
  enabled: true
  paths:
    - /var/log/network/FWEquin_WAN_R03-01M.log
  fields:
    log_type: fortigate
- type: log
  enabled: true
    - /var/log/network/frghcfwdmz01m.log
  fields:
    log_type: paloalto
- type: log
  enabled: true
    - /var/log/network/frghcfwint01m-fwcommon-2.log
  fields:
    log_type: cisco_asa

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
 #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
 # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.16.250.30:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: network-logs
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: network-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /var/lib/elasticsearch

path.data: /data/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["172.16.250.29"]
#
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

That’s incomplete.

This looks like only one of multiple Elasticsearch nodes.

Please also provide the requested logs.

Yes, but the rest of the configuration is what comes by default!

This is the second node of my cluster, which has two nodes. Graylog is installed on this node; the other one is on another server!

What do you mean by the requested logs?

That doesn’t matter. Please provide all requested information.

This:

See http://docs.graylog.org/en/2.4/pages/configuration/file_location.html for hints where to find these log files.

Filebeat logs:

2018-06-15T15:50:23.133+0200    INFO    instance/beat.go:492    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path:$
2018-06-15T15:50:23.133+0200    INFO    instance/beat.go:499    Beat UUID: c2d4f63c-b36a-42f3-8932-d733b54408f3
2018-06-15T15:50:23.133+0200    INFO    [beat]  instance/beat.go:716    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib$
2018-06-15T15:50:23.133+0200    INFO    [beat]  instance/beat.go:725    Build info      {"system_info": {"build": {"commit": "a04cb664d5fbd4b1aab485d1766f3979c138fd38"$
2018-06-15T15:50:23.133+0200    INFO    [beat]  instance/beat.go:728    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go$
2018-06-15T15:50:23.135+0200    INFO    [beat]  instance/beat.go:732    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2018-04-19T18:13$
2018-06-15T15:50:23.135+0200    INFO    [beat]  instance/beat.go:761    Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["$
2018-06-15T15:50:23.135+0200    INFO    instance/beat.go:225    Setup Beat: filebeat; Version: 6.3.0
2018-06-15T15:50:23.136+0200    INFO    pipeline/module.go:81   Beat name: frghcslnetv03
2018-06-15T15:50:23.137+0200    INFO    instance/beat.go:315    filebeat start running.
2018-06-15T15:50:23.137+0200    INFO    registrar/registrar.go:75       No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2018-06-15T15:50:23.138+0200    INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-06-15T15:50:23.140+0200    INFO    registrar/registrar.go:112      Loading registrar data from /var/lib/filebeat/registry
2018-06-15T15:50:23.140+0200    INFO    registrar/registrar.go:123      States Loaded from registrar: 0
2018-06-15T15:50:23.140+0200    WARN    beater/filebeat.go:354  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsear$
2018-06-15T15:50:23.140+0200    INFO    crawler/crawler.go:48   Loading Inputs: 3
2018-06-15T15:50:23.141+0200    INFO    log/input.go:111        Configured paths: [/var/log/network/FWEquin_WAN_R03-01M.log]
2018-06-15T15:50:23.141+0200    INFO    input/input.go:87       Starting input of type: log; ID: 5206743805792494089
2018-06-15T15:50:23.141+0200    INFO    crawler/crawler.go:109  Stopping Crawler
2018-06-15T15:50:23.141+0200    INFO    crawler/crawler.go:119  Stopping 1 inputs
2018-06-15T15:50:23.144+0200    INFO    log/input.go:411        Scan aborted because input stopped.
2018-06-15T15:50:23.144+0200    INFO    input/input.go:121      input ticker stopped
2018-06-15T15:50:23.144+0200    INFO    input/input.go:138      Stopping Input: 5206743805792494089
2018-06-15T15:50:23.144+0200    INFO    crawler/crawler.go:135  Crawler stopped
2018-06-15T15:50:23.144+0200    INFO    registrar/registrar.go:243      Stopping Registrar
2018-06-15T15:50:23.144+0200    INFO    registrar/registrar.go:169      Ending Registrar
2018-06-15T15:50:23.149+0200    INFO    [monitoring]    log/log.go:132  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":$
2018-06-15T15:50:23.150+0200    INFO    [monitoring]    log/log.go:133  Uptime: 29.432466ms
2018-06-15T15:50:23.150+0200    INFO    [monitoring]    log/log.go:110  Stopping metrics logging.
2018-06-15T15:50:23.150+0200    INFO    instance/beat.go:321    filebeat stopped.
2018-06-15T15:50:23.153+0200    ERROR   instance/beat.go:691    Exiting: Error in initing input: can not convert 'string' into 'bool' accessing 'filebeat.inputs.1.enab$
Exiting: Error in initing input: can not convert 'string' into 'bool' accessing 'filebeat.inputs.1.enabled' (source:'/etc/filebeat/filebeat.yml')
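For the record, the error points at the second input (filebeat.inputs.1) in /etc/filebeat/filebeat.yml: in the posted configuration, the second and third inputs have no paths: key, so the path entries get folded into the value of enabled:, which is why Filebeat complains that it cannot convert 'string' into 'bool'. A sketch of how that section would likely look with the missing keys added (paths and log_type values taken from the posted filebeat.yml):

filebeat.inputs:
- type: log
  enabled: true
  # Each input needs its own paths: list; without it the entries below
  # attach to the previous key and break the YAML structure.
  paths:
    - /var/log/network/FWEquin_WAN_R03-01M.log
  fields:
    log_type: fortigate
- type: log
  enabled: true
  paths:
    - /var/log/network/frghcfwdmz01m.log
  fields:
    log_type: paloalto
- type: log
  enabled: true
  paths:
    - /var/log/network/frghcfwint01m-fwcommon-2.log
  fields:
    log_type: cisco_asa

Once the file parses, service filebeat start should leave Filebeat running, which can be verified with service filebeat status as above.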