Sidecar cannot get the configuration for filebeat

Yes, I have created a global Beats input for it. Let me try disabling TLS between filebeat and the Beats input.

The collector remains in a failed state. No filebeat.yml is generated even after I disabled TLS for the Beats input and the collector output, and cleared all options related to keys and certificates.

Is there a known-good sample configuration for a collector running filebeat with TLS?

The fact that no filebeat.yml is generated indicates that the collector-sidecar can't communicate with Graylog; that is what you need to investigate.

Which part should I investigate? I have already made the configuration as simple as possible, keeping only TLS enabled on the Graylog REST and web interfaces.

My Beats input is configured as follows:

Name: mybeatsinput
Global: true
Binding address: 0.0.0.0
Port: 5044
Receive Buffer Size: 1048576
TCP keepalive: enabled

All other options of mybeatsinput are left at their defaults, and the input shows as 3 RUNNING.

The Beats output toglc of my collector configuration myntplogcollector is configured as:

Name: toglc
Type: [FileBeat] Beats output
Hosts: ['gl1.mylogs.com:5044','gl2.mylogs.com:5044','gl3.mylogs.com:5044']
Loadbalancing: enabled.

All other options are left at their defaults.

The Beats input fromntpcollectors of my collector configuration myntplogcollector is configured as:

Name: fromntpcollectors
Forward to: toglc [filebeat]
Type: [FileBeat] file input
Path to Logfile: ['/var/log/chrony/*.log']
Tail files: yes

All other options are left at their defaults.
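For reference, when the sidecar does manage to fetch this configuration, the file it renders to /etc/graylog/collector-sidecar/generated/filebeat.yml should look roughly like the sketch below. This is illustrative only, derived from the toglc output and fromntpcollectors input above; the exact structure depends on the sidecar and filebeat versions.

```yaml
# Sketch of the filebeat.yml the sidecar would render (illustrative only)
filebeat:
  prospectors:
  - input_type: log
    paths:
    - /var/log/chrony/*.log
    tail_files: true
output:
  logstash:
    hosts:
    - gl1.mylogs.com:5044
    - gl2.mylogs.com:5044
    - gl3.mylogs.com:5044
    loadbalance: true
path:
  data: /var/cache/graylog/collector-sidecar/filebeat/data
  logs: /var/log/graylog/collector-sidecar
```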

The configuration updated successfully with the tags linux, ntp, and chronyd, applied in three update operations, one tag at a time.

The collector status shows as failing, but the log files are listed here:

Log Files

Recently modified files will be highlighted in blue.

Modified	Size	Path
2018-02-15 01:47:56	72	  /var/log/chrony
2018-02-16 16:19:44	174264	  /var/log/chrony/measurements.log
2018-02-16 16:19:44	148122	  /var/log/chrony/statistics.log
2018-02-16 16:19:44	125631	  /var/log/chrony/tracking.log

On the sidecar side I installed:

#rpm -qa|grep sidecar
collector-sidecar-0.1.4-1.x86_64

# pwd
/etc/graylog/collector-sidecar
# cat collector_sidecar.yml
server_url: https://gl1.mylogs.com:9000/api/
update_interval: 10
tls_skip_verify: true
send_status: true
list_log_files: /var/log/chrony
node_id: clr.mylogs.com
collector_id: file:/etc/graylog/collector-sidecar/collector-id
cache_path: /var/cache/graylog/collector-sidecar
log_path: /var/log/graylog/collector-sidecar
log_rotation_time: 86400
log_max_age: 604800
tags:
    - linux
    - ntp
    - chronyd
backends:
    - name: nxlog
      enabled: false
      binary_path: /usr/bin/nxlog
      configuration_path: /etc/graylog/collector-sidecar/generated/nxlog.conf
    - name: filebeat
      enabled: true
      binary_path: /usr/bin/filebeat
      configuration_path: /etc/graylog/collector-sidecar/generated/filebeat.yml

The sidecar's log output is:

# cat collector_sidecar.log
time="2018-02-16T16:00:57+08:00" level=info msg="Starting signal distributor" 
time="2018-02-16T16:00:57+08:00" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-02-16T16:00:58+08:00" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 1/3." 
time="2018-02-16T16:00:58+08:00" level=info msg="[filebeat] Stopping" 
time="2018-02-16T16:01:00+08:00" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-02-16T16:01:01+08:00" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 2/3." 
time="2018-02-16T16:01:01+08:00" level=info msg="[filebeat] Stopping" 
time="2018-02-16T16:01:03+08:00" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-02-16T16:01:04+08:00" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 3/3." 
time="2018-02-16T16:01:04+08:00" level=info msg="[filebeat] Stopping" 
time="2018-02-16T16:01:06+08:00" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-02-16T16:01:07+08:00" level=error msg="[filebeat] Unable to start collector after 3 tries, giving up!" 
time="2018-02-16T16:01:07+08:00" level=info msg="[RequestConfiguration] No configuration found for configured tags!" 
time="2018-02-16T16:01:17+08:00" level=info msg="[RequestConfiguration] No configuration found for configured tags!" 
time="2018-02-16T16:01:27+08:00" level=info msg="[RequestConfiguration] No configuration found for configured tags!" 
time="2018-02-16T16:01:37+08:00" level=info msg="[RequestConfiguration] No configuration found for configured tags!" 
time="2018-02-16T16:01:47+08:00" level=info msg="[RequestConfiguration] No configuration found for configured tags!" 
time="2018-02-16T16:01:57+08:00" level=info msg="[RequestConfiguration] No configuration found for configured tags!" 
time="2018-02-16T16:02:07+08:00" level=info msg="[RequestConfiguration] No configuration found for configured tags!" 

The filebeat log output is:

# cat filebeat_stderr.log
filebeat2018/02/16 08:00:57.415414 beat.go:339: CRIT Exiting: error loading config file: stat /etc/graylog/collector-sidecar/generated/filebeat.yml: no such file or directory
Exiting: error loading config file: stat /etc/graylog/collector-sidecar/generated/filebeat.yml: no such file or directory
filebeat2018/02/16 08:01:00.417280 beat.go:339: CRIT Exiting: error loading config file: stat /etc/graylog/collector-sidecar/generated/filebeat.yml: no such file or directory
Exiting: error loading config file: stat /etc/graylog/collector-sidecar/generated/filebeat.yml: no such file or directory
filebeat2018/02/16 08:01:03.419152 beat.go:339: CRIT Exiting: error loading config file: stat /etc/graylog/collector-sidecar/generated/filebeat.yml: no such file or directory
Exiting: error loading config file: stat /etc/graylog/collector-sidecar/generated/filebeat.yml: no such file or directory
filebeat2018/02/16 08:01:06.418878 beat.go:339: CRIT Exiting: error loading config file: stat /etc/graylog/collector-sidecar/generated/filebeat.yml: no such file or directory

The REST interface can be accessed from the sidecar host:

# curl -I https://gl1.mylogs.com:9000/api/
HTTP/1.1 200 OK
X-Graylog-Node-ID: bcb2f984-5c5d-4e83-81cd-102c4a299b37
X-Runtime-Microseconds: 1033
Content-Length: 232
Content-Type: application/json
Date: Fri, 16 Feb 2018 08:34:26 GMT

As I pointed out, the chronyd log files are already listed on the collector status page, which should mean the sidecar can communicate with Graylog.
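One way to narrow this down is to ask the Graylog REST API directly which collectors have registered and which collector configurations (with their tags) exist. The endpoint paths below are an assumption based on the 2.x collector-sidecar plugin; adjust them if your version differs:

```shell
# List registered collectors and collector configurations via the REST API.
# Endpoint paths are an assumption for the Graylog 2.x collector plugin.
curl -k -u admin https://gl1.mylogs.com:9000/api/plugins/org.graylog.plugins.collector/collectors
curl -k -u admin https://gl1.mylogs.com:9000/api/plugins/org.graylog.plugins.collector/configurations
```

Compare the tags field of each returned configuration against the tags list in collector_sidecar.yml; they must overlap for the sidecar to receive anything.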

I checked the Graylog server log and found errors and warnings like:

2018-02-15T00:53:13.525+08:00 ERROR [AuditLogger] Unable to write audit log entry because there is no valid license.
2018-02-15T00:53:14.536+08:00 ERROR [MongoAuditLogPeriodical] Not running cleanup for auditlog entries in MongoDB because there is no valid license.
2018-02-15T00:53:18.095+08:00 ERROR [LookupDataAdapter] Couldn't start data adapter <abuse-ch-ransomware-domains/5a84697cfa192905586929bb/@2f58e98c>
2018-02-15T00:53:18.096+08:00 ERROR [LookupDataAdapter] Couldn't start data adapter <abuse-ch-ransomware-ip/5a84697cfa192905586929bc/@2eeda28>
2018-02-15T00:53:18.107+08:00 ERROR [LookupDataAdapter] Couldn't start data adapter <tor-exit-node/5a84697cfa192905586929bd/@7c15ac9c>
2018-02-15T00:53:18.111+08:00 ERROR [LookupDataAdapter] Couldn't start data adapter <spamhaus-drop/5a84697cfa192905586929b9/@66352769>
2018-02-15T00:53:26.183+08:00 ERROR [AuditLogger] Unable to write audit log entry because there is no valid license.
...
2018-02-15T13:45:54.371+08:00 ERROR [LookupDataAdapter] Couldn't start data adapter <spamhaus-drop/5a84697cfa192905586929b9/@1b7e5a00>
2018-02-15T13:45:54.427+08:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to https://es3.mylogs.com:9200)
2018-02-15T13:45:54.469+08:00 ERROR [MongoAuditLogPeriodical] Not running cleanup for auditlog entries in MongoDB because there is no valid license.
2018-02-15T13:46:12.260+08:00 ERROR [AuditLogger] Unable to write audit log entry because there is no valid license.
2018-02-15T13:46:24.480+08:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
2018-02-15T13:46:30.490+08:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (没有到主机的路由 (Host unreachable))
2018-02-15T13:46:51.533+08:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index

and warnings like:

2018-02-15T00:53:13.532+08:00 WARN  [DeadEventLoggingListener] Received unhandled event of type <org.graylog2.plugin.lifecycles.Lifecycle> from event bus <AsyncEventBus{graylog-eventbus}>
2018-02-15T00:53:17.698+08:00 WARN  [LookupTableService] Lookup table otx-api-domain is referencing a missing data adapter 5a84697cfa192905586929ba, check if it started properly.
2018-02-15T00:53:17.698+08:00 WARN  [LookupTableService] Lookup table spamhaus-drop is referencing a missing data adapter 5a84697cfa192905586929b9, check if it started properly.
2018-02-15T00:53:17.698+08:00 WARN  [LookupTableService] Lookup table whois is referencing a missing data adapter 5a84697cfa192905586929be, check if it started properly.
2018-02-15T00:53:17.699+08:00 WARN  [LookupTableService] Lookup table otx-api-ip is referencing a missing data adapter 5a84697cfa192905586929b8, check if it started properly.
2018-02-15T00:53:17.699+08:00 WARN  [LookupTableService] Lookup table abuse-ch-ransomware-domains is referencing a missing data adapter 5a84697cfa192905586929bb, check if it started properly.
2018-02-15T00:53:17.699+08:00 WARN  [LookupTableService] Lookup table abuse-ch-ransomware-ip is referencing a missing data adapter 5a84697cfa192905586929bc, check if it started properly.
2018-02-15T00:53:17.699+08:00 WARN  [LookupTableService] Lookup table tor-exit-node-list is referencing a missing data adapter 5a84697cfa192905586929bd, check if it started properly.
2018-02-15T00:53:18.102+08:00 WARN  [LookupTableService] Lookup table otx-api-domain is referencing a missing data adapter 5a84697cfa192905586929ba, check if it started properly.
2018-02-15T00:53:18.103+08:00 WARN  [LookupTableService] Lookup table tor-exit-node-list is referencing a missing data adapter 5a84697cfa192905586929bd, check if it started properly.
2018-02-15T00:53:18.104+08:00 WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-02-15T00:53:18.109+08:00 WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-02-15T00:59:34.184+08:00 WARN  [ProxiedResource] Unable to call https://gl2.mylogs.com:9000/api/system/metrics/multiple on node <1df94488-3bd2-4116-aeda-5850b63eaa61>
2018-02-15T00:59:36.183+08:00 WARN  [ProxiedResource] Unable to call https://gl2.mylogs.com:9000/api/system/metrics/multiple on node <1df94488-3bd2-4116-aeda-5850b63eaa61>
...
2018-02-15T01:19:24.430+08:00 WARN  [DeadEventLoggingListener] Received unhandled event of type <org.graylog2.plugin.lifecycles.Lifecycle> from event bus <AsyncEventBus{graylog-eventbus}>
2018-02-15T01:19:24.783+08:00 WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-02-15T01:19:24.819+08:00 WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-02-15T01:38:50.808+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Global Beats Inputs, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.
2018-02-15T02:50:51.711+08:00 WARN  [ProxiedResource] Unable to call https://gl3.mylogs.com:9000/api/system/metrics/multiple on node <719b1168-ee41-47b3-bce3-d26379b2cbb5>
2018-02-15T02:51:12.701+08:00 WARN  [ProxiedResource] Unable to call https://gl3.mylogs.com:9000/api/system/metrics/multiple on node <719b1168-ee41-47b3-bce3-d26379b2cbb5>
...
2018-02-15T03:09:48.672+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Global Beats Inputs, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.
2018-02-15T03:09:58.273+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Global Beats Inputs, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.
2018-02-15T03:17:31.543+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Global Beats Inputs, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.
2018-02-15T03:17:35.221+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Global Beats Inputs, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.
2018-02-15T03:20:36.640+08:00 WARN  [DeadEventLoggingListener] Received unhandled event of type <org.graylog2.plugin.lifecycles.Lifecycle> from event bus <AsyncEventBus{graylog-eventbus}>
2018-02-15T03:20:37.074+08:00 WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-02-15T03:20:37.108+08:00 WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-02-15T03:20:54.563+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Global Beats Inputs, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.
2018-02-15T03:48:20.801+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=globalbeatsinput, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.
2018-02-15T03:48:24.591+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=globalbeatsinput, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.

Is the filebeat failure related to those problems, and how can I correct them?

@cdeng Please properly format your configuration and text snippets for readability: http://commonmark.org/help/

Example:

``` 
Some text
More text
```

OK. thanks. I got it.

I have now removed the unlicensed graylog-enterprise-plugins; the remaining warnings and errors in server.log are as follows:

# cat server.log|grep WARN
2018-02-16T22:02:58.455+08:00 WARN  [DeadEventLoggingListener] Received unhandled event of type <org.graylog2.plugin.lifecycles.Lifecycle> from event bus <AsyncEventBus{graylog-eventbus}>
2018-02-16T22:02:58.753+08:00 WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-02-16T22:02:58.754+08:00 WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-02-16T22:03:14.703+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=test, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 212992.

# cat server.log|grep ERROR
2018-02-16T22:03:00.165+08:00 ERROR [LookupDataAdapter] Couldn't start data adapter <tor-exit-node/5a84697cfa192905586929bd/@6c7ab9b9>

I also tried sending GELF via HTTP, which failed as well.

The GELF HTTP input is configured as follows:

    bind_address: gl1.mylogs.com
    decompress_size_limit: 8388608
    enable_cors: true
    idle_writer_timeout: 60
    max_chunk_size: 65536
    override_source: <empty>
    port: 12201
    recv_buffer_size: 1048576
    tcp_keepalive: false
    tls_cert_file: <empty>
    tls_client_auth:  disabled
    tls_client_auth_cert_file: <empty>
    tls_enable: false
    tls_key_file: <empty>
    tls_key_password: ********

My HTTP POST of a GELF message is as follows:

# curl -X POST -H 'Content-Type: application/json' -d '{ "version": "1.1","host":"clr.mylogs.com","short_message":"a test message","full_message":"this is a test message","timestamp":1518794441.331,"level":6,"_app":"cmd", "_who":"charles"}' 'http://gl1.mylogs.com:12201/gelf'
curl: (7) Failed connect to gl1.mylogs.com:12201; 没有到主机的路由 (No route to host)

So I suspect that this time I may have configured options that are not fully supported. Here is my server.conf for Graylog (password-related info removed); can you help check it?

# general configuration
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = --secret-for-password here--
root_username = admin
root_password_sha2 = --password sha2 digest here--
root_email = "cdeng@live.cn"
root_timezone = Asia/Shanghai
plugin_dir = /usr/share/graylog-server/plugin

# REST interface
rest_listen_uri = https://gl1.mylogs.com:9000/api/
rest_transport_uri = https://gl1.mylogs.com:9000/api/
rest_enable_cors = true
rest_enable_gzip = true
rest_enable_tls = true
rest_tls_cert_file = /etc/graylog/server/rest-cert.pem
rest_tls_key_file = /etc/graylog/server/rest-key.pem
# key is not password protected
#rest_tls_key_password = ********
rest_max_header_size = 8192
rest_thread_pool_size = 16
trusted_proxies = 192.168.1.0/24,10.10.10.0/24

# Web Interface
web_enable = true
web_listen_uri = https://gl1.mylogs.com:9000/
web_endpoint_uri = https://gl1.mylogs.com:9000/api/
web_enable_cors = true
web_enable_gzip = true
web_enable_tls = true
web_tls_cert_file = /etc/graylog/server/web-cert.pem
web_tls_key_file = /etc/graylog/server/web-key.pem
# key is not password protected
#web_tls_key_password = ********
web_max_header_size = 8192
web_max_initial_line_length = 4096
web_thread_pool_size = 16

# elasticsearch cluster connections
elasticsearch_hosts = https://graylog:password_here@es1.mylogs.com:9200,\
                      https://graylog:password_here@es2.mylogs.com:9200,\
                      https://graylog:password_here@es3.mylogs.com:9200
elasticsearch_connect_timeout = 10s
elasticsearch_socket_timeout = 60s
# there appears to be a bug handling the value '-1s' for this option; setting it explicitly causes an error on start
#elasticsearch_idle_timeout = -1s
elasticsearch_max_total_connections = 20
elasticsearch_max_total_connections_per_route = 2
elasticsearch_max_retries = 2
elasticsearch_discovery_enabled = false
#elasticsearch_discovery_filter = rack:42
elasticsearch_discovery_frequency = 30s
elasticsearch_compression_enabled = false

# index
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_size_per_index = 1073741824
elasticsearch_max_time_per_index = 1d
elasticsearch_disable_version_check = false
no_retention = false
elasticsearch_max_number_of_indices = 30
retention_strategy = close
elasticsearch_shards = 3
elasticsearch_replicas = 2
elasticsearch_index_prefix = graylog
elasticsearch_template_name = graylog-internal
allow_leading_wildcard_searches = false
allow_highlighting = true
elasticsearch_analyzer = standard
elasticsearch_request_timeout = 1m
disable_index_optimization = false
index_optimization_max_num_segments = 1
elasticsearch_index_optimization_timeout = 1h
elasticsearch_index_optimization_jobs = 20
index_ranges_cleanup_interval = 1h
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
outputbuffer_processor_keep_alive_time = 5000
outputbuffer_processor_threads_core_pool_size = 3
outputbuffer_processor_threads_max_pool_size = 30
udp_recvbuffer_sizes = 1048576
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
message_journal_max_age = 12h
message_journal_max_size = 5gb
message_journal_flush_age = 1m
message_journal_flush_interval = 1000000
message_journal_segment_age = 1h
message_journal_segment_size = 100mb
async_eventbus_processors = 2
lb_recognition_period_seconds = 3
lb_throttle_threshold_percentage = 95
stream_processing_timeout = 2000
stream_processing_max_faults = 3
alert_check_interval = 60
output_module_timeout = 10000
stale_master_timeout = 2000
shutdown_timeout = 30000

# MongoDB Cluster Connections
mongodb_uri = mongodb://admin:password_here@mg1.mylogs.com,mg2.mylogs.com,mg3.mylogs.com/graylog?replicaSet=rs01&ssl=true
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5

# Drools Engine
#rules_file = /etc/graylog/server/rules.drl

# Email notifications
#transport_email_enabled = false
#transport_email_hostname = mail.mylogs.com
#transport_email_port = 587
#transport_email_use_auth = true
#transport_email_use_tls = true
#transport_email_use_ssl = true
#transport_email_auth_username = graylog@mylogs.com
#transport_email_auth_password = ********
#transport_email_subject_prefix = [graylog]
#transport_email_from_email = graylog@mylogs.com
#transport_email_web_interface_url = https://gl1.mylogs.com:9000/

# HTTPS
http_connect_timeout = 5s
http_read_timeout = 10s
http_write_timeout = 10s
#http_proxy_uri =

# Various
gc_warning_threshold = 1s
ldap_connection_timeout = 2000
disable_sigar = false
dashboard_widget_default_cache_time = 10s
content_packs_loader_enabled = true
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32

I got these logs from the Graylog server.log:

2018-02-17T00:55:20.337+08:00 INFO  [ServerBootstrap] Services started, startup times in ms: {BufferSynchronizerService [RUNNING]=2, OutputSetupService [RUNNING]=2, ConfigurationEtagService [RUNNING]=3, JournalReader [RUNNING]=3, InputSetupService [RUNNING]=3, KafkaJournal [RUNNING]=16, StreamCacheService [RUNNING]=38, PeriodicalsService [RUNNING]=372, LookupTableService [RUNNING]=2904, JerseyService [RUNNING]=15607}
2018-02-17T00:55:20.338+08:00 INFO  [InputSetupService] Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
2018-02-17T00:55:20.354+08:00 INFO  [ServerBootstrap] Graylog server up and running.
2018-02-17T00:55:20.373+08:00 INFO  [InputStateListener] Input [GELF HTTP/5a86fabbfa19290b5a365b09] is now STARTING
2018-02-17T00:55:20.415+08:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input GELFHttpInput{title=myGelfHttps, type=org.graylog2.inputs.gelf.http.GELFHttpInput, nodeId=bcb2f984-5c5d-4e83-81cd-102c4a299b37} should be 1048576 but is 212992.
2018-02-17T00:55:20.419+08:00 INFO  [InputStateListener] Input [GELF HTTP/5a86fabbfa19290b5a365b09] is now RUNNING
2018-02-17T00:57:14.829+08:00 WARN  [ProxiedResource] Unable to call https://gl2.mylogs.com:9000/api/system/metrics/multiple on node <1df94488-3bd2-4116-aeda-5850b63eaa61>
java.net.ConnectException: Failed to connect to gl2.mylogs.com/10.10.10.32:9000
        at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:240) ~[graylog.jar:?]

It reports that a REST API call failed, but I can call that API successfully with curl on the same host:

# curl -u admin -I https://gl2.mylogs.com:9000/api/system/inputstates
Enter host password for user 'admin':
HTTP/1.1 200 OK
X-Graylog-Node-ID: 1df94488-3bd2-4116-aeda-5850b63eaa61
X-Runtime-Microseconds: 1261
Content-Length: 13
Content-Type: application/json
Date: Fri, 16 Feb 2018 17:40:22 GMT

Could this be a bug?

It seems that with SELINUX=enforcing we have to manually open the input ports on the firewall. After manually opening the port on the firewall, sending GELF via HTTP/S works.
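For anyone hitting the same wall: on a firewalld-based system (which the RPM install above suggests), the input port can be opened roughly like this. The zone name "public" is an assumption; adjust it to your setup.

```shell
# Open the GELF HTTP input port (12201/tcp) in the permanent firewalld
# configuration, reload, then verify (zone "public" is an assumption).
firewall-cmd --zone=public --add-port=12201/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-ports
```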

Is there any listening port used by the sidecar that needs to be opened on the firewall?

The collector-sidecar does not listen on any port; it just connects to Graylog's REST API to fetch its configuration.

Your sidecar log file states your problem in plain words:

“[RequestConfiguration] No configuration found for configured tags!”

Create a configuration in Graylog and assign it a tag that is also configured on the host where the collector-sidecar runs, and it will work.
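To make the tag matching concrete: the sidecar only receives a rendered configuration when at least one of its tags overlaps the tags assigned to a collector configuration in Graylog. A small illustrative sketch (tag values taken from this thread; this is not the sidecar's actual code):

```shell
#!/bin/sh
# Illustrative only: the sidecar gets a configuration when its tag list
# overlaps the tags assigned to a collector configuration in Graylog.
sidecar_tags="linux ntp chronyd"   # from collector_sidecar.yml
configuration_tags="linux"         # assigned to the configuration in Graylog
match=""
for t in $sidecar_tags; do
  for c in $configuration_tags; do
    [ "$t" = "$c" ] && match="$t"
  done
done
if [ -n "$match" ]; then
  echo "configuration delivered (matched tag: $match)"
else
  echo "No configuration found for configured tags!"
fi
```

If the configuration in Graylog has no tag in common with the sidecar's tags list, the sidecar logs exactly the "No configuration found for configured tags!" line seen above.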

I really have no idea what else I can do. Let me post my whole configuration step by step:

  1. First, I create a global Beats input as follows:

(continued)

After saving, the global Beats input runs on the Graylog cluster:

Then I create the collector configuration. First I create the tag "linux" for the configuration as follows (the top half is what I created, the bottom half is the system's response):

Similarly, I created the tags "ntp" and "chronyd" for this collector configuration.

Second, I create the output configuration for the collector configuration:

Third, I create the input configuration for the collector configuration: