Graylog cluster installation with settings using nginx

Does it work if you configure Filebeat to directly send messages to Graylog’s Beats input instead of nginx?

No, it does not work; I am getting the same results.

What’s the result exactly? Also try starting Filebeat in debug mode: https://www.elastic.co/guide/en/beats/filebeat/5.5/enable-filebeat-debugging.html

/usr/bin/filebeat -c /etc/filebeat/filebeat.yml -d "*"
Exiting: error initializing publisher: Error loading template /usr/bin/filebeat.template-es6x.json: open /usr/bin/filebeat.template-es6x.json: no such file or directory

You have to use the configuration file that has been generated by the Graylog Collector Sidecar…

But when I start it as a service, there is no such error. I only used the Sidecar because I thought it would be easier to configure Filebeat with it. Also, the Sidecar adds a “logstash” output line which my own configuration does not have.

“logstash” is the name of the output in Filebeat which uses the Beats protocol (which in turn can be processed by a Beats input in Graylog): https://www.elastic.co/guide/en/beats/filebeat/5.5/logstash-output.html
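
A minimal “logstash” output section would look roughly like this (a sketch; the host and port are placeholders, so point them at the address of your Graylog Beats input):

output.logstash:
  # send events to the Graylog Beats input via the Beats protocol
  hosts: ["graylog.example.com:5044"]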

So should I keep such a line in my filebeat.yml? Mine looks like this:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["89.22.104.82:8000"]

No, that’s wrong. You have to use a “logstash” output. Take a look at the configuration the Graylog Collector Sidecar generated for Filebeat.

Don’t just arbitrarily change the configuration of Filebeat.
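
For reference, a Sidecar-generated filebeat.yml usually contains something along these lines (a sketch reconstructed from the debug output below; the prospector path is the one from this thread, while the Beats input address and port are placeholders):

filebeat.prospectors:
- input_type: log
  paths:
    - /home/cb/Fake-Apache-Log-Generator-master/*.log
output.logstash:
  # hostname and port of the Graylog Beats input (placeholder)
  hosts: ["89.22.104.82:5044"]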

/usr/bin/filebeat -c /etc/graylog/collector-sidecar/generated/filebeat.yml -e -d "*"


2017/07/18 09:56:16.460669 beat.go:285: INFO Home path: [/usr/bin] Config path: [/usr/bin] Data path: [/var/cache/graylog/collector-sidecar/filebeat/data] Logs path: [/var/log/graylog/collector-sidecar]
2017/07/18 09:56:16.460690 beat.go:186: INFO Setup Beat: filebeat; Version: 5.4.2
2017/07/18 09:56:16.460698 processor.go:44: DBG  Processors:
2017/07/18 09:56:16.460706 beat.go:192: DBG  Initializing output plugins
2017/07/18 09:56:16.460715 metrics.go:23: INFO Metrics logging every 30s
2017/07/18 09:56:16.460758 logstash.go:90: INFO Max Retries set to: 3
2017/07/18 09:56:16.460801 outputs.go:108: INFO Activated logstash as output plugin.
2017/07/18 09:56:16.460809 publish.go:238: DBG  Create output worker
2017/07/18 09:56:16.460868 publish.go:280: DBG  No output is defined to store the topology. The server fields might not be filled.
2017/07/18 09:56:16.460891 publish.go:295: INFO Publisher name: dcls.dogado.net
2017/07/18 09:56:16.461315 async.go:63: INFO Flush Interval set to: 1s
2017/07/18 09:56:16.461327 async.go:64: INFO Max Bulk Size set to: 2048
2017/07/18 09:56:16.461336 async.go:72: DBG  create bulk processing worker (interval=1s, bulk size=2048)
2017/07/18 09:56:16.461434 modules.go:93: ERR Not loading modules. Module directory not found: /usr/bin/module
2017/07/18 09:56:16.461479 beat.go:221: INFO filebeat start running.
2017/07/18 09:56:16.461500 registrar.go:85: INFO Registry file set to: /var/cache/graylog/collector-sidecar/filebeat/data/registry
2017/07/18 09:56:16.461535 registrar.go:106: INFO Loading registrar data from /var/cache/graylog/collector-sidecar/filebeat/data/registry
2017/07/18 09:56:16.461561 registrar.go:123: INFO States Loaded from registrar: 0
2017/07/18 09:56:16.461590 crawler.go:38: INFO Loading Prospectors: 1
2017/07/18 09:56:16.461666 registrar.go:236: INFO Starting Registrar
2017/07/18 09:56:16.461690 prospector.go:83: DBG  File Configs: [/home/cb/Fake-Apache-Log-Generator-master/*.log']
2017/07/18 09:56:16.461685 sync.go:41: INFO Start sending events to output
2017/07/18 09:56:16.461702 prospector_log.go:44: DBG  exclude_files: []
2017/07/18 09:56:16.461709 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2017/07/18 09:56:16.461766 prospector.go:124: INFO Starting prospector of type: log; id: 3728556300174877083
2017/07/18 09:56:16.461776 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/07/18 09:56:16.461788 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017/07/18 09:56:16.461819 prospector_log.go:70: DBG  Start next scan
2017/07/18 09:56:16.461919 prospector_log.go:91: DBG  Prospector states cleaned up. Before: 0, After: 0
2017/07/18 09:56:21.461927 spooler.go:89: DBG  Flushing spooler because of timeout. Events flushed: 0
2017/07/18 09:56:26.462055 spooler.go:89: DBG  Flushing spooler because of timeout. Events flushed: 0
2017/07/18 09:56:26.462066 prospector.go:183: DBG  Run prospector
2017/07/18 09:56:26.462096 prospector_log.go:70: DBG  Start next scan
2017/07/18 09:56:26.462204 prospector_log.go:91: DBG  Prospector states cleaned up. Before: 0, After: 0
2017/07/18 09:56:31.462190 spooler.go:89: DBG  Flushing spooler because of timeout. Events flushed: 0
2017/07/18 09:56:36.462283 prospector.go:183: DBG  Run prospector
2017/07/18 09:56:36.462307 prospector_log.go:70: DBG  Start next scan
2017/07/18 09:56:36.462301 spooler.go:89: DBG  Flushing spooler because of timeout. Events flushed: 0
2017/07/18 09:56:36.462384 prospector_log.go:91: DBG  Prospector states cleaned up. Before: 0, After: 0

There are no new events (the registrar loaded 0 states and the spooler keeps flushing 0 events), and thus Filebeat doesn’t send anything to Graylog.
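
A quick way to check the pipeline end-to-end (a sketch, assuming the prospector path shown in the debug output above) is to append a line to a matching file and watch whether the spooler flushes it:

echo "test event $(date)" >> /home/cb/Fake-Apache-Log-Generator-master/test.log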

Just take a step back, try to make a single Graylog node work with Filebeat and then extend it node-by-node.

This back-and-forth with snippets and incomplete information is taking too long for me, so I have to refer you to Graylog Enterprise Support, or you’ll have to wait for somebody else to help you in this discussion forum.

Thank you very much for the help.
Just one last thing:
I think it might be more useful if you could develop the documentation for somewhat inexperienced admins like me.
Thanks

@CanBuyukburc

Since you can see the gaps in the documentation from your point of view, a pull request with changes that clarify the documentation is always welcome.
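
The usual GitHub flow for that is roughly (a sketch; the repository name Graylog2/documentation is an assumption, so please check the footer of docs.graylog.org for the actual repository):

# fork the repository on GitHub first, then:
git clone https://github.com/<your-user>/documentation.git
cd documentation
git checkout -b clarify-sidecar-setup
# edit the relevant pages, then commit and push:
git commit -am "Clarify Collector Sidecar / Filebeat setup"
git push origin clarify-sidecar-setup
# finally, open a pull request against Graylog2/documentation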

with kind regards
Jan
