Filebeat configuration

Graylog server logs:

"Incorrect HTTP method for uri [/graylog_*/_aliases] and method [GET], allowed: [PUT]"
        at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:95) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:57) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:62) ~[graylog.jar:?]
        at org.graylog2.indexer.indices.Indices.getIndexNamesAndAliases(Indices.java:308) ~[graylog.jar:?]
        at org.graylog2.indexer.MongoIndexSet.getNewestIndexNumber(MongoIndexSet.java:151) ~[graylog.jar:?]
        at org.graylog2.indexer.MongoIndexSet.getNewestIndex(MongoIndexSet.java:146) ~[graylog.jar:?]
        at org.graylog2.indexer.MongoIndexSet.setUp(MongoIndexSet.java:252) ~[graylog.jar:?]
        at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:138) ~[graylog.jar:?]
        at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
        at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_171]
        at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
        at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_171]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_171]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_171]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
2018-06-18T09:37:07.479+02:00 INFO  [MongoIndexSet] Did not find a deflector alias. Setting one up now.
2018-06-18T09:37:07.479+02:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
org.graylog2.indexer.ElasticsearchException: Couldn't collect aliases for index pattern graylog_*

"Incorrect HTTP method for uri [/graylog_*/_aliases] and method [GET], allowed: [PUT]"
        at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:95) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:57) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:62) ~[graylog.jar:?]
        at org.graylog2.indexer.indices.Indices.getIndexNamesAndAliases(Indices.java:308) ~[graylog.jar:?]
        at org.graylog2.indexer.MongoIndexSet.getNewestIndexNumber(MongoIndexSet.java:151) ~[graylog.jar:?]
        at org.graylog2.indexer.MongoIndexSet.getNewestIndex(MongoIndexSet.java:146) ~[graylog.jar:?]
        at org.graylog2.indexer.MongoIndexSet.setUp(MongoIndexSet.java:252) ~[graylog.jar:?]
        at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:138) ~[graylog.jar:?]
        at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
        at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_171]
        at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
        at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_171]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_171]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_171]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]

Elasticsearch logs:

[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.16][2]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][3]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.17][3]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][4]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.16][0]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.17][1]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.17][0]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.17][4]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][1]]
[2018-06-18T09:38:19,219][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][0]]
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][1]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][2]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][3]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][0]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.16][2]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.16][4]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.17][3]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.17][2]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.17][1]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.17][0]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][4]]: segment writing can't keep up
[2018-06-18T09:38:24,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.16][1]]: segment writing can't keep up
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][2]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.16][1]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.16][4]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.16][2]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][3]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.17][3]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][4]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.17][1]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.17][0]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][1]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.18][0]]
[2018-06-18T09:38:25,477][INFO ][o.e.i.IndexingMemoryController] [network-2] stop throttling indexing for shard [[logstash-2018.06.17][2]]
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][4]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.17][4]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][2]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][1]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][0]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.18][3]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.16][0]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.16][2]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.16][1]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.16][4]]: segment writing can't keep up
[2018-06-18T09:38:29,219][INFO ][o.e.i.IndexingMemoryController] [network-2] now throttling indexing for shard [[logstash-2018.06.17][2]]: segment writing can't keep up

The configuration of Filebeat (/etc/filebeat/filebeat.yml) is still invalid.
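For comparison, a minimal filebeat.yml that ships logs to a Graylog Beats input could look like this (sketch only; the paths, host, and port are placeholders for your environment):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/network/*.log

output.logstash:
  # Graylog Beats inputs speak the Logstash/Lumberjack protocol;
  # replace host and port with your Graylog node and input port.
  hosts: ["graylog.example.org:5044"]

You can validate the file with filebeat test config -c /etc/filebeat/filebeat.yml before restarting the service.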

Your Elasticsearch cluster can’t cope with the load you’re throwing at it.
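If you want to confirm that, the node overview shows heap and CPU pressure at a glance (assuming Elasticsearch answers on localhost:9200):

curl -s 'http://127.0.0.1:9200/_cat/nodes?v'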

Also, which exact version of Elasticsearch are you using? Graylog 2.x doesn’t work with Elasticsearch 6.x.
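You can check the exact version with a plain HTTP request against the cluster root (adjust the address if Elasticsearch isn't on localhost):

curl -XGET 'http://127.0.0.1:9200/'
# the JSON response contains a "version": { "number": ... } block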

Which version of Graylog works with Elasticsearch 6.x?

Filebeat logs:

2018-06-18T10:03:56.844+0200    INFO    instance/beat.go:468    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/lo$
2018-06-18T10:03:56.848+0200    INFO    instance/beat.go:475    Beat UUID: ce015e51-2788-4164-8952-9d2a9b4be894
2018-06-18T10:03:56.848+0200    INFO    instance/beat.go:213    Setup Beat: filebeat; Version: 6.2.4
2018-06-18T10:03:56.849+0200    INFO    pipeline/module.go:76   Beat name: frghcslnetv04
2018-06-18T10:03:56.849+0200    INFO    instance/beat.go:301    filebeat start running.
2018-06-18T10:03:56.849+0200    INFO    registrar/registrar.go:110      Loading registrar data from /var/lib/filebeat/registry
2018-06-18T10:03:56.849+0200    INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-06-18T10:03:56.850+0200    INFO    registrar/registrar.go:121      States Loaded from registrar: 13
2018-06-18T10:03:56.850+0200    WARN    beater/filebeat.go:261  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output$
2018-06-18T10:03:56.850+0200    INFO    crawler/crawler.go:48   Loading Prospectors: 3
2018-06-18T10:03:56.856+0200    INFO    log/prospector.go:111   Configured paths: [/var/log/network/frghcfwint01m-fwcommon-2.log /var/log/network/frghcfwint01m-fwcommon.log]
2018-06-18T10:03:56.857+0200    INFO    log/prospector.go:111   Configured paths: [/var/log/network/FWEquin_WAN_R03-01M.log]
2018-06-18T10:03:56.876+0200    INFO    log/harvester.go:216    Harvester started for file: /var/log/network/frghcfwint01m-fwcommon-2.log
2018-06-18T10:03:56.884+0200    INFO    log/harvester.go:216    Harvester started for file: /var/log/network/frghcfwint01m-fwcommon.log
2018-06-18T10:03:56.902+0200    INFO    log/prospector.go:111   Configured paths: [/var/log/network/frghcfwdmz01m.log]
2018-06-18T10:03:56.904+0200    INFO    crawler/crawler.go:82   Loading and starting Prospectors completed. Enabled prospectors: 3
2018-06-18T10:03:56.904+0200    INFO    cfgfile/reload.go:127   Config reloader started
2018-06-18T10:03:56.904+0200    INFO    cfgfile/reload.go:219   Loading of config files completed.
2018-06-18T10:03:56.905+0200    INFO    log/harvester.go:216    Harvester started for file: /var/log/network/frghcfwdmz01m.log
2018-06-18T10:04:26.852+0200    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":201$

Currently none.

That is covered in the documentation:

http://docs.graylog.org/en/2.4/pages/configuration/elasticsearch.html

I installed Elasticsearch 5.3, but I still get errors. These are the Elasticsearch logs:

[2018-06-18T10:38:56,789][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/data/elasticsearch/elasticsearch]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:127) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:58) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.3.0.jar:5.3.0]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/data/elasticsearch/elasticsearch]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:260) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.node.Node.<init>(Node.java:258) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.node.Node.<init>(Node.java:238) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.3.0.jar:5.3.0]
        ... 6 more
Caused by: java.io.IOException: failed to obtain lock on /data/elasticsearch/nodes/0
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:239) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.node.Node.<init>(Node.java:258) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.node.Node.<init>(Node.java:238) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.3.0.jar:5.3.0]
        ... 6 more
Caused by: java.nio.file.AccessDeniedException: /data/elasticsearch/nodes/0/node.lock
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
        at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) ~[?:?]
        at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[?:1.8.0_171]
        at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[?:1.8.0_171]
        at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:113) ~[lucene-core-6.4.1.jar:6.4.1 72f75b2503fa0aa4f0aff76d439874feb923bb0e - jpountz - 2017-02-01 14:43:32]
        at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-6.4.1.jar:6.4.1 72f75b2503fa0aa4f0aff76d439874feb923bb0e - jpountz - 2017-02-01 14:43:32]
        at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-6.4.1.jar:6.4.1 72f75b2503fa0aa4f0aff76d439874feb923bb0e - jpountz - 2017-02-01 14:43:32]
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:226) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.node.Node.<init>(Node.java:258) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.node.Node.<init>(Node.java:238) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.3.0.jar:5.3.0]
        ... 6 more

Why not the latest stable version of Elasticsearch 5.x? (See "Elasticsearch 5.6.10 released" on the Elastic blog.)

Because Graylog couldn’t index messages into Elasticsearch, the messages were written into the on-disk journal of Graylog. Either delete the journal while Graylog is stopped, or wait until the messages have been indexed.
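If you go the deletion route, something like this should work on a package-based installation (the journal path is the package default; check message_journal_dir in your Graylog server.conf first):

sudo systemctl stop graylog-server
sudo rm -rf /var/lib/graylog-server/journal
sudo systemctl start graylog-server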

It looks like Elasticsearch isn’t able to write into the configured directories.
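A quick check is to look at the ownership of the data directory and test a write as the service user (assuming the package runs Elasticsearch as the elasticsearch user):

ls -ld /data/elasticsearch
sudo -u elasticsearch test -w /data/elasticsearch && echo writable || echo 'not writable'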

Maybe you should try to install the Graylog virtual appliance: http://docs.graylog.org/en/2.4/pages/installation/virtual_machine_appliances.html
That way, you don’t have to install each component separately.

When I try to install that version, the download page redirects me to the latest one, 6.3.

https://www.elastic.co/de/downloads/past-releases/elasticsearch-5-6-10
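If you have the Elastic 5.x package repository configured, you can also pin the exact version instead of downloading it manually (sketch; assumes the official repository is set up):

# Debian/Ubuntu
sudo apt-get install elasticsearch=5.6.10
# RHEL/CentOS
sudo yum install elasticsearch-5.6.10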

Can you tell me how to uninstall Graylog? I want to install Graylog from the beginning!

Please read the step-by-step installation guide:

I did this, and this is what I got:

2018-06-18T11:56:16.523+02:00 WARN  [KafkaJournal] Journal utilization (104.0%) has gone over 95%.
2018-06-18T11:56:16.525+02:00 INFO  [KafkaJournal] Journal usage is 104.00% (threshold 100%), changing load balancer status from ALIVE to THROTTLED
2018-06-18T11:56:16.526+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-06-18T11:56:16.526+02:00 INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2018-06-18T11:56:21.782+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #14).
2018-06-18T11:56:22.047+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #14).
2018-06-18T11:56:22.176+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #14).
2018-06-18T11:56:36.893+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-06-18T11:56:36.936+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-06-18T11:56:38.285+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #15).
2018-06-18T11:56:38.458+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #15).
2018-06-18T11:56:38.585+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #15).
2018-06-18T11:56:40.229+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-06-18T11:56:40.268+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-06-18T11:56:46.509+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-06-18T11:56:46.509+02:00 INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2018-06-18T11:56:46.713+02:00 WARN  [V20161130141500_DefaultStreamRecalcIndexRanges] Interrupted or timed out waiting for Elasticsearch cluster, checking again.
2018-06-18T11:57:08.327+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #16).
2018-06-18T11:57:08.478+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #16).
2018-06-18T11:57:08.606+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #16).
2018-06-18T11:57:16.509+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-06-18T11:57:16.510+02:00 INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2018-06-18T11:57:16.514+02:00 WARN  [KafkaJournal] Journal utilization (104.0%) has gone over 95%.
2018-06-18T11:57:38.346+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #17).
2018-06-18T11:57:38.499+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #17).
2018-06-18T11:57:38.627+02:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #17).
2018-06-18T11:57:46.508+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-06-18T11:57:46.509+02:00 INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2018-06-18T11:57:46.715+02:00 WARN  [V20161130141500_DefaultStreamRecalcIndexRanges] Interrupted or timed out waiting for Elasticsearch cluster, checking again.

The step-by-step guides can be copy & pasted into a running Debian, Ubuntu, or RHEL/CentOS Linux and will work. We’ve tested them multiple times.

If they don’t work for you, you’ve probably deviated from them. In that case, you’ll have to tell us the exact commands you’ve entered and their complete output, as well as the complete logs of all installed and configured components (Graylog, Elasticsearch, MongoDB) if you want us to be able to help you.
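On a default package installation, those logs usually live here (adjust if you changed the paths):

sudo tail -n 200 /var/log/graylog-server/server.log
sudo tail -n 200 /var/log/elasticsearch/*.log
sudo tail -n 200 /var/log/mongodb/mongod.log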

In that specific case, Graylog is unable to communicate with Elasticsearch on http://127.0.0.1:9200/.
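You can verify that from the Graylog host, using the same address Graylog has in elasticsearch_hosts:

curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'

If that request fails, Elasticsearch isn’t running or isn’t listening on that address, and Graylog can’t work.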

Yes, I did all the steps, and I know that my problem is with Elasticsearch because I keep getting the same errors:

The version of Elasticsearch is 5.6.10.

elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["172.16.250.30"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

And the logs are:

at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_171]
        at java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_171]
        at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:492) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Security.addPath(Security.java:448) ~[elasticsearch-5.6.10.jar:5.6.10]
        ... 12 more
[2018-06-18T11:53:48,018][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Unable to access 'path.data' (/data/elasticsearch)
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.6.10.jar:5.6.10]
Caused by: java.lang.IllegalStateException: Unable to access 'path.data' (/data/elasticsearch)
        at org.elasticsearch.bootstrap.Security.addPath(Security.java:450) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:291) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:246) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Security.configure(Security.java:119) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:228) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) ~[elasticsearch-5.6.10.jar:5.6.10]
        ... 6 more
Caused by: java.nio.file.AccessDeniedException: /data/elasticsearch
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:1.8.0_171]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_171]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:1.8.0_171]
        at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384) ~[?:1.8.0_171]
        at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_171]
        at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_171]
        at java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_171]
        at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:492) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Security.addPath(Security.java:448) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:291) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:246) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Security.configure(Security.java:119) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:228) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) ~[elasticsearch-5.6.10.jar:5.6.10]
        ... 6 more

On Graylog:

{"type":"unavailable_shards_exception","reason":"[graylog_deflector][1] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[graylog_deflector][1]] containing [92] requests]"}

Elasticsearch is still unable to access the data path you’ve configured.
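The usual fix is to hand the directory to the Elasticsearch service user and restart (assuming the service runs as the elasticsearch user, which is the package default):

sudo systemctl stop elasticsearch
sudo mkdir -p /data/elasticsearch
sudo chown -R elasticsearch:elasticsearch /data/elasticsearch
sudo systemctl start elasticsearch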

I know!!!

How can I resolve this? I have been stuck on this for three days :confused: