Could not connect to http://127.0.0.1:9200

I have been running into an issue getting the Graylog server to load.

The Graylog service output from systemd:

ubuntu@ip-172-31-0-172:~$ sudo systemctl status graylog-server
● graylog-server.service - Graylog server
   Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-01-08 19:34:22 UTC; 6min ago
     Docs: http://docs.graylog.org/
 Main PID: 5809 (graylog-server)
    Tasks: 120
   Memory: 592.1M
      CPU: 33.626s
   CGroup: /system.slice/graylog-server.service
           ├─5809 /bin/sh /usr/share/graylog-server/bin/graylog-server
           └─5812 /usr/bin/java -jar -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb /usr/share/graylog-server/graylog.jar server -f /etc/graylog/server/server.conf -np

Jan 08 19:38:59 ip-172-31-0-172 graylog-server[5809]: 19:38:59.045 [scheduled-daemon-7] INFO  org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
Jan 08 19:39:29 ip-172-31-0-172 graylog-server[5809]: 19:39:29.045 [scheduled-daemon-29] ERROR org.graylog2.indexer.cluster.Cluster - Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
Jan 08 19:39:29 ip-172-31-0-172 graylog-server[5809]: 19:39:29.045 [scheduled-27] INFO  org.graylog2.periodical.IndexRetentionThread - Elasticsearch cluster not available, skipping index retention checks.
Jan 08 19:39:29 ip-172-31-0-172 graylog-server[5809]: 19:39:29.045 [scheduled-daemon-29] INFO  org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
Jan 08 19:39:29 ip-172-31-0-172 graylog-server[5809]: 19:39:29.291 [periodical-org.graylog2.periodical.ConfigurationManagementPeriodical-0] WARN  org.graylog2.migrations.V20161130141500_DefaultStreamRecalcIndexRanges - Interrupted or timed out waiting for Elasticsearch cluster, checking again.
Jan 08 19:39:59 ip-172-31-0-172 graylog-server[5809]: 19:39:59.045 [scheduled-daemon-1] ERROR org.graylog2.indexer.cluster.Cluster - Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
Jan 08 19:39:59 ip-172-31-0-172 graylog-server[5809]: 19:39:59.045 [scheduled-daemon-1] INFO  org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
Jan 08 19:40:29 ip-172-31-0-172 graylog-server[5809]: 19:40:29.045 [scheduled-daemon-21] ERROR org.graylog2.indexer.cluster.Cluster - Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
Jan 08 19:40:29 ip-172-31-0-172 graylog-server[5809]: 19:40:29.045 [scheduled-daemon-21] INFO  org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
Jan 08 19:40:29 ip-172-31-0-172 graylog-server[5809]: 19:40:29.292 [periodical-org.graylog2.periodical.ConfigurationManagementPeriodical-0] WARN  org.graylog2.migrations.V20161130141500_DefaultStreamRecalcIndexRanges - Interrupted or timed out waiting for Elasticsearch cluster, checking again.

My Graylog config file:

is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = password_secret
root_username = admin
root_password_sha2 = sha2zam
root_email = "patrick@my_email.org"
root_timezone = America/Detroit
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://0.0.0.0:12900/api/
rest_enable_cors = true
rest_enable_gzip = true
rest_enable_tls = false
web_enable = true
web_listen_uri = http://0.0.0.0:9000
web_endpoint_uri = http://192.168.1.1:12900/api/
web_enable_cors = true
web_enable_gzip = true
web_enable_tls = false
elasticsearch_hosts = http://127.0.0.1:9200
elasticsearch_connect_timeout = 10s
elasticsearch_socket_timeout = 60s
elasticsearch_max_total_connections = 20
elasticsearch_max_total_connections_per_route = 2
elasticsearch_max_retries = 2
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
elasticsearch_discovery_zen_ping_unicast_hosts = 127.0.0.1
elasticsearch_transport_tcp_port = 9300
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
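
For what it's worth, probing the elasticsearch_hosts address directly from this host should show whether Elasticsearch is reachable there; if it is, both of these return a small JSON document, and a "connection refused" matches what the Graylog log reports (a minimal check using curl):

curl -s http://127.0.0.1:9200
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'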

My elasticsearch.yml file:

cluster.name: graylog
node.name: graylog-test
path.conf: "/etc/elasticsearch"
path.data: "/var/lib/elasticsearch"
path.logs: "/var/log/elasticsearch"
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300

My collector-sidecar.yml:

server_url: http://0.0.0.0:9000/api/
update_interval: 10
tls_skip_verify: false
send_status: false
list_log_files: 
node_id: 
collector_id: file:/etc/graylog/collector-sidecar/collector-id
cache_path: /var/cache/graylog/collector-sidecar
log_path: /var/log/graylog/collector-sidecar
log_rotation_time: 86400
log_max_age: 604800
tags: linux
backends:
    - name: nxlog
      enabled: false
      binary_path: /usr/bin/nxlog
      configuration_path: /etc/graylog/collector-sidecar/generated/nxlog.conf
    - name: filebeat
      enabled: true
      binary_path: /usr/bin/filebeat
      configuration_path: /etc/graylog/collector-sidecar/generated/filebeat.yml

I am installing everything on Ubuntu 16.04. My binary versions are:

  • Graylog: 2.4.0-9
  • Elasticsearch: 5.6.2
  • Collector-sidecar: 0.1.3

Any help would be greatly appreciated. I have not found a solution, and have been struggling with this for a lot longer than I would like.

Are Graylog and Elasticsearch running on the same machine?
Has Elasticsearch successfully been started?
What’s in the logs of your Elasticsearch node?

Everything is running on the same machine, and Elasticsearch is up.

● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-01-08 14:22:39 UTC; 6h ago
     Docs: http://www.elastic.co
 Main PID: 28398 (java)
    Tasks: 33
   Memory: 2.1G
      CPU: 47.411s
   CGroup: /system.slice/elasticsearch.service
           └─28398 /usr/bin/java -Xms1975m -Xmx1975m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUn

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

Something in the logs looks off: the log file is named escluster.log, which is not the cluster name in my config, but it may be a clue:

ubuntu@ip-172-31-0-172:~$ cat /var/log/elasticsearch/escluster.log 
[2018-01-08T14:22:41,548][INFO ][o.e.n.Node               ] [graylog-test] initializing ...
[2018-01-08T14:22:41,651][INFO ][o.e.e.NodeEnvironment    ] [graylog-test] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [5.2gb], net total_space [7.6gb], spins? [no], types [ext4]
[2018-01-08T14:22:41,651][INFO ][o.e.e.NodeEnvironment    ] [graylog-test] heap size [1.9gb], compressed ordinary object pointers [true]
[2018-01-08T14:22:41,653][INFO ][o.e.n.Node               ] [graylog-test] node name [graylog-test], node ID [hmpbVwTnRb6o3KrvAP2big]
[2018-01-08T14:22:41,653][INFO ][o.e.n.Node               ] [graylog-test] version[5.6.2], pid[28398], build[57e20f3/2017-09-23T13:16:45.703Z], OS[Linux/4.4.0-1041-aws/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
[2018-01-08T14:22:41,653][INFO ][o.e.n.Node               ] [graylog-test] JVM arguments [-Xms1975m, -Xmx1975m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [aggs-matrix-stats]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [ingest-common]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [lang-expression]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [lang-groovy]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [lang-mustache]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [lang-painless]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [parent-join]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [percolator]
[2018-01-08T14:22:42,731][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [reindex]
[2018-01-08T14:22:42,732][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [transport-netty3]
[2018-01-08T14:22:42,732][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [transport-netty4]
[2018-01-08T14:22:42,732][INFO ][o.e.p.PluginsService     ] [graylog-test] no plugins loaded
[2018-01-08T14:22:45,644][INFO ][o.e.d.DiscoveryModule    ] [graylog-test] using discovery type [zen]
[2018-01-08T14:22:46,282][INFO ][o.e.n.Node               ] [graylog-test] initialized
[2018-01-08T14:22:46,282][INFO ][o.e.n.Node               ] [graylog-test] starting ...
[2018-01-08T14:22:46,918][INFO ][o.e.t.TransportService   ] [graylog-test] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2018-01-08T14:22:50,115][INFO ][o.e.c.s.ClusterService   ] [graylog-test] new_master {graylog-test}{hmpbVwTnRb6o3KrvAP2big}{YqHCphypRN6XBni38fU5xQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-01-08T14:22:50,163][INFO ][o.e.g.GatewayService     ] [graylog-test] recovered [0] indices into cluster_state
[2018-01-08T14:22:50,173][INFO ][o.e.h.n.Netty4HttpServerTransport] [graylog-test] publish_address {127.0.0.1:9201}, bound_addresses {[::1]:9201}, {127.0.0.1:9201}
[2018-01-08T14:22:50,173][INFO ][o.e.n.Node               ] [graylog-test] started
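
Two things stand out to me in that log: the HTTP listener ends up publishing on 127.0.0.1:9201 rather than 9200, and a log file named escluster.log would normally mean the running node's cluster.name is escluster, not the graylog value in my config. A quick way to see what is actually listening on those ports (a rough sketch; the output will differ per system):

sudo ss -tlnp | grep -E ':9200|:9201'
curl -s http://127.0.0.1:9200
curl -s http://127.0.0.1:9201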

Here’s an update from a fresh VM:

systemd output:

ubuntu@ip-172-31-2-181:~$ sudo systemctl status graylog-server
● graylog-server.service - Graylog server
   Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-01-08 21:06:40 UTC; 7min ago
     Docs: http://docs.graylog.org/
 Main PID: 29305 (graylog-server)
    Tasks: 118
   Memory: 642.8M
      CPU: 34.905s
   CGroup: /system.slice/graylog-server.service
           ├─29305 /bin/sh /usr/share/graylog-server/bin/graylog-server
           └─29307 /usr/bin/java -jar -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb /usr/share/graylog-server/graylog.jar server -f /etc/graylog/server/server.conf -np

Jan 08 21:06:55 ip-172-31-2-181 graylog-server[29305]: 21:06:55.655 [JerseyService STARTING] INFO  org.glassfish.grizzly.http.server.HttpServer - [HttpServer] Started.
Jan 08 21:06:55 ip-172-31-2-181 graylog-server[29305]: 21:06:55.655 [JerseyService STARTING] INFO  org.graylog2.shared.initializers.JerseyService - Started REST API at <http://0.0.0.0:12900/api/>
Jan 08 21:06:55 ip-172-31-2-181 graylog-server[29305]: 21:06:55.657 [JerseyService STARTING] INFO  org.graylog2.shared.initializers.JerseyService - Enabling CORS for HTTP endpoint
Jan 08 21:06:57 ip-172-31-2-181 graylog-server[29305]: 21:06:57.659 [JerseyService STARTING] INFO  org.glassfish.grizzly.http.server.NetworkListener - Started listener bound to [0.0.0.0:9000]
Jan 08 21:06:57 ip-172-31-2-181 graylog-server[29305]: 21:06:57.659 [JerseyService STARTING] INFO  org.glassfish.grizzly.http.server.HttpServer - [HttpServer-1] Started.
Jan 08 21:06:57 ip-172-31-2-181 graylog-server[29305]: 21:06:57.660 [JerseyService STARTING] INFO  org.graylog2.shared.initializers.JerseyService - Started Web Interface at <http://0.0.0.0:9000/>
Jan 08 21:06:57 ip-172-31-2-181 graylog-server[29305]: 21:06:57.660 [JerseyService STARTING] INFO  org.graylog2.shared.initializers.ServiceManagerListener - Services are healthy
Jan 08 21:06:57 ip-172-31-2-181 graylog-server[29305]: 21:06:57.661 [main] INFO  org.graylog2.bootstrap.ServerBootstrap - Services started, startup times in ms: {KafkaJournal [RUNNING]=18, OutputSetupService [RUNNING]=20, JournalReader [RUNNING]=20, InputSetupService [RUNNING]=20, BufferSynchronizerServic
Jan 08 21:06:57 ip-172-31-2-181 graylog-server[29305]: 21:06:57.662 [main] INFO  org.graylog2.bootstrap.ServerBootstrap - Graylog server up and running.
Jan 08 21:06:57 ip-172-31-2-181 graylog-server[29305]: 21:06:57.663 [eventbus-handler-1] INFO  org.graylog2.shared.initializers.InputSetupService - Triggering launching persisted inputs, node transitioned from Uninitialized?[LB:DEAD] to Running?[LB:ALIVE]

ubuntu@ip-172-31-2-181:~$ sudo systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-01-08 21:04:16 UTC; 12min ago
     Docs: http://www.elastic.co
 Main PID: 28161 (java)
    Tasks: 37
   Memory: 2.1G
      CPU: 13.937s
   CGroup: /system.slice/elasticsearch.service
           └─28161 /usr/bin/java -Xms1975m -Xmx1975m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUn

Jan 08 21:04:16 ip-172-31-2-181 systemd[1]: Starting Elasticsearch...
Jan 08 21:04:16 ip-172-31-2-181 systemd[1]: Started Elasticsearch.

Log Files:

root@ip-172-31-2-181:/home/ubuntu# cat /var/log/elasticsearch/graylog.log 
[2018-01-08T21:04:18,298][INFO ][o.e.n.Node               ] [graylog-test] initializing ...
[2018-01-08T21:04:18,415][INFO ][o.e.e.NodeEnvironment    ] [graylog-test] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [5.4gb], net total_space [7.6gb], spins? [no], types [ext4]
[2018-01-08T21:04:18,415][INFO ][o.e.e.NodeEnvironment    ] [graylog-test] heap size [1.9gb], compressed ordinary object pointers [true]
[2018-01-08T21:04:18,416][INFO ][o.e.n.Node               ] [graylog-test] node name [graylog-test], node ID [oOd-h3mlTwWDR8c-fqSmeQ]
[2018-01-08T21:04:18,417][INFO ][o.e.n.Node               ] [graylog-test] version[5.6.2], pid[28161], build[57e20f3/2017-09-23T13:16:45.703Z], OS[Linux/4.4.0-1041-aws/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
[2018-01-08T21:04:18,417][INFO ][o.e.n.Node               ] [graylog-test] JVM arguments [-Xms1975m, -Xmx1975m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2018-01-08T21:04:19,687][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [aggs-matrix-stats]
[2018-01-08T21:04:19,687][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [ingest-common]
[2018-01-08T21:04:19,687][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [lang-expression]
[2018-01-08T21:04:19,688][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [lang-groovy]
[2018-01-08T21:04:19,688][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [lang-mustache]
[2018-01-08T21:04:19,688][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [lang-painless]
[2018-01-08T21:04:19,688][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [parent-join]
[2018-01-08T21:04:19,688][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [percolator]
[2018-01-08T21:04:19,689][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [reindex]
[2018-01-08T21:04:19,689][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [transport-netty3]
[2018-01-08T21:04:19,689][INFO ][o.e.p.PluginsService     ] [graylog-test] loaded module [transport-netty4]
[2018-01-08T21:04:19,689][INFO ][o.e.p.PluginsService     ] [graylog-test] no plugins loaded
[2018-01-08T21:04:22,513][INFO ][o.e.d.DiscoveryModule    ] [graylog-test] using discovery type [zen]
[2018-01-08T21:04:22,991][INFO ][o.e.n.Node               ] [graylog-test] initialized
[2018-01-08T21:04:22,991][INFO ][o.e.n.Node               ] [graylog-test] starting ...
[2018-01-08T21:04:23,160][INFO ][o.e.t.TransportService   ] [graylog-test] publish_address {172.31.2.181:9300}, bound_addresses {[::]:9300}
[2018-01-08T21:04:23,170][INFO ][o.e.b.BootstrapChecks    ] [graylog-test] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2018-01-08T21:04:26,231][INFO ][o.e.c.s.ClusterService   ] [graylog-test] new_master {graylog-test}{oOd-h3mlTwWDR8c-fqSmeQ}{6AFja72GTBunbjXKLMyWiw}{172.31.2.181}{172.31.2.181:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-01-08T21:04:26,261][INFO ][o.e.g.GatewayService     ] [graylog-test] recovered [0] indices into cluster_state
[2018-01-08T21:04:26,263][INFO ][o.e.h.n.Netty4HttpServerTransport] [graylog-test] publish_address {172.31.2.181:9200}, bound_addresses {[::]:9200}
[2018-01-08T21:04:26,263][INFO ][o.e.n.Node               ] [graylog-test] started
[2018-01-08T21:06:47,793][INFO ][o.e.c.m.MetaDataCreateIndexService] [graylog-test] [graylog_0] creating index, cause [api], templates [graylog-internal], shards [1]/[0], mappings [message]
[2018-01-08T21:06:48,149][INFO ][o.e.c.r.a.AllocationService] [graylog-test] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[graylog_0][0]] ...]).

root@ip-172-31-2-181:/home/ubuntu# cat /var/log/graylog/collector-sidecar/collector_sidecar.log
time="2018-01-08T21:05:25Z" level=info msg="Starting signal distributor" 
time="2018-01-08T21:05:25Z" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-01-08T21:05:26Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 1/3." 
time="2018-01-08T21:05:26Z" level=info msg="[filebeat] Stopping" 
time="2018-01-08T21:05:28Z" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-01-08T21:05:29Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 2/3." 
time="2018-01-08T21:05:29Z" level=info msg="[filebeat] Stopping" 
time="2018-01-08T21:05:31Z" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-01-08T21:05:32Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 3/3." 
time="2018-01-08T21:05:32Z" level=info msg="[filebeat] Stopping" 
time="2018-01-08T21:05:34Z" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-01-08T21:05:35Z" level=error msg="[RequestConfiguration] Fetching configuration failed: Get http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9?tags=%5B%22linux%22%2C%22apache%22%5D: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:05:35Z" level=error msg="[filebeat] Unable to start collector after 3 tries, giving up!" 
time="2018-01-08T21:05:35Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/collectors/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:05:45Z" level=error msg="[RequestConfiguration] Fetching configuration failed: Get http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9?tags=%5B%22linux%22%2C%22apache%22%5D: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:05:45Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/collectors/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:05:55Z" level=error msg="[RequestConfiguration] Fetching configuration failed: Get http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9?tags=%5B%22linux%22%2C%22apache%22%5D: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:05:55Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/collectors/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:05Z" level=error msg="[RequestConfiguration] Fetching configuration failed: Get http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9?tags=%5B%22linux%22%2C%22apache%22%5D: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:05Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/collectors/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:15Z" level=error msg="[RequestConfiguration] Fetching configuration failed: Get http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9?tags=%5B%22linux%22%2C%22apache%22%5D: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:15Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/collectors/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:25Z" level=error msg="[RequestConfiguration] Fetching configuration failed: Get http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9?tags=%5B%22linux%22%2C%22apache%22%5D: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:25Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/collectors/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:35Z" level=error msg="[RequestConfiguration] Fetching configuration failed: Get http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9?tags=%5B%22linux%22%2C%22apache%22%5D: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:35Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://127.0.0.1:9000/api/plugins/org.graylog.plugins.collector/collectors/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9: dial tcp 127.0.0.1:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:40Z" level=info msg="Stopping signal distributor" 
time="2018-01-08T21:06:40Z" level=info msg="Starting signal distributor" 
time="2018-01-08T21:06:40Z" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-01-08T21:06:41Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 1/3." 
time="2018-01-08T21:06:41Z" level=info msg="[filebeat] Stopping" 
time="2018-01-08T21:06:43Z" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-01-08T21:06:44Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 2/3." 
time="2018-01-08T21:06:44Z" level=info msg="[filebeat] Stopping" 
time="2018-01-08T21:06:46Z" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-01-08T21:06:47Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 3/3." 
time="2018-01-08T21:06:47Z" level=info msg="[filebeat] Stopping" 
time="2018-01-08T21:06:49Z" level=info msg="[filebeat] Starting (exec driver)" 
time="2018-01-08T21:06:50Z" level=error msg="[RequestConfiguration] Fetching configuration failed: Get http://0.0.0.0:9000/api/plugins/org.graylog.plugins.collector/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9?tags=%5B%22linux%22%5D: dial tcp 0.0.0.0:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:50Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://0.0.0.0:9000/api/plugins/org.graylog.plugins.collector/collectors/d58353f9-f3ad-4a5f-8e7a-e470b67ab6b9: dial tcp 0.0.0.0:9000: getsockopt: connection refused" 
time="2018-01-08T21:06:50Z" level=error msg="[filebeat] Unable to start collector after 3 tries, giving up!"

I guess you should get your Elasticsearch configuration sorted out.

Neither of those Elasticsearch publish addresses (127.0.0.1:9201 and 172.31.2.181:9200) matches the elasticsearch_hosts setting in your Graylog configuration file.

I updated my post. I think you’re referencing two different versions of the output. Which files would you recommend I post to avoid further confusion? As far as I can tell, everything should be “ok”.

Correct, and both show that Elasticsearch is not listening on 127.0.0.1:9200.

In addition, your collector is not connecting to your Graylog server's REST API.

From your collector-sidecar.yml

server_url: http://0.0.0.0:9000/api/

That needs to be an IP address or hostname at which the sidecar can reach the REST API.
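
For example, assuming the REST API stays on port 12900 as in the server.conf above and the server's private address is 172.31.2.181, something along the lines of:

server_url: http://172.31.2.181:12900/api/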

I am starting to pull my hair out over this. I made the suggested modifications, but Graylog still fails to load. All services are running on the machine. Depending on how I reconfigure server.conf, I consistently end up with one of two different pages:

server.conf with the settings below results in the following screenshot, which shows http://172.31.2.181:9000/api/ as the Graylog REST API URI:

rest_listen_uri = http://172.31.2.181:12900/api/
# rest_transport_uri = $rest_listen_uri
web_listen_uri = http://172.31.2.181:9000
web_endpoint_uri = http://172.31.2.181:12900/api/
elasticsearch_hosts = http://172.31.2.181:9200/
elasticsearch_discovery_zen_ping_unicast_hosts = 127.0.0.1
elasticsearch_transport_tcp_port = 9300

OR

server.conf with the settings below results in a 404 when logging in:

rest_listen_uri = http://172.31.2.181:12900/api/
# rest_transport_uri = $rest_listen_uri (default)
web_listen_uri = http://172.31.2.181:9000
# web_endpoint_uri = $rest_transport_uri (default)
elasticsearch_hosts = http://172.31.2.181:9200/
elasticsearch_discovery_zen_ping_unicast_hosts = 127.0.0.1
elasticsearch_transport_tcp_port = 9300

I feel like I am fundamentally misunderstanding how to configure this service. I’ve tried everything from the docs, but nothing works for me.
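
If I am reading the docs right, web_endpoint_uri should be the REST API address as seen from the browser, and it defaults to rest_transport_uri, which in turn defaults to rest_listen_uri. So a self-consistent combination for this host (assuming the browser can reach 172.31.2.181 directly) would look roughly like:

rest_listen_uri = http://172.31.2.181:12900/api/
# rest_transport_uri and web_endpoint_uri left at their defaults,
# i.e. derived from rest_listen_uri
web_listen_uri = http://172.31.2.181:9000/
elasticsearch_hosts = http://172.31.2.181:9200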

Additional goodies:

elasticsearch.yml:

---
cluster.name: graylog
node.name: graylog-test
path.conf: "/etc/elasticsearch"
path.data: "/var/lib/elasticsearch"
path.logs: "/var/log/elasticsearch"
network.host: 172.31.2.181
http.port: 9200
transport.tcp.port: 9300

collector_sidecar.yml:

server_url: http://172.31.2.181:12900/api
update_interval: 10
tls_skip_verify: false
send_status: false
list_log_files: 
node_id: 
collector_id: file:/etc/graylog/collector-sidecar/collector-id
cache_path: /var/cache/graylog/collector-sidecar
log_path: /var/log/graylog/collector-sidecar
log_rotation_time: 86400
log_max_age: 604800
tags: linux
backends:
    - name: nxlog
      enabled: false
      binary_path: /usr/bin/nxlog
      configuration_path: /etc/graylog/collector-sidecar/generated/nxlog.conf
    - name: filebeat
      enabled: true
      binary_path: /usr/bin/filebeat
      configuration_path: /etc/graylog/collector-sidecar/generated/filebeat.yml

Neither of your two variants of the Graylog configuration file contains http://172.31.2.181:9000/api/ as the URI for the Graylog REST API (which is shown on the screenshot).

Are you sure you’re editing the correct files?
Have you restarted Graylog after modifying the configuration file?
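
One way to rule that out, given that the unit file shown earlier starts the server with -f /etc/graylog/server/server.conf, would be to check the values in that exact file, restart, and then probe the configured REST listener directly. A rough sketch (the HTTP status code matters less than whether the connection succeeds on the new address):

grep -E '^(rest_listen_uri|web_listen_uri|web_endpoint_uri|elasticsearch_hosts)' /etc/graylog/server/server.conf
sudo systemctl restart graylog-server
# give the server a minute to start, then:
curl -s -o /dev/null -w '%{http_code}\n' http://172.31.2.181:12900/api/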
