Elasticsearch cluster not available, skipping index retention checks

Current Behavior

Graylog is not able to connect to Elasticsearch, as I can see in the logs:

2017-07-25T15:27:08.292Z INFO  [node] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] starting ...
2017-07-25T15:27:08.294Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlertScannerThread] periodical in [10s], polling every [60s].
2017-07-25T15:27:08.294Z INFO  [Periodicals] Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling every [1s].
2017-07-25T15:27:08.295Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [120s], polling every [20s].
2017-07-25T15:27:08.296Z INFO  [Periodicals] Starting [org.graylog2.periodical.ContentPackLoaderPeriodical] periodical, running forever.
2017-07-25T15:27:08.296Z INFO  [Periodicals] Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
2017-07-25T15:27:08.296Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2017-07-25T15:27:08.297Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRetentionThread] periodical in [0s], polling every [300s].
2017-07-25T15:27:08.297Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRotationThread] periodical in [0s], polling every [10s].
2017-07-25T15:27:08.297Z INFO  [IndexRetentionThread] Elasticsearch cluster not available, skipping index retention checks.
2017-07-25T15:27:08.299Z INFO  [Periodicals] Starting [org.graylog2.periodical.NodePingThread] periodical in [0s], polling every [1s].
2017-07-25T15:27:08.300Z INFO  [Periodicals] Starting [org.graylog2.periodical.VersionCheckThread] periodical in [300s], polling every [1800s].
2017-07-25T15:27:08.301Z INFO  [Periodicals] Starting [org.graylog2.periodical.ThrottleStateUpdaterThread] periodical in [1s], polling every [1s].
2017-07-25T15:27:08.302Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
2017-07-25T15:27:08.302Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventCleanupPeriodical] periodical in [0s], polling every [86400s].
2017-07-25T15:27:08.303Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterIdGeneratorPeriodical] periodical, running forever.
2017-07-25T15:27:08.303Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesMigrationPeriodical] periodical, running forever.
2017-07-25T15:27:08.303Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], polling every [3600s].
2017-07-25T15:27:08.304Z INFO  [connection] Opened connection [connectionId{localValue:3, serverValue:330}] to 127.0.0.1:27017
2017-07-25T15:27:08.304Z INFO  [connection] Opened connection [connectionId{localValue:5, serverValue:332}] to 127.0.0.1:27017
2017-07-25T15:27:08.304Z INFO  [connection] Opened connection [connectionId{localValue:6, serverValue:333}] to 127.0.0.1:27017
2017-07-25T15:27:08.304Z INFO  [connection] Opened connection [connectionId{localValue:4, serverValue:331}] to 127.0.0.1:27017
2017-07-25T15:27:08.307Z INFO  [connection] Opened connection [connectionId{localValue:7, serverValue:334}] to 127.0.0.1:27017
2017-07-25T15:27:08.313Z INFO  [connection] Opened connection [connectionId{localValue:8, serverValue:335}] to 127.0.0.1:27017
2017-07-25T15:27:08.313Z INFO  [connection] Opened connection [connectionId{localValue:9, serverValue:336}] to 127.0.0.1:27017
2017-07-25T15:27:08.334Z INFO  [PeriodicalsService] Not starting [org.graylog2.periodical.UserPermissionMigrationPeriodical] periodical. Not configured to run on this node.
2017-07-25T15:27:08.334Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlarmCallbacksMigrationPeriodical] periodical, running forever.
2017-07-25T15:27:08.335Z INFO  [Periodicals] Starting [org.graylog2.periodical.ConfigurationManagementPeriodical] periodical, running forever.
2017-07-25T15:27:08.340Z INFO  [Periodicals] Starting [org.graylog2.periodical.LdapGroupMappingMigration] periodical, running forever.
2017-07-25T15:27:08.340Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexFailuresPeriodical] periodical, running forever.
2017-07-25T15:27:08.341Z INFO  [Periodicals] Starting [org.graylog.plugins.usagestatistics.UsageStatsNodePeriodical] periodical in [300s], polling every [21600s].
2017-07-25T15:27:08.341Z INFO  [Periodicals] Starting [org.graylog.plugins.usagestatistics.UsageStatsClusterPeriodical] periodical in [300s], polling every [21600s].
2017-07-25T15:27:08.343Z INFO  [Periodicals] Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
2017-07-25T15:27:08.344Z INFO  [Periodicals] Starting [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] periodical in [0s], polling every [3600s].
2017-07-25T15:27:08.428Z INFO  [LegacyDefaultStreamMigration] Legacy default stream has no connections, no migration needed.
2017-07-25T15:27:08.444Z INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2017-07-25T15:27:08.509Z INFO  [V20161130141500_DefaultStreamRecalcIndexRanges] Cluster not connected yet, delaying migration until it is reachable.
2017-07-25T15:27:08.566Z INFO  [transport] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] publish_address {127.0.0.1:9350}, bound_addresses {127.0.0.1:9350}
2017-07-25T15:27:08.570Z INFO  [discovery] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] graylog/5dFjDDDvRimPW9VOdogNQg
2017-07-25T15:27:08.684Z INFO  [JerseyService] Enabling CORS for HTTP endpoint
2017-07-25T15:27:11.572Z WARN  [discovery] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] waited for 3s and no initial state was set by the discovery
2017-07-25T15:27:11.573Z INFO  [node] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] started
2017-07-25T15:27:11.626Z INFO  [service] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] detected_master {graylog}{alHd4ZyiROS7paFPZjU8sA}{94.130.32.23}{94.130.32.23:9300}{master=true}, added {{graylog}{alHd4ZyiROS7paFPZjU8sA}{94.130.32.23}{94.130.32.23:9300}{master=true},}, reason: zen-disco-receive(from master [{graylog}{alHd4ZyiROS7paFPZjU8sA}{94.130.32.23}{94.130.32.23:9300}{master=true}])
2017-07-25T15:27:16.583Z INFO  [NetworkListener] Started listener bound to [0.0.0.0:9000]
2017-07-25T15:27:16.584Z INFO  [HttpServer] [HttpServer] Started.
2017-07-25T15:27:16.584Z INFO  [JerseyService] Started REST API at <http://0.0.0.0:9000/api/>
2017-07-25T15:27:16.584Z INFO  [JerseyService] Started Web Interface at <http://0.0.0.0:9000/>
2017-07-25T15:27:16.585Z INFO  [ServiceManagerListener] Services are healthy
2017-07-25T15:27:16.586Z INFO  [ServerBootstrap] Services started, startup times in ms: {InputSetupService [RUNNING]=8, KafkaJournal [RUNNING]=11, JournalReader [RUNNING]=11, OutputSetupService [RUNNING]=11, BufferSynchronizerService [RUNNING]=15, ConfigurationEtagService [RUNNING]=18, StreamCacheService [RUNNING]=130, PeriodicalsService [RUNNING]=142, IndexerSetupService [RUNNING]=3347, JerseyService [RUNNING]=8296}
2017-07-25T15:27:16.586Z INFO  [InputSetupService] Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
2017-07-25T15:27:16.594Z INFO  [ServerBootstrap] Graylog server up and running.
2017-07-25T15:27:16.606Z INFO  [InputStateListener] Input [GELF UDP/59774ed1dc3aaa50d9cc7d39] is now STARTING
2017-07-25T15:27:16.640Z WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input GELFUDPInput{title=sao, type=org.graylog2.inputs.gelf.udp.GELFUDPInput, nodeId=null} should be 262144 but is 212992.
2017-07-25T15:27:16.642Z INFO  [InputStateListener] Input [GELF UDP/59774ed1dc3aaa50d9cc7d39] is now RUNNING
2017-07-25T15:27:17.968Z DEBUG [OffsetIndex] Adding index entry 61454 => 32693296 to 00000000000000039096.index.
2017-07-25T15:27:18.054Z DEBUG [OffsetIndex] Adding index entry 61459 => 32700617 to 00000000000000039096.index.

Context

I am sending some logs to a GELF UDP input of Graylog. I can see the input is arriving, as the message count is increasing, but there is no output. When I check the logs, it says:

2017-07-25T15:27:08.297Z INFO [IndexRetentionThread] Elasticsearch cluster not available, skipping index retention checks.

The configurations are below. Everything is running on the same box.

Graylog server conf:

is_master = True
node_id_file = /etc/graylog/server/node-id
password_secret =***********
root_username = admin
root_password_sha2 = ****************
root_email =
root_timezone = UTC
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://0.0.0.0:9000/api/
rest_enable_cors = True
rest_enable_gzip = True
rest_enable_tls = False
rest_tls_cert_file = /path/to/graylog.crt
rest_tls_key_file = /path/to/graylog.key
rest_tls_key_password = secret
rest_max_header_size = 8192
rest_max_initial_line_length = 4096
rest_thread_pool_size = 16
web_enable = True
web_listen_uri = http://0.0.0.0:9000/
web_enable_cors = True
web_enable_gzip = True
web_enable_tls = False
web_tls_cert_file =
web_tls_key_file =
web_tls_key_password =
web_max_header_size = 8192
web_max_initial_line_length = 4096
web_thread_pool_size = 16
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_size_per_index = 1073741824
elasticsearch_max_time_per_index = 1d
elasticsearch_disable_version_check = True
no_retention = False
elasticsearch_max_number_of_indices = 30
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
elasticsearch_template_name = graylog-internal
allow_leading_wildcard_searches = False
allow_highlighting = False
elasticsearch_cluster_name = graylog
elasticsearch_node_name_prefix = graylog-
elasticsearch_discovery_zen_ping_unicast_hosts = 127.0.0.1:9300
elasticsearch_node_master = false
elasticsearch_node_data = false
elasticsearch_transport_tcp_port = 9350
elasticsearch_http_enabled = False
elasticsearch_cluster_discovery_timeout = 5000
elasticsearch_network_host =
elasticsearch_network_bind_host =
elasticsearch_network_publish_host =
elasticsearch_discovery_initial_state_timeout = 3s
elasticsearch_analyzer = standard
elasticsearch_request_timeout = 1m
index_ranges_cleanup_interval = 1h
output_batch_size = 25
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
outputbuffer_processor_keep_alive_time = 5000
outputbuffer_processor_threads_core_pool_size = 3
outputbuffer_processor_threads_max_pool_size = 30
udp_recvbuffer_sizes = 1048576
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = True
message_journal_dir = /var/lib/graylog-server/journal
message_journal_max_age = 12h
message_journal_max_size = 5gb
message_journal_flush_age = 1m
message_journal_flush_interval = 1000000
message_journal_segment_age = 1h
message_journal_segment_size = 100mb
async_eventbus_processors = 2
lb_recognition_period_seconds = 3
lb_throttle_threshold_percentage = 95
stream_processing_timeout = 2000
stream_processing_max_faults = 3
alert_check_interval = 60
output_module_timeout = 10000
stale_master_timeout = 2000
shutdown_timeout = 30000
mongodb_uri = mongodb://127.0.0.1:27017/graylog
mongodb_max_connections = 100
mongodb_threads_allowed_to_block_multiplier = 5
rules_file =
transport_email_enabled = False
transport_email_hostname =
transport_email_port = 587
transport_email_use_auth = True
transport_email_use_tls = True
transport_email_use_ssl = True
transport_email_auth_username =
transport_email_auth_password =
transport_email_subject_prefix = [graylog]
transport_email_from_email =
transport_email_web_interface_url =
http_connect_timeout = 5s
http_read_timeout = 10s
http_write_timeout = 10s
disable_index_optimization = True
index_optimization_max_num_segments = 1
gc_warning_threshold = 1s
ldap_connection_timeout = 2000
disable_sigar = False
dashboard_widget_default_cache_time = 10s
content_packs_loader_enabled = True
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load =
proxied_requests_thread_pool_size = 32

Elasticsearch conf:

bootstrap.mlockall: false
cluster.name: graylog
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: 127.0.0.1:9300
http.port: 9200
network.host: 0.0.0.0
node.data: true
node.master: true
node.name: graylog
transport.tcp.port: 9300
path.conf: /etc/elasticsearch/graylog
path.data: /var/lib/elasticsearch/graylog.example-graylog
path.work: /tmp/elasticsearch/graylog.example-graylog
path.logs: /var/log/elasticsearch/graylog.example-graylog
  • When I check the Elasticsearch cluster health, it's green, and the Graylog interface also shows everything is good.

curl localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "graylog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 4,
  "active_shards" : 4,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
  • I tried sending a sample message as well; Elasticsearch works.
curl -X POST 'http://localhost:9200/tutorial/helloworld/1' -d '{ "message": "Hello World!" }'
{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"_shards":{"total":1,"successful":1,"failed":0},"created":true}root@graylog:/tmp#
root@graylog:/tmp#
curl -X GET 'http://localhost:9200/tutorial/helloworld/1'
{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"found":true,"_source":{ "message": "Hello World!" }}root@graylog:/tmp#
  • Graylog Version:
graylog-server                      2.2.3-1
  • Elasticsearch Version:
ii  elasticsearch                       2.4.3
  • MongoDB Version:
ii  mongodb-org                         3.2.15                                amd64        MongoDB open source document-oriented database system (metapackage)
ii  mongodb-org-mongos                  2.6.12                                amd64        MongoDB sharded cluster query router
ii  mongodb-org-server                  3.2.15                                amd64        MongoDB database server
ii  mongodb-org-shell                   3.2.15                                amd64        MongoDB shell client
ii  mongodb-org-tools                   2.6.12                                amd64        MongoDB tools
  • Operating System:
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.2 LTS
Release:	16.04
Codename:	xenial

A few more details:

@joschi, thanks for the help.

I've checked that there are no iptables rules and the socket is available. Everything is running on the same box.

root@graylog:/etc/elasticsearch# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain sshguard (0 references)
target     prot opt source               destination

I've also checked the netstat output:

root@graylog:/etc/elasticsearch# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 94.130.32.23:9300       0.0.0.0:*               LISTEN      13617/java
tcp        0      0 94.130.32.23:12900      0.0.0.0:*               LISTEN      14292/java
tcp        0      0 94.130.32.23:9350       0.0.0.0:*               LISTEN      14292/java
tcp        0      0 94.130.32.23:9000       0.0.0.0:*               LISTEN      14292/java
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      11188/mongod
tcp        0      0 94.130.32.23:9200       0.0.0.0:*               LISTEN      13617/java
root@graylog:/etc/elasticsearch# curl 94.130.32.23:9200/_cat/master?v
id                     host         ip           node
iHLQiWFXSKC6QwwLe-dQIA 94.130.32.23 94.130.32.23 graylog-logger

Please help.

Many thanks.

You've configured elasticsearch_discovery_zen_ping_unicast_hosts to contact 127.0.0.1:9300 for the Elasticsearch node, but it actually runs on 94.130.32.23:9300 (see the netstat output).

That obviously won’t work. Either bind the Elasticsearch node to 127.0.0.1 (via the network.host setting in the elasticsearch.yml configuration file) or set elasticsearch_discovery_zen_ping_unicast_hosts (in the Graylog configuration file) to the correct address.
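
For illustration, a minimal sketch of the two alternatives, using the addresses and file paths from this thread (apply one of them, then restart the corresponding service):

# Option 1: bind the Elasticsearch node to loopback
# /etc/elasticsearch/graylog/elasticsearch.yml
network.host: 127.0.0.1

# Option 2: point Graylog at the address Elasticsearch actually listens on
# /etc/graylog/server/server.conf
elasticsearch_discovery_zen_ping_unicast_hosts = 94.130.32.23:9300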

Thanks, @jochen, for the message.

As you suggested, I've bound Elasticsearch to localhost (127.0.0.1):

root@graylog:/etc/graylog/server# cat /etc/elasticsearch/graylog/elasticsearch.yml

discovery.zen.ping.timeout: 10s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300"]
cluster.name: graylog
node.name: graylog-vikilogger
network.host: 127.0.0.1


root@graylog:/etc/graylog/server# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:9300          0.0.0.0:*               LISTEN      5913/java
tcp        0      0 127.0.0.1:9350          0.0.0.0:*               LISTEN      5547/java
tcp        0      0 0.0.0.0:9000            0.0.0.0:*               LISTEN      5547/java
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1048/mongod
tcp        0      0 0.0.0.0:1003            0.0.0.0:*               LISTEN      1053/sshd
tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      5913/java
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1119/nginx -g daemo
tcp6       0      0 :::1003                 :::*                    LISTEN      1053/sshd
root@graylog:/etc/graylog/server# grep -i elasticsearch server.conf
elasticsearch_shards = 4
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog
elasticsearch_cluster_name = graylog
elasticsearch_http_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 127.0.0.1:9300
elasticsearch_network_host = 127.0.0.1
elasticsearch_cluster_discovery_timeout = 5000
elasticsearch_discovery_initial_state_timeout = 3s
elasticsearch_analyzer = standard
root@graylog:/etc/graylog/server# 

I still see the same message:


2017-07-26T14:29:24.115Z INFO  [node] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] starting ...
2017-07-26T14:29:24.116Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlertScannerThread] periodical in [10s], polling every [60s].
2017-07-26T14:29:24.116Z INFO  [Periodicals] Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling every [1s].
2017-07-26T14:29:24.119Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [120s], polling every [20s].
2017-07-26T14:29:24.120Z INFO  [Periodicals] Starting [org.graylog2.periodical.ContentPackLoaderPeriodical] periodical, running forever.
2017-07-26T14:29:24.120Z INFO  [Periodicals] Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
2017-07-26T14:29:24.120Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2017-07-26T14:29:24.121Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRetentionThread] periodical in [0s], polling every [300s].
2017-07-26T14:29:24.121Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRotationThread] periodical in [0s], polling every [10s].
2017-07-26T14:29:24.121Z INFO  [IndexRetentionThread] Elasticsearch cluster not available, skipping index retention checks.
2017-07-26T14:29:24.122Z INFO  [Periodicals] Starting [org.graylog2.periodical.NodePingThread] periodical in [0s], polling every [1s].
2017-07-26T14:29:24.127Z INFO  [Periodicals] Starting [org.graylog2.periodical.VersionCheckThread] periodical in [300s], polling every [1800s].
2017-07-26T14:29:24.128Z INFO  [Periodicals] Starting [org.graylog2.periodical.ThrottleStateUpdaterThread] periodical in [1s], polling every [1s].
2017-07-26T14:29:24.128Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
2017-07-26T14:29:24.128Z INFO  [connection] Opened connection [connectionId{localValue:4, serverValue:48}] to 127.0.0.1:27017
2017-07-26T14:29:24.128Z INFO  [connection] Opened connection [connectionId{localValue:3, serverValue:49}] to 127.0.0.1:27017
2017-07-26T14:29:24.129Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventCleanupPeriodical] periodical in [0s], polling every [86400s].
2017-07-26T14:29:24.130Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterIdGeneratorPeriodical] periodical, running forever.
2017-07-26T14:29:24.130Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesMigrationPeriodical] periodical, running forever.
2017-07-26T14:29:24.131Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], polling every [3600s].
2017-07-26T14:29:24.134Z INFO  [connection] Opened connection [connectionId{localValue:6, serverValue:50}] to 127.0.0.1:27017
2017-07-26T14:29:24.134Z INFO  [connection] Opened connection [connectionId{localValue:5, serverValue:51}] to 127.0.0.1:27017
2017-07-26T14:29:24.138Z INFO  [connection] Opened connection [connectionId{localValue:9, serverValue:54}] to 127.0.0.1:27017
2017-07-26T14:29:24.138Z INFO  [connection] Opened connection [connectionId{localValue:7, serverValue:53}] to 127.0.0.1:27017
2017-07-26T14:29:24.138Z INFO  [connection] Opened connection [connectionId{localValue:8, serverValue:52}] to 127.0.0.1:27017
2017-07-26T14:29:24.139Z INFO  [connection] Opened connection [connectionId{localValue:10, serverValue:55}] to 127.0.0.1:27017
2017-07-26T14:29:24.162Z INFO  [PeriodicalsService] Not starting [org.graylog2.periodical.UserPermissionMigrationPeriodical] periodical. Not configured to run on this node.
2017-07-26T14:29:24.162Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlarmCallbacksMigrationPeriodical] periodical, running forever.
2017-07-26T14:29:24.162Z INFO  [Periodicals] Starting [org.graylog2.periodical.ConfigurationManagementPeriodical] periodical, running forever.
2017-07-26T14:29:24.239Z INFO  [Periodicals] Starting [org.graylog2.periodical.LdapGroupMappingMigration] periodical, running forever.
2017-07-26T14:29:24.240Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexFailuresPeriodical] periodical, running forever.
2017-07-26T14:29:24.241Z INFO  [Periodicals] Starting [org.graylog.plugins.usagestatistics.UsageStatsNodePeriodical] periodical in [300s], polling every [21600s].
2017-07-26T14:29:24.242Z INFO  [Periodicals] Starting [org.graylog.plugins.usagestatistics.UsageStatsClusterPeriodical] periodical in [300s], polling every [21600s].
2017-07-26T14:29:24.244Z INFO  [Periodicals] Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
2017-07-26T14:29:24.245Z INFO  [Periodicals] Starting [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] periodical in [0s], polling every [3600s].
2017-07-26T14:29:24.247Z INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2017-07-26T14:29:24.252Z INFO  [LegacyDefaultStreamMigration] Legacy default stream has no connections, no migration needed.
2017-07-26T14:29:24.323Z INFO  [V20161130141500_DefaultStreamRecalcIndexRanges] Cluster not connected yet, delaying migration until it is reachable.
2017-07-26T14:29:24.362Z INFO  [transport] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] publish_address {127.0.0.1:9350}, bound_addresses {127.0.0.1:9350}
2017-07-26T14:29:24.367Z INFO  [discovery] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] graylog/OSiuAXUARpCHhkdcoclgNg
2017-07-26T14:29:24.485Z INFO  [JerseyService] Enabling CORS for HTTP endpoint
2017-07-26T14:29:27.369Z WARN  [discovery] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] waited for 3s and no initial state was set by the discovery
2017-07-26T14:29:27.369Z INFO  [node] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] started
2017-07-26T14:29:27.427Z INFO  [service] [graylog-a2c36082-47d7-446d-acbd-84e010df1fcd] detected_master {graylog-vikilogger}{lGPTY7uoSPW5K9nEPgfS4A}{127.0.0.1}{127.0.0.1:9300}, added {{graylog-vikilogger}{lGPTY7uoSPW5K9nEPgfS4A}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-receive(from master [{graylog-vikilogger}{lGPTY7uoSPW5K9nEPgfS4A}{127.0.0.1}{127.0.0.1:9300}])
2017-07-26T14:29:32.242Z INFO  [NetworkListener] Started listener bound to [0.0.0.0:9000]
2017-07-26T14:29:32.243Z INFO  [HttpServer] [HttpServer] Started.
2017-07-26T14:29:32.244Z INFO  [JerseyService] Started REST API at <http://0.0.0.0:9000/api/>
2017-07-26T14:29:32.244Z INFO  [JerseyService] Started Web Interface at <http://0.0.0.0:9000/>
2017-07-26T14:29:32.245Z INFO  [ServiceManagerListener] Services are healthy
2017-07-26T14:29:32.246Z INFO  [InputSetupService] Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
2017-07-26T14:29:32.246Z INFO  [ServerBootstrap] Services started, startup times in ms: {KafkaJournal [RUNNING]=6, JournalReader [RUNNING]=8, InputSetupService [RUNNING]=9, OutputSetupService [RUNNING]=12, BufferSynchronizerService [RUNNING]=13, ConfigurationEtagService [RUNNING]=17, PeriodicalsService [RUNNING]=139, StreamCacheService [RUNNING]=140, IndexerSetupService [RUNNING]=3325, JerseyService [RUNNING]=8132}
2017-07-26T14:29:32.254Z INFO  [ServerBootstrap] Graylog server up and running.
2017-07-26T14:29:32.266Z INFO  [InputStateListener] Input [GELF UDP/59774ed1dc3aaa50d9cc7d39] is now STARTING
2017-07-26T14:29:32.322Z INFO  [InputStateListener] Input [GELF UDP/59774ed1dc3aaa50d9cc7d39] is now RUNNING
2017-07-26T14:29:34.179Z DEBUG [OffsetIndex] Adding index entry 21962 => 31917190 to 00000000000000000000.index.
2017-07-26T14:29:35.178Z DEBUG [OffsetIndex] Adding index entry 21965 => 31921786 to 00000000000000000000.index.
2017-07-26T14:29:36.533Z DEBUG [AbstractValidatingSessionManager] No sessionValidationScheduler set.  Attempting to create default instance.
2017-07-26T14:29:36.534Z INFO  [AbstractValidatingSessionManager] Enabling session validation scheduler...
2017-07-26T14:29:37.185Z DEBUG [OffsetIndex] Adding index entry 21969 => 31927028 to 00000000000000000000.index.
2017-07-26T14:29:38.194Z DEBUG [OffsetIndex] Adding index entry 21972 => 31931239 to 00000000000000000000.index.

I don’t spot a single error message in these logs.

This is the one I am looking at:

2017-07-26T14:29:24.121Z INFO  [IndexRetentionThread] Elasticsearch cluster not available, skipping index retention checks.

Plus, I only see incoming messages; nothing goes into Elasticsearch.

That IndexRetentionThread message is logged during startup, before the connection has been established. The detected_master line later in your log shows that Graylog successfully connected to the Elasticsearch cluster.

Yes, I can see the instance has joined the cluster fine:

root@graylog:/etc/graylog/server# curl localhost:9200/_cat/nodes
127.0.0.1 127.0.0.1  1 13 0.04 d * graylog-logger
127.0.0.1 127.0.0.1 14 13 0.04 c - graylog-a2c36082-47d7-446d-acbd-84e010df1fcd

But nothing goes into Elasticsearch:

root@graylog:/etc/graylog/server# curl -s 'localhost:9200/_cat/indices?v'
health status index     pri rep docs.count docs.deleted store.size pri.store.size
green  open   graylog_0   4   0          0            0       640b           640b
root@graylog:/etc/graylog/server# curl -s 'localhost:9200/_cat/shards'
graylog_0 2 p STARTED 0 160b 127.0.0.1 graylog-logger
graylog_0 1 p STARTED 0 160b 127.0.0.1 graylog-logger
graylog_0 3 p STARTED 0 160b 127.0.0.1 graylog-logger
graylog_0 0 p STARTED 0 160b 127.0.0.1 graylog-logger

I don't understand why there is no output for the messages.

What type of inputs are you using?
How did you configure these inputs?
How are the clients sending messages to these inputs?

What type of inputs are you using?
I am using a GELF UDP input in Graylog:

> db.inputs.find()
{ "_id" : ObjectId("59774ed1dc3aaa50d9cc7d39"), "creator_user_id" : "admin", "configuration" : { "override_source" : null, "recv_buffer_size" : 262144, "bind_address" : "0.0.0.0", "port" : 12201, "decompress_size_limit" : 8388608 }, "name" : "GELF UDP", "created_at" : ISODate("2017-07-26T10:08:02.541Z"), "global" : false, "type" : "org.graylog2.inputs.gelf.udp.GELFUDPInput", "title" : "sao", "content_pack" : null, "node_id" : "a2c36082-47d7-446d-acbd-84e010df1fcd" }

How did you configure these inputs?
How are the clients sending messages to these inputs?

This is the output configured in Logstash:

output {
  udp {
    host => "x.x.x.x"
    port => "12201"
  }
}

Are any of the messages sent by Logstash valid GELF messages?

Try sending a message manually: http://docs.graylog.org/en/2.3/pages/gelf.html#sending-gelf-messages-via-udp-using-netcat

I am able to send a message using the command below:

echo -n '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 5, "_some_info": "foo" }' | nc -w2 -u x.x.x.x 12201

and it's getting delivered fine.

In this case, the problem is with Logstash or with the clients delivering their messages to Logstash.

Please check http://docs.graylog.org/en/2.3/pages/gelf.html for information about what a valid GELF message has to look like.
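
For example, the plain udp output shown above will typically not emit valid GELF on its own. A minimal sketch using the Logstash gelf output instead (this assumes the logstash-output-gelf plugin is installed; the host is a placeholder):

output {
  gelf {
    host => "x.x.x.x"   # Graylog server address (placeholder)
    port => 12201       # GELF UDP input port used in this thread
  }
}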

Yes, thanks for the help.

Is there any option to convert nested JSON to GELF format?

There’s a JSON extractor in Graylog you could try to use.

Alternatively, there's a json filter in Logstash.
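
For the Logstash route, a minimal sketch of the json filter (this assumes the nested JSON string arrives in the message field; the field and target names are placeholders):

filter {
  json {
    source => "message"    # field holding the nested JSON string
    # target => "parsed"   # optional: put the parsed keys under a sub-field
  }
}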
