AWS / Graylog 2.3: Deflector exists as an index and is not an alias

Hi,

I have a problem: I see no messages under Search, but this error message instead:

Deflector exists as an index and is not an alias. (triggered 3 minutes ago)
The deflector is meant to be an alias but exists as an index. Multiple failures of infrastructure can lead to this. Your messages are still indexed but searches and all maintenance tasks will fail or produce incorrect results. It is strongly recommended that you act as soon as possible.
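
For reference, a minimal sketch (assuming the default index prefix graylog, i.e. an alias named graylog_deflector, and a placeholder Elasticsearch endpoint) of how to check via the _cat API whether the deflector currently exists as an alias or as a concrete index:

> require 'net/http'
> require 'uri'
> 
> # Placeholder endpoint; replace with your AWS Elasticsearch domain.
> es = URI('https://search-mydomain.eu-west-1.es.amazonaws.com')
> 
> Net::HTTP.start(es.host, es.port, use_ssl: es.scheme == 'https') do |http|
>     # Healthy state: the deflector is an alias pointing at graylog_0, graylog_1, ...
>     puts http.get('/_cat/aliases/graylog_deflector?v').body
>     # Broken state (this error): the deflector exists as a concrete index instead.
>     puts http.get('/_cat/indices/graylog_deflector?v').body
> end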

My config that is rolled out (OpsWorks, AWS Elasticsearch, single-instance MongoDB):

> app = search("aws_opsworks_app", "shortname:graylog").first
> environment = app[:environment]
> # determine the master ip (falls back to 0.0.0.0 if no instance in the graylog layer is found)
> graylog_layer = search("aws_opsworks_layer", "shortname:graylog").first
> master_ip = "0.0.0.0"
> node_ip = node.ipaddress
> 
> search("aws_opsworks_instance").each do |instance|
>     if instance["layer_ids"].first == graylog_layer["layer_id"]
>         master_ip = instance["private_ip"]
>     end
> end
> Chef::Log.info("*** MASTER IP ADDRESS ***")
> Chef::Log.info(master_ip)
> Chef::Log.info("***  NODE IP ADDRESS  ***")
> Chef::Log.info(node_ip)
> Chef::Log.info("*************************")
> 
> # settings
> node.override[:graylog2][:major_version]                                    = environment[:GrayLogMajorVersion]
> node.override[:graylog2][:server][:version]                                 = environment[:GrayLogServerVersion]
> 
> node.override[:graylog2][:ip_of_master]                                     = master_ip
> if node_ip == master_ip
>     node.override[:graylog2][:is_master]                                    = true
> end
> node.override[:graylog2][:lb_recognition_period_seconds]                    = 30
> 
> node.override[:graylog2][:root_password_sha2]                               = environment[:GrayLogRootPasswordSha2]
> node.override[:graylog2][:password_secret]                                  = environment[:GrayLogPasswordSecret]
> 
> # Uris
> node.override[:graylog2][:web][:endpoint_uri]                               = "https://#{environment[:GrayLogUrl]}/api"
> node.override[:graylog2][:web][:listen_uri]                                 = "http://0.0.0.0:9000/"
> node.override[:graylog2][:rest][:listen_uri]                                = "http://0.0.0.0:9000/api/"
> 
> # Elasticsearch http client (GL >= 2.3)
> node.override[:graylog2][:elasticsearch][:hosts]                           = environment[:ElasticSearchUrl]
> node.override[:graylog2][:elasticsearch][:max_total_connections]           = 20
> node.override[:graylog2][:elasticsearch][:max_total_connections_per_route] = 2
> #node.override[:graylog2][:elasticsearch][:connect_timeout]                 = nil # '10s'
> #node.override[:graylog2][:elasticsearch][:socket_timeout]                  = nil # '60s'
> #node.override[:graylog2][:elasticsearch][:idle_timeout]                    = nil # '-1s'
> 
> # WARNING: Automatic node discovery does not work if Elasticsearch requires authentication, e.g. with Shield.
> # http://docs.graylog.org/en/2.3/pages/configuration/elasticsearch.html
> # Automatic node discovery does not work when using the Amazon Elasticsearch Service because Amazon blocks certain Elasticsearch API endpoints.
> node.override[:graylog2][:elasticsearch][:discovery_enabled]               = false
> #node.override[:graylog2][:elasticsearch][:discovery_filter]                = nil
> #node.override[:graylog2][:elasticsearch][:discovery_frequency]             = nil # '30s'
> 
> 
> node.override[:graylog2][:mongodb][:uri]                                   = "mongodb://#{environment[:MongoDBHost]}:#{environment[:MongoDBPort]}/#{environment[:MongoDBDatabase]}"
> node.override[:graylog2][:mongodb][:max_connections]                       = 100
> node.override[:graylog2][:mongodb][:threads_allowed_to_block_multiplier]   = 5
> 
> include_recipe 'graylog2::server'
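
For context, the recipe above expects the OpsWorks app environment to carry roughly the following keys; everything below is a placeholder shape, not my real endpoints or secrets:

> # Hypothetical shape of the OpsWorks app environment read by the recipe above.
> environment = {
>     GrayLogMajorVersion:     '2.3',
>     GrayLogServerVersion:    '2.3.0',
>     GrayLogRootPasswordSha2: '<sha256 hash of the admin password>',
>     GrayLogPasswordSecret:   '<long random secret>',
>     GrayLogUrl:              'graylog.example.com',
>     ElasticSearchUrl:        'https://search-mydomain.eu-west-1.es.amazonaws.com:443',
>     MongoDBHost:             'mongodb.example.internal',
>     MongoDBPort:             '27017',
>     MongoDBDatabase:         'graylog'
> }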

P.S. I have stopped Graylog, deleted the index, and restarted Graylog again … but the index was recreated.

Is it a new setup, or did you change/rename your cluster?

It is a new setup … but my mistake: I initialised Graylog with
[:discovery_enabled] = true

After that I stopped all Graylog instances (autoscaling to 0/0/0), deleted the index, and restarted the Graylog instances (2/2/2) …

but the index was recreated.

Seems to work now …

I stopped all inputs, but did not shut down Graylog …

deleted the deflector index via curl (an equivalent HTTP call is sketched below),

and started the inputs again.
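
For completeness, a sketch of that deflector deletion (I did it via curl; this is the equivalent Ruby HTTP call against a placeholder Elasticsearch endpoint). Only run it while nothing is writing, otherwise Graylog may recreate the index right away:

> require 'net/http'
> require 'uri'
> 
> # Placeholder endpoint; replace with your AWS Elasticsearch domain.
> uri = URI('https://search-mydomain.eu-west-1.es.amazonaws.com/graylog_deflector')
> 
> # Delete the wrongly created graylog_deflector *index* so Graylog can
> # recreate the deflector as an alias.
> Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
>     response = http.request(Net::HTTP::Delete.new(uri.request_uri))
>     puts "#{response.code} #{response.body}"
> end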

Hi @markus7811,

the setting [:discovery_enabled] = true does not work with AWS, as we have written in the documentation:

http://docs.graylog.org/en/2.3/pages/configuration/elasticsearch.html#automatic-node-discovery
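
In other words: with the Amazon Elasticsearch Service you have to keep automatic node discovery disabled and list the HTTP endpoint(s) explicitly, e.g. via the attribute names from the recipe above (the endpoint is a placeholder):

> # With the Amazon Elasticsearch Service, keep node discovery disabled and
> # point Graylog at the HTTP endpoint(s) directly.
> node.override[:graylog2][:elasticsearch][:discovery_enabled] = false
> node.override[:graylog2][:elasticsearch][:hosts]             = 'https://search-mydomain.eu-west-1.es.amazonaws.com:443'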

regards
Jan
