Index filled up, but not searchable

Hi,

I made a fresh installation, following the documentation… I have an index that is filled with 109,217 documents and weighs 36.6 MB.
When I launch a search, the “Loading…” string appears and no result is ever displayed.
In /var/log/graylog-server/server.log:

2017-03-23T10:55:43.850+01:00 WARN [SearchResource] Unable to execute search: all shards failed

What I did:

  1. Installed Debian 9
  2. Followed the documentation: http://docs.graylog.org/en/latest/pages/installation/os/debian.html
  3. Added an input
  4. Added some logs to the input (see the sketch after this list)
  5. Launched a search and admired the “Loading…” string
  6. Checked /var/log/graylog-server/server.log for the error mentioned above
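For step 4, a test message can be pushed in like this, assuming a GELF HTTP input listening on port 12201 (just a sketch; the actual input type may differ):

curl -XPOST 'http://192.168.124.3:12201/gelf' -p0 -d '{"short_message":"test", "host":"test.example.org"}'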

My environment:

  • Graylog Version: 2.2.2-1
  • Elasticsearch Version: 2.4.4
  • MongoDB Version: 3.2.11-2
  • Operating System: Debian 9 (Stretch)
  • Browser version: Chrome 57.0.2987.110 (64-bit)

Is there any way to debug this? My Elasticsearch server seems to work (no errors in its logs).

Any idea?

How exactly did you install and configure Graylog and Elasticsearch?
Are there any (other) error messages in the logs of your Graylog or Elasticsearch nodes?
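You can also ask Elasticsearch directly why the shards fail. A couple of checks (a sketch, assuming the default endpoint 127.0.0.1:9200 and your graylog index prefix):

curl 'http://127.0.0.1:9200/_cluster/health?pretty'
curl 'http://127.0.0.1:9200/_cat/indices?v'
curl 'http://127.0.0.1:9200/_cat/shards/graylog_*?v'

If any shard shows up as UNASSIGNED or INITIALIZING there, searches will fail with “all shards failed” even though the node itself starts cleanly.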

I think I strictly followed the documentation: docs.graylog.org/en/latest/pages/installation/os/debian.html

I used the deb packages, nothing built from source.

Here is the graylog-server configuration:

root@graylog:~# grep -v -e "^#" -e "^\s*$" /etc/graylog/server/server.conf
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = xxx
root_password_sha2 = xxx
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://192.168.124.3:12900/
rest_transport_uri = http://192.168.124.3:12900/
web_listen_uri = http://192.168.124.3:9000/
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
root@graylog:~#
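If it helps, Graylog’s own view of the Elasticsearch cluster can also be queried through its REST API, e.g. (a sketch, using the rest_listen_uri above and admin credentials):

curl -u admin:password 'http://192.168.124.3:12900/system/indexer/cluster/health'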

Here is the elasticsearch conf (I only added max_result_window because I read that my problem could come from there, but it brought no improvement):

root@graylog:~# grep -v -e "^#" -e "^\s*$" /etc/elasticsearch/elasticsearch.yml
cluster.name: graylog
max_result_window: 100000
root@graylog:~#
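From what I read, max_result_window is an index-level setting in Elasticsearch 2.x, so a bare key in elasticsearch.yml may not be picked up at all; it can be applied to the existing index directly instead (a sketch, assuming the index is graylog_0):

curl -XPUT 'http://127.0.0.1:9200/graylog_0/_settings' -d '{ "index.max_result_window": 100000 }'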

Here are the ES startup logs:

==> /var/log/elasticsearch/graylog.log <==
 
[2017-03-24 13:48:59,314][INFO ][node                     ] [Guido Carosella] version[2.4.4], pid[3880], build[fcbb46d/2017-01-03T11:33:16Z]
[2017-03-24 13:48:59,315][INFO ][node                     ] [Guido Carosella] initializing ...
[2017-03-24 13:48:59,742][INFO ][plugins                  ] [Guido Carosella] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2017-03-24 13:48:59,759][INFO ][env                      ] [Guido Carosella] using [1] data paths, mounts [[/var (/dev/mapper/graylog--vg-var)]], net usable_space [738mb], net total_space [1.4gb], spins? [possibly], types [ext4]
[2017-03-24 13:48:59,759][INFO ][env                      ] [Guido Carosella] heap size [1007.3mb], compressed ordinary object pointers [true]
[2017-03-24 13:49:01,308][INFO ][node                     ] [Guido Carosella] initialized
[2017-03-24 13:49:01,309][INFO ][node                     ] [Guido Carosella] starting ...
[2017-03-24 13:49:01,377][INFO ][transport                ] [Guido Carosella] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-03-24 13:49:01,381][INFO ][discovery                ] [Guido Carosella] graylog/eqK8JCiqTVCQ-nGeOZPSAQ
[2017-03-24 13:49:04,451][INFO ][cluster.service          ] [Guido Carosella] new_master {Guido Carosella}{eqK8JCiqTVCQ-nGeOZPSAQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2017-03-24 13:49:04,483][INFO ][http                     ] [Guido Carosella] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2017-03-24 13:49:04,484][INFO ][node                     ] [Guido Carosella] started
[2017-03-24 13:49:04,526][INFO ][gateway                  ] [Guido Carosella] recovered [1] indices into cluster_state
[2017-03-24 13:49:04,978][INFO ][cluster.routing.allocation] [Guido Carosella] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[graylog_0][0], [graylog_0][2], [graylog_0][1], [graylog_0][0]] ...]).

And here are the graylog-server logs:

Tell me if you need any information.

This looks a bit tight. Are you sure you only want to provide ~700 MB of disk space to Elasticsearch?
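Elasticsearch also stops allocating shards once disk usage crosses its watermarks (85% and 90% by default in 2.x), which a 1.4 GB volume hits very quickly. You can see the per-node disk figures it works with via (again a sketch, default endpoint assumed):

curl 'http://127.0.0.1:9200/_cat/allocation?v'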

Thank you for your reply!

I followed your recommendation and enlarged the volume:

[2017-03-24 19:26:18,745][INFO ][env ] [Garokk the Petrified Man] using [1] data paths, mounts [[/var (/dev/mapper/graylog--vg-var)]], net usable_space [8.3gb], net total_space [9.3gb], spins? [possibly], types [ext4]

Still no improvement… :confused:

Hello Raphux,

I am facing the same issue and error!

Did you solve this issue, or does it still persist?

Thanks in advance…