Hello. A few days ago I upgraded Graylog from 3.3.15 to 4.2.7, and the Elasticsearch cluster behind it from 6.8.8 to 7.10.1. Everything seemed fine: there are no error messages, the search bar in Graylog works, and the Prometheus metrics work too.
But after I closed the Graylog web page, a few minutes later I received an alert from the Elasticsearch cluster that new data (documents) was missing in ES. I opened the Graylog web page again and everything looked OK, because I could see data in the search dashboard. After closing the Graylog web page, I received the same alert from ES again.
I investigated this and found that Graylog does not send new documents to ES while the Graylog web page is closed. As soon as I open the Graylog search page, data is appended to the ES cluster again.
I checked the following Graylog metrics to confirm that data is going in and out:
All data is processed:
However, the ES metric elasticsearch_indices_docs indicates that the Graylog data is not being sent to ES. I have to open the Graylog web page, and only then do I see this metric change.
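To rule out the Graylog UI entirely, the document count can also be polled directly from Elasticsearch. This is a minimal sketch, assuming ES is reachable on `localhost:9200` and the indices use the default `graylog_` prefix (adjust host and prefix as needed):

```python
import json
import urllib.request


def count_url(host, index_prefix):
    """Build the Elasticsearch _count URL covering all matching indices."""
    return f"http://{host}/{index_prefix}*/_count"


def doc_count(host="localhost:9200", index_prefix="graylog_"):
    """Return the current document count across all graylog_* indices."""
    with urllib.request.urlopen(count_url(host, index_prefix)) as resp:
        return json.load(resp)["count"]
```

Calling `doc_count()` twice, a minute apart, while the Graylog web page stays closed should show a growing count; if the delta is zero until the page is opened, the messages are being held back on the Graylog side rather than dropped by ES.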
```
# General
node_id_file = /usr/share/graylog/data/journal/node-id
root_username = admin
root_email = EMAIL
root_timezone = Europe/Prague
plugin_dir = /usr/share/graylog/plugins-default
http_bind_address = 0.0.0.0:9000
http_external_uri = https://URL/
http_enable_cors = true
enabled_tls_protocols = TLSv1.1,TLSv1.2,TLSv1.3

# Output & Input
output_batch_size = 200
output_flush_interval = 1
output_fault_count_threshold = 6
output_fault_penalty_seconds = 10
processbuffer_processors = 6
outputbuffer_processors = 6
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
# Do not change `message_journal_dir` location
message_journal_dir = /usr/share/graylog/data/journal
outputbuffer_processor_keep_alive_time = 5000
outputbuffer_processor_threads_core_pool_size = 5
outputbuffer_processor_threads_max_pool_size = 30
message_journal_max_age = 12h
# size is 75% of persistent volume (journal-graylog)
message_journal_max_size = 15gb
message_journal_flush_age = 1m
message_journal_flush_interval = 100000

# MongoDB
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5

# ElasticSearch
rotation_strategy = count
elasticsearch_max_docs_per_index = 10000000
elasticsearch_shards = 12
elasticsearch_index_optimization_jobs = 40
elasticsearch_connect_timeout = 10s
elasticsearch_socket_timeout = 60s
elasticsearch_max_total_connections = 100
elasticsearch_max_total_connections_per_route = 10
allow_leading_wildcard_searches = true
allow_highlighting = false
elasticsearch_version = 7
elasticsearch_mute_deprecation_warnings = true

# Email transport
transport_email_enabled = true
transport_email_hostname = aspmx.l.google.com
transport_email_port = 25
transport_email_use_auth = false
transport_email_use_tls = true
transport_email_use_ssl = false
transport_email_auth_username =
transport_email_auth_password =
transport_email_subject_prefix = [graylog]
transport_email_from_email = EMAIL

content_packs_dir = /usr/share/graylog/data/contentpacks
content_packs_auto_load = grok-patterns.json

# Prometheus
prometheus_exporter_enabled = true
prometheus_exporter_bind_address = 0.0.0.0:9833

# Others
proxied_requests_thread_pool_size = 32
```
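For completeness, the journal sizing comment in the config above ("size is 75% of persistent volume") can be sanity-checked with quick arithmetic; assuming the journal-graylog persistent volume is 20 GB (an assumption implied by the configured 15gb limit):

```python
volume_gb = 20                      # assumed size of the journal-graylog persistent volume
max_journal_gb = volume_gb * 0.75   # the 75% rule from the config comment
print(max_journal_gb)               # matches message_journal_max_size = 15gb
```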
- Graylog version: 4.2.7-1
- ES cluster version: 7.10.1
- Running in kubernetes
Thanks in advance for a fast response.