I have Graylog and Elasticsearch running on the same machine. The issue is that the process buffer is full. I checked the Graylog server logs and here's what I found:
2020-07-23T15:51:27.751-04:00 ERROR [IndexFieldTypePollerPeriodical] Couldn't update field types for index set <Default index set/5f172f0e8b94001e849b6411>
org.graylog2.indexer.ElasticsearchException: Couldn't collect indices for alias graylog_deflector
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:54) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:65) ~[graylog.jar:?]
at org.graylog2.indexer.indices.Indices.aliasTarget(Indices.java:336) ~[graylog.jar:?]
at org.graylog2.indexer.MongoIndexSet.getActiveWriteIndex(MongoIndexSet.java:204) ~[graylog.jar:?]
at org.graylog2.indexer.fieldtypes.IndexFieldTypePollerPeriodical.lambda$schedule$4(IndexFieldTypePollerPeriodical.java:201) ~[graylog.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_252]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_252]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_252]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_252]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_252]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_252]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
Caused by: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200
at io.searchbox.client.http.JestHttpClient.execute(JestHttpClient.java:80) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:49) ~[graylog.jar:?]
... 11 more
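For what it's worth, the alias named in the error can be queried directly with standard Elasticsearch endpoints (nothing Graylog-specific; this assumes my default graylog index prefix), so here are the checks I can run against it:

```shell
# List every alias Elasticsearch knows about; graylog_deflector should
# point at the current write index (e.g. graylog_2).
curl -s 'http://127.0.0.1:9200/_cat/aliases?v'

# Query the alias directly; an HTTP 404 here would mean the deflector
# alias is missing, as opposed to the connection itself failing.
curl -s -o /dev/null -w '%{http_code}\n' 'http://127.0.0.1:9200/_alias/graylog_deflector'
```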
I can successfully curl Elasticsearch on the same instance:
curl http://127.0.0.1:9200
{
  "name" : "uF7RBi6",
  "cluster_name" : "graylog",
  "cluster_uuid" : "bY1zhhyRSS-aNR6IHH49BQ",
  "version" : {
    "number" : "6.8.10",
    "build_flavor" : "oss",
    "build_type" : "deb",
    "build_hash" : "537cb22",
    "build_date" : "2020-05-28T14:47:19.882936Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.3",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
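Beyond the banner, I can also hit the cluster health and index endpoints (again standard Elasticsearch APIs) to confirm the node is actually serving requests:

```shell
# Cluster health: status should be green or yellow; yellow would mean
# unassigned replica shards, which can't happen here anyway since
# elasticsearch_replicas = 0.
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'

# And the Graylog indices themselves, to confirm they exist and are open:
curl -s 'http://127.0.0.1:9200/_cat/indices/graylog_*?v'
```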
There is nothing unusual in my Elasticsearch logs.
Here are the non-default settings in my server.conf (everything else is the default):
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = <redacted>
root_password_sha2 = <redacted>
root_email = "admin@company.com"
root_timezone = America/Toronto
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 0.0.0.0:9000
trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
transport_email_enabled = true
transport_email_hostname = mail.company.com
transport_email_port = 25
transport_email_use_auth = false
transport_email_subject_prefix = [Graylog]
transport_email_from_email = graylog@servers.company.com
proxied_requests_thread_pool_size = 32
Everything in the Elasticsearch config is the default except for this:
cluster.name: graylog
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
action.auto_create_index: false
Netstat:
netstat -tunapl | grep 9200
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 10342/java
tcp6 0 0 127.0.0.1:51814 127.0.0.1:9200 ESTABLISHED 9919/java
tcp6 0 0 127.0.0.1:9200 127.0.0.1:51814 ESTABLISHED 10342/java
tcp6 0 0 127.0.0.1:9200 127.0.0.1:51808 ESTABLISHED 10342/java
tcp6 0 0 127.0.0.1:51806 127.0.0.1:9200 ESTABLISHED 9919/java
tcp6 0 0 127.0.0.1:51812 127.0.0.1:9200 ESTABLISHED 9919/java
tcp6 0 0 127.0.0.1:9200 127.0.0.1:51806 ESTABLISHED 10342/java
tcp6 0 0 127.0.0.1:51808 127.0.0.1:9200 ESTABLISHED 9919/java
tcp6 0 0 127.0.0.1:9200 127.0.0.1:51816 ESTABLISHED 10342/java
tcp6 0 0 127.0.0.1:9200 127.0.0.1:51810 ESTABLISHED 10342/java
tcp6 0 0 127.0.0.1:51816 127.0.0.1:9200 ESTABLISHED 9919/java
tcp6 0 0 127.0.0.1:51810 127.0.0.1:9200 ESTABLISHED 9919/java
tcp6 0 0 127.0.0.1:9200 127.0.0.1:51812 ESTABLISHED 10342/java
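The tcp6 lines above still carry IPv4 traffic (the local address shown is 127.0.0.1, which is how Java sockets tend to appear in netstat), but to rule out any IPv4/IPv6 confusion I can force each protocol explicitly:

```shell
# Force IPv4: this is exactly the address Graylog uses (http://127.0.0.1:9200).
curl -4 -s -o /dev/null -w 'IPv4: %{http_code}\n' 'http://127.0.0.1:9200'

# Force IPv6 for comparison; a failure here should be harmless as long as
# IPv4 works, since Graylog is pointed at the IPv4 loopback address.
curl -6 -s 'http://[::1]:9200' > /dev/null && echo 'IPv6: reachable' || echo 'IPv6: not reachable'
```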
/etc/hosts
127.0.1.1 dev-graylog-1n1 dev-graylog-1n1
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
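Since Graylog talks to 127.0.0.1 directly, name resolution shouldn't really matter here, but to be thorough I can confirm how localhost resolves (getent goes through roughly the same libc lookup path as the JVM's resolver):

```shell
# localhost should resolve to 127.0.0.1, not only to ::1.
getent ahosts localhost
```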
Is there anything I can check to help me debug this? Graylog is behind an external nginx reverse proxy, but the web UI works fine. Could that be related?
I'm using Ubuntu 18.04 and installed following the official docs here: https://docs.graylog.org/en/3.3/pages/installation/os/ubuntu.html
Any ideas?
Thanks!