Graylog timeout

Hello friends,

I have Graylog community version 3.2.6+e255fcc and I am using the Fortinet content pack. I am configuring a dashboard that works if I select a 5-minute range, but if I select a longer date range, for example all of the previous day, it times out. Could you help me configure or optimize it? I have a CentOS 7 server with 4 CPUs and 8 GB RAM. Attached are some of the errors found.


Hello and welcome

I took a glance at your picture. It looks like you may have a connection problem to Elasticsearch: I see a "could not connect" and also a "connection refused". That sounds like a permission/configuration issue. Does Graylog have access to the Fortinet pack?
Was this working before, or did it just start happening?
Do you have any other problems like this, or is it only the Fortinet dashboard?

Out of curiosity, what do you get when you execute this command?

curl -XGET ''
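(The URL in the quoted command was stripped by the forum. A typical health check against a local Elasticsearch node looks like the sketch below; `localhost:9200` is an assumption based on the default `http.port` shown later in the thread, so adjust it to your node.)

```shell
# Hypothetical reconstruction: query Elasticsearch cluster health on the
# default local port. A healthy single-node setup usually reports
# "status" : "green" (or "yellow" when replica shards are unassigned).
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
```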

Does the `elasticsearch_hosts=` setting in the Graylog config file match your Elasticsearch config?


Thanks for your help. I am new to Graylog and Elasticsearch; to install and configure them I followed this video.

I have a VM with 4 CPUs and 16 GB RAM. I increased the Java heap of both Graylog and Elasticsearch from 1 GB to 4 GB each. I also installed this pack.

It works correctly, but the problem is that when I make a new dashboard with some fields, a 15-minute range works fine; if I extend the range to, for example, 1, 2, or 3 days ago, I get a timeout and it gets stuck, so I restart the server and it works again.

I attach the curl output.

Right now that field is set in Elasticsearch, and in Graylog the `elasticsearch_hosts=` field is commented out. What should I change here?



To be honest I'm not sure right now, but if I had to guess it could be a couple of different things. Judging from the picture above, it is showing connection timeouts, "couldn't update field in index", and "failed to connect to".
Since you showed ES is fine, and I assume the MongoDB and Graylog services are good too, and you stated that a 15-minute search of logs works but a longer date range times out, I believe this could be a configuration problem or a resource problem (i.e. CPU).

To check for resources, run top or htop, then go to your dashboard and execute the search that timed out before. If the CPU is maxed out, try adding 2 or 4 more cores to your virtual machine.

Do you have SELinux enabled? If so, maybe put it into permissive mode and reboot?
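(For reference, a sketch of how to check and switch SELinux modes on a standard CentOS 7 layout; run as root or via sudo. This is my assumption of the usual procedure, not something quoted from the thread.)

```shell
getenforce            # prints Enforcing, Permissive, or Disabled
sudo setenforce 0     # switch to permissive until the next reboot
# To make the change persistent across reboots, set SELINUX=permissive
# in /etc/selinux/config:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```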

That's too many octets above.

Maybe try that, or correct it and restart the Elasticsearch service.

Is it possible to show both your Elasticsearch configuration file and your Graylog configuration file?

Thanks @gsmith,

I think it is more of a configuration issue. When I run the date-range search the CPU goes up, but it does not get stuck; that is to say, I do not think it is the CPU. I use glances, and neither the processes nor memory reach critical levels.

As for SELinux, I have it disabled. By mistake I had put one 0 too many; I have since corrected it. I think it is a configuration issue: it tries to search by date range and times out because it never finishes, but I do not know what it could be. That is why I turned to the forum; if you have any suggestions on where to look, I would appreciate it.



If that is true, could you do the following?

Hello @gsmith

I am attaching my Graylog and Elasticsearch configs.

elasticsearch.yml:

action.auto_create_index: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200

graylog server.conf:

is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = BmDsgau4VJAE9RzJxILDFMWEHg4dzqaEHfJn93lOlzy9gWl8nUHQ3GzYOo8v4E2KeMkk1xp15QiUgQBSJeULzxHWSUCabXLu
root_password_sha2 = 1b5a38c34d1fd3d0bf56c611763d3fa069e5102cd48537d5d7464a076ec0ec33
root_timezone = Europe/Madrid
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address =
http_publish_uri =
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
proxied_requests_thread_pool_size = 32

Thanks for the help; if you have any suggestions for improvement, please let me know.



I have a couple of suggestions that may resolve your issue.

The first thing I noticed was your buffer processor settings.

It is generally recommended that these match the physical processors on the Graylog server.

As an example, I have a CentOS 7 virtual machine with 12 CPUs and 12 GB memory.

processbuffer_processors = 6
outputbuffer_processors = 2
inputbuffer_processors = 3

I would try lowering those settings, or adding more CPUs to that server, to see if that helps.
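(On a 4-CPU machine like yours, where the posted config sums to 5 + 3 + 2 = 10 processors, one possible split that keeps the total at or below the core count might be the fragment below. These exact numbers are my assumption, not an official recommendation; tune them for your workload.)

```
processbuffer_processors = 2
outputbuffer_processors = 1
inputbuffer_processors = 1
```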

I don't think you need this uncommented, but you could try commenting it out to see if that helps.

The default setting should look like this.

# Default: http://$http_bind_address/

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.