mlouarn
(louarn)
September 15, 2020, 2:27pm
1
Hello,
All of a sudden, I no longer receive any input logs on my Graylog server.
When I go to the “Nodes” page, I see the “Process Buffer” and the “Output Buffer” at 100% (see the screenshot).
My server settings:
Version:3.2.6+e255fcc, codename Ethereal Elk
JVM: PID 2794, Debian 11.0.7 on Linux 4.19.0-9-amd64
CPU: 8
RAM: 8192 MB
What I have already done:
Stop and restart the inputs
Restart the server
Boost the server (CPU & RAM)
Can you help me with this?
I am still a beginner with Graylog.
Thanks for your help.
Regards,
Hello @mlouarn , welcome!
What is the status of the graylog-server service and supporting services?
systemctl status mongod
systemctl status elasticsearch
systemctl status graylog-server
Is the elasticsearch service running? Are there any errors in the most recent service logs? (you should see this output at the bottom of the systemctl command output)
Are there any errors in your graylog server log file?
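For example, assuming the default Debian package locations, something like this should show the most recent entries:

journalctl -u graylog-server -n 50 --no-pager
tail -n 100 /var/log/graylog-server/server.log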
mlouarn
(louarn)
September 16, 2020, 7:43am
3
Hello,
Thank you for your quick reply.
I performed the actions you asked for.
You can see the warnings in the screenshot.
Are these warnings serious?
Thanks for your help.
Regards,
cawfehman
(Cawfehman)
September 16, 2020, 6:18pm
4
Anything in the logs? What's your Elasticsearch health status?
curl -X GET "{IP or hostname}:9200/_cluster/health?pretty" (no braces in this command)
default log locations…
/var/log/graylog-server/
/var/log/elasticsearch/
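For reference, a healthy cluster typically returns something like this (values are illustrative and the output is abridged):

{
  "cluster_name" : "graylog",
  "status" : "green",
  "number_of_nodes" : 1,
  "active_primary_shards" : 20,
  "active_shards" : 20,
  "unassigned_shards" : 0
}

A "status" of yellow or red, or a non-zero "unassigned_shards" count, usually points at the problem.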
mlouarn
(louarn)
September 17, 2020, 7:53am
5
Hello,
Here is what is in the logs since September 6, 2020:
/var/log/graylog-server/
/var/log/elasticsearch/
Could the system have stopped accepting logs because I reached the 5 GB limit and need a license?
But is this a limit of 5 GB of logs per day?
cawfehman
(Cawfehman)
September 17, 2020, 6:48pm
6
The 5GB limit is just for the enterprise features… it wouldn't cause your system to stop ingesting. Can you grep for any WARN or ERROR in the /var/log/graylog-server/server.log file, and did you look in the /var/log/elasticsearch/graylog.log file for any issues?
The problem is most likely listed right in one of those two logs.
Also, did you get your elasticsearch cluster health?
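For example (assuming the default log locations above), something along these lines should surface the relevant entries and the cluster state:

grep -E "WARN|ERROR" /var/log/graylog-server/server.log
grep -E "WARN|ERROR" /var/log/elasticsearch/graylog.log
curl -X GET "localhost:9200/_cluster/health?pretty"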
mlouarn
(louarn)
September 18, 2020, 12:22pm
7
Hi,
My last log entered the server at 10:53 am.
Below are the logs from “/var/log/graylog-server/server.log” just after 10:53 am.
Below are the logs from “/var/log/elasticsearch/graylog.log” just after 10:53 am.
This morning I restarted the machine and recreated the log files so that they only contain this morning's events:
Graylog logs:
Elasticsearch logs:
“Also, did you get your elasticsearch cluster health?”:
Is that what you want?
mlouarn
(louarn)
September 24, 2020, 1:15pm
8
Can someone help me, please?
shoothub
(Shoothub)
September 24, 2020, 2:15pm
9
You are probably out of storage. By default, Elasticsearch blocks writes once disk usage crosses its watermarks (around 90-95% of disk space). If you can, free up some space, add more storage, or change the watermark settings in Elasticsearch, then run this to remove the read-only block:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
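To confirm whether disk space really is the issue, it may help to check what Elasticsearch sees (the data path below assumes the default Debian package layout):

df -h /var/lib/elasticsearch
curl -X GET "localhost:9200/_cat/allocation?v"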
reference (quoting a related post about the watermark settings):
hey @mmahacek, you reach the high/low watermark on your Elasticsearch server(s).
The following settings control when Elasticsearch will first stop allocating new shards, when it starts to relocate shards, and when it sets the cluster into a read-only state because too many messages are coming in.
Setting: cluster.routing.allocation.disk.watermark.low
Controls the low watermark for disk usage. It defaults to 85%, meaning ES will not allocate new shards to nodes once they have more than …
Hi,
I didn't see any posts related to this error, but maybe I searched with the wrong keywords.
It would be nice if you could send me the link if it was posted before.
Anyway, this is what I found by searching the web, and it resolved the issue.
Add the following to elasticsearch.yml:
/etc/elasticsearch#
vim elasticsearch.yml
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
cluster.routing.allocation.disk.watermark.low: 30gb …
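As an alternative to editing elasticsearch.yml, these watermarks can usually be changed at runtime through the cluster settings API. A minimal sketch (the 10gb high watermark is only an illustrative value, since all three watermarks should use the same unit style; the low and flood_stage values are the ones from the quoted post):

curl -XPUT -H "Content-Type: application/json" "http://localhost:9200/_cluster/settings" -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "30gb",
    "cluster.routing.allocation.disk.watermark.high": "10gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "5gb"
  }
}'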
system
(system)
Closed
October 8, 2020, 2:15pm
10
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.