Hello,
After a month I wanted to access the Graylog web interface again, but it is no longer accessible.
With the command "sudo tail -f /var/log/graylog-server/server.log" I get the following output: ERROR [VersionProbe] Unable to retrieve version from indexer node: Failed to connect to 127.0.1.1:9200. - Connection refused.
INFO [VersionProbe] Indexer is not available. Retry #196
With the command "dpkg -l | grep -E '.*(elasticsearch|graylog|mongo).*'"
I noticed that elasticsearch is no longer available.
Now to my question: why is elasticsearch suddenly no longer available, and how do I restore it without losing data?
OS Information: Ubuntu 24.04.1 LTS
Package Version: 6.1.3-1
What do you mean "no longer available"? Are you sure you aren't using OpenSearch or the Graylog Data Node instead of Elasticsearch?
You’re right, sorry. The installation was some time ago.
graylog-datanode.service and graylog-server.service are running.
So anywhere the logs mention elastic etc., that just means the Data Node for you. I would give the graylog-datanode service a restart and then check the Data Node logs for any issues.
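Assuming the default Ubuntu package install (the log path below is an assumption and may differ on your system), something like:

sudo systemctl restart graylog-datanode.service
sudo systemctl status graylog-datanode.service
# follow the Data Node log for startup errors
sudo tail -f /var/log/graylog-datanode/datanode.log
# or use the journal if the log file isn't there
sudo journalctl -u graylog-datanode -f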
I found the following in the logs:
[OpensearchProcessImpl] org.opensearch.cluster.block.ClusterBlockException: index [.opendistro_security] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];
How can I delete the read-only-allow-delete block?
I have already tried "curl -X GET 'http://127.0.0.1:9200/_cluster/health?pretty'", but I get "curl: (7) Failed to connect to 127.0.0.1 port 9200 after 0 ms: Could not connect to the server".
You basically ran out of disk space (OpenSearch doesn't like going past 70-80% disk usage). You will either need to expand the drive, or you could have it delete some of the older indices.
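Once OpenSearch is reachable on port 9200 again, the block can also be removed by hand. This is just a sketch using the standard OpenSearch index-settings API against plain HTTP on 127.0.0.1:9200; a Data Node normally puts TLS and authentication in front of OpenSearch, so you may need to adjust the URL and add credentials:

# clear the read_only_allow_delete block on all indices
curl -X PUT 'http://127.0.0.1:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'

That said, current OpenSearch versions should release the flood-stage block automatically once disk usage drops back below the high watermark, so freeing space and restarting may be enough on its own.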
I have expanded the hard disk by 60GB. But I get the same error.
Did you restart after the expansion? How much free space is there now (in %), and is there only one partition/drive?
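For example:

df -h    # free space per filesystem
lsblk    # how the disks are partitioned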
Yes, I have completely restarted the Linux VM.
Filesystem Size Used Avail Use% Mounted on
tmpfs 390M 1,1M 389M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 123G 47G 71G 40% /
tmpfs 2,0G 0 2,0G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
/dev/sda2 2,0G 182M 1,7G 10% /boot
tmpfs 390M 12K 390M 1% /run/user/1000
Hey @DarkMission
Is the watermark message still occurring, and are there any other errors in the Data Node log file?
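For example (the log path is an assumption based on the default package install; adjust it if yours differs):

sudo journalctl -u graylog-datanode --since "1 hour ago" | grep -iE 'watermark|flood'
sudo grep -iE 'watermark|flood|ERROR' /var/log/graylog-datanode/datanode.log | tail -n 20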