Ran out of space - got hard drive now can't start UI


1. Describe your incident:
In the OpenSearch log:
flood stage disk watermark [95%] exceeded on [xyz] [/var/lib/opensearch/nodes/0] free: 2.2gb[3.1%], all indices on this node will be marked read-only

In the graylog-server log:
Please check the index error log in your web interface for the reason. Error: failure in bulk execution:

Then the system admin got me more space (an 800 GB hard drive), but he mounted it at /srv/graylog.
I was in the process of testing an install, so the data did not matter much.
How can I reset the indices (which point to the wrong hard drive) and then restart?
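For reference, once free space is available again, the read-only block that the flood-stage watermark placed on the indices can be cleared through the index settings API. This is only a sketch and assumes OpenSearch is reachable on localhost:9200:

```shell
# Clear the read_only_allow_delete block that the flood-stage
# watermark set on all indices (assumes OpenSearch on localhost:9200)
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```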

2. Describe your environment:

  • OS Information: Ubuntu 20.04

  • Package Version: Graylog 5, OpenSearch 2.0.1, MongoDB 5

  • Service logs, configurations, and environment variables:

opensearch.yml:

#node.name: node-1

#node.roles: ['master']

# Add custom attributes to the node:
#node.attr.rack: r1

# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /srv/graylog/opensearch

# Path to log files:
path.logs: /var/log/opensearch

# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
bootstrap.memory_lock: true

# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# OpenSearch performs poorly when the system is swapping the memory.

# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: local ip

# Set a custom port for HTTP:
http.port: 9200

# For more information, consult the network module documentation.

# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
discovery.type: single-node

# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.seed_hosts: ["host1", "host2"]


server.conf (graylog-server):

is_master = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting Graylog server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id2

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
# ATTENTION: This value must be the same on all Graylog nodes in the cluster.
# Changing this value after installation will render all user sessions and encrypted values in the database invalid. (e.g. encrypted access tokens)

bin_dir = /usr/share/graylog-server/bin

# Set the data directory here (relative or absolute)
# This directory is used to store Graylog server state.
# Default: data
data_dir = /var/lib/graylog-server

# Set plugin directory here (relative or absolute)
plugin_dir = /usr/share/graylog-server/plugin

http_bind_address = local-ip:9000

http_enable_cors = true

# Enable GZIP support for HTTP interface
# This compresses API responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.
#http_enable_gzip = true

# The maximum size of the HTTP request headers in bytes.
http_max_header_size = 8192

# The size of the thread pool used exclusively for serving the HTTP interface.
http_thread_pool_size = 64

stream_aware_field_types = false

# Comma separated list of trusted proxies that are allowed to set the client address with X-Forwarded-For
# header. May be subnets, or hosts.
#trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128

# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
# Default: http://127.0.0.1:9200
elasticsearch_hosts = local ip:9200
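A quick way to check whether the address configured in elasticsearch_hosts is actually reachable from the Graylog host (assuming the default port 9200) is:

```shell
# Ask OpenSearch for cluster health on the address Graylog will use;
# "connection refused" here means the bind address/port is wrong or the service is down
curl -X GET "http://127.0.0.1:9200/_cluster/health?pretty"
```

Note that, per the config comment above, elasticsearch_hosts entries need to be valid URIs, including the http:// scheme.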

3. What steps have you already taken to try and solve the problem?
I looked in the forum but did not find quite my problem, so I started to reinstall OpenSearch and MongoDB, but the high watermark is still there.

4. How can the community help?

Point me to an area where I can find command-line index deletion and watermark reset, as I can no longer log into the UI.
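For the watermark part: the thresholds can be changed at runtime through the cluster settings API. The percentages below are purely illustrative, not recommendations, and assume OpenSearch answers on localhost:9200:

```shell
# Temporarily relax the disk watermarks (example values only);
# setting each value to null afterwards restores the defaults
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {
        "cluster.routing.allocation.disk.watermark.low": "90%",
        "cluster.routing.allocation.disk.watermark.high": "93%",
        "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
      }}'
```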

I guess even though OpenSearch is preferred, there is no documentation for OpenSearch items in the Graylog conf files.


Hey @fixvirus,

You can view the current indices with the following:

curl -X GET localhost:9200/_cat/indices?v

Then delete with

curl -XDELETE localhost:9200/name_of_index

Make use of the wildcard function to delete multiple indices at a time; for example, the below would delete all indices whose names start with gray:

curl -X DELETE 'localhost:9200/gray*'

Once the indices are cleared, restart OpenSearch.
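Before restarting, the disk usage that OpenSearch itself sees can be checked with the _cat allocation API (again assuming the node answers on localhost:9200):

```shell
# Show disk used/available and shard counts per node,
# confirming the deletes actually freed space below the watermark
curl -X GET "localhost:9200/_cat/allocation?v"
```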

Thanks wine merchant,
I would do that, but I get connection refused.
Graylog and OpenSearch do not connect.
When I get back I will post more info.


elasticsearch_hosts = local ip:9200

Is "local ip" not the localhost IP but the IP address of the network adapter?
If so, change it to localhost or 127.0.0.1, unless OpenSearch should be accessible from outside.


The http_bind_address = local ip, which means 192.168.1.1 (not the actual address, but a similar local IP address scheme).

thanks


Given that, does the below work, or is the connection still refused? Is the local firewall enabled?

curl -X GET 192.168.1.1:9200/_cat/indices?v

No firewall, but MongoDB is not starting. I want to reinstall everything, so I deleted the database; reinstalling with MongoDB 6.0 has not changed anything.
I saw elsewhere that I may have broken MongoDB because of WiredTiger; trying to restart MongoDB now.
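A sketch of how to see why mongod refuses to start, assuming a systemd-based install with the default Ubuntu log path:

```shell
# Service state and the most recent log lines usually name the failure
# (e.g. a WiredTiger data-format mismatch after changing MongoDB versions)
systemctl status mongod --no-pager
journalctl -u mongod -n 50 --no-pager
tail -n 50 /var/log/mongodb/mongod.log
```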

Thanks for the attention, but I think I am just going to rebuild this test machine. I learned a few things, so I won't have to redo this in the future.

Sometimes a fiery crash is the shortest path to knowledge…

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.