OpenSearch exception

**While retrieving data for this widget, the following error(s) occurred:**

Can anyone help me fix this? It seems I have deleted the index by mistake.

(Screenshot of the indices overview)

Your indexes are set to an enormous size. That’s probably part of your problem.

Can you offer some details about what kind of environment you are working with?

What version of Graylog, and what version of OpenSearch or Elasticsearch?

You need to get the health of your storage cluster. Use this and post what it returns.

curl -X GET "http://localhost:9200/_cluster/health?pretty"

Depending on how you configured it, you may need to specify the IP or hostname of your ES/OS node.
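
For example, if the node is not listening on localhost, the same check would look something like this (the host below is just a placeholder for your node's address or IP):

# Assumption: replace <opensearch-host> with the IP or hostname your OpenSearch/Elasticsearch node listens on
curl -X GET "http://<opensearch-host>:9200/_cluster/health?pretty"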

Hi Chris, thanks for the reply.

Kindly find the output:

curl -X GET "http://localhost:9200/_cluster/health?pretty"
{
  "cluster_name" : "graylog",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 20,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 16,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 55.55555555555556
}
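
For reference, a quick way to see why those 16 shards are unassigned (assuming OpenSearch is reachable on localhost:9200, as in the command above) is to ask the cluster directly:

# List every shard with its state and, for unassigned shards, the reason
curl -X GET "http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason"

# Ask the allocator to explain the first unassigned shard it finds
curl -X GET "http://localhost:9200/_cluster/allocation/explain?pretty"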

Also find the server.conf config:

is_leader = true

node_id_file = /etc/graylog/server/node-id

password_secret =

# The default root user is named 'admin'

#root_username = admin

root_password_sha2 =

root_timezone = Asia/Kolkata

bin_dir = /usr/share/graylog-server/bin

data_dir = /var/lib/graylog-server

plugin_dir = /usr/share/graylog-server/plugin

###############

# HTTP settings

###############

# Default: 127.0.0.1:9000

http_bind_address = localhost:9000

http_publish_uri = http:

stream_aware_field_types=false

rotation_strategy = count

elasticsearch_max_docs_per_index = 20000000

elasticsearch_max_number_of_indices = 5

retention_strategy = delete

elasticsearch_shards = 1
elasticsearch_replicas = 0

elasticsearch_index_prefix =graylog

allow_leading_wildcard_searches = false

allow_highlighting = false

elasticsearch_analyzer = standard

output_batch_size = 500

output_flush_interval = 1

output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

processbuffer_processors = 12
outputbuffer_processors = 12

processor_wait_strategy = blocking

ring_size = 262144

inputbuffer_ring_size = 262144
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking

message_journal_enabled = true

message_journal_dir = /var/lib/graylog-server/journal

message_journal_max_age = 12h
message_journal_max_size = 60gb

lb_recognition_period_seconds = 3

mongodb_uri = mongodb://localhost/graylog

mongodb_max_connections = 1000

integrations_scripts_dir = /usr/share/graylog-server/scripts
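
Given the count-based rotation above (up to 20,000,000 documents per index, 5 indices retained), a quick way to see how large each Graylog index has actually grown is the _cat/indices API; this is a sketch assuming the graylog index prefix configured above and OpenSearch on localhost:9200:

# Show per-index health, document count and on-disk size
curl -X GET "http://localhost:9200/_cat/indices/graylog_*?v&h=index,health,docs.count,store.size"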

opensearch.yml:

cluster.name: graylog

node.name: graylog

path.data: /graylog/opensearch/data

path.logs: /var/log/opensearch

discovery.type: single-node

action.auto_create_index: true

plugins.security.disabled: true

Hope this information is enough; if anything else is needed, let me know.

Also find my instance specs:

RAM : 32 GB
Storage : 1TB

Daily incoming log volume is around 150-200 GB.

-Xms12g
-Xmx12g

Your OpenSearch status is red.
Here is some info to troubleshoot: Amazon OpenSearch Service cluster is in red or yellow status | AWS re:Post

Is this an issue with the configuration?

Hi Chris,

Can you suggest anything here?

@Rizwan, hard to say for sure. It may be a consequence of your index settings: you have created a situation where OpenSearch has to keep enormous indices open. There are 16 unassigned shards, which may be the result of asking OpenSearch to do more than your resources will support. The link @patrickmann provided will help you resolve the unassigned shards.

Once that is done, check whether you have run out of disk space, since that may be what caused the unassigned shards in the first place. You can check the OpenSearch logs for watermark stage messages. Of course, you can also just look at the storage stats for the volume containing your OpenSearch data and see whether it has run out of space.

Here is an introduction to the topic of watermark stages. I don't know whether you are dealing with that problem right now, but it will likely be useful information for you regardless; it is fundamental to understanding how OpenSearch protects itself from running out of space.
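
To check whether a full disk (and the flood-stage watermark) is behind the unassigned shards, something like the following should help. It assumes the path.data and path.logs values from the opensearch.yml above; the exact log file name follows the cluster name, so graylog.log is an assumption here:

# How full is the volume holding the OpenSearch data?
df -h /graylog/opensearch/data

# Per-node disk usage as OpenSearch reports it
curl -X GET "http://localhost:9200/_cat/allocation?v"

# Look for watermark messages in the OpenSearch log (file name assumed from cluster.name)
grep -i watermark /var/log/opensearch/graylog.log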

Hi Rizwan,

Did you run the command that Chris shared?
curl -X GET "http://localhost:9200/_cluster/health?pretty"
What result did you get? The error is showing the index graylog-4. When you run the above command you will get some error; after that, run the command below, and I hope it will resolve your issue:
curl -X PUT "http://localhost:9200/_cluster/health?pretty"

Hi, thanks for the reply.

You can see the output of that command in the comments above.

Hi Rizwan,

Could you please share your message in text form?
