Type=index_not_found_exception

Hello.

I’ve installed a Graylog server + OpenSearch to collect syslog messages from a Cisco environment.

But one problem remains after all the installation steps have passed.
When I create a new input and go to show received messages, I see:

“While retrieving data for this widget, the following error(s) occurred:
OpenSearch exception [type=index_not_found_exception, reason=no such index ].”

But in System/Indices, I see that I have a “default index set”.
And curl shows:
curl -XGET localhost:9200/_cat/indices
yellow open graylog_deflector QO1wN7nuQVSKx40Zw6NQVA 1 1 88 0 139.4kb 139.4kb
yellow open myservicename 1Th-Nn-LRqSlEJ2-uzUdJw 1 1 0 0 208b 208b

I’m a beginner at this, so I might have missed something; please help me solve this problem.

My environment:
Debian 10
Opensearch 2.0.1
Mongodb 6.0

The quick and easy thing to do would be to go to System/Indices, then click on the name of your index (probably default index set). You should see three buttons on the right.

Click the maintenance button and choose “Rotate Active Write Index”, then click “Recalculate Index Ranges”.
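If you prefer the command line, the range recalculation can also be triggered over the Graylog REST API. This is only a sketch: the endpoint path (`/system/indices/ranges/rebuild`), host, and credentials are assumptions; confirm them in your version's API browser at `http://<graylog>:9000/api/api-browser`. The curl call is commented out so the snippet is safe to run as-is.

```shell
# Sketch: rebuild ("recalculate") index ranges via the Graylog REST API.
# Endpoint path and credentials are assumptions; verify in the API browser.
API='http://localhost:9000/api'
rebuild_url="$API/system/indices/ranges/rebuild"
# Graylog requires an X-Requested-By header on POST requests:
# curl -u admin:PASSWORD -X POST -H 'X-Requested-By: cli' "$rebuild_url"
echo "$rebuild_url"
```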

If that doesn’t work, you will need to find out why OpenSearch is in a yellow state. You should see the state reflected on the System/Overview page, though yours may say OpenSearch rather than Elasticsearch.
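As a quick triage step, you can filter the `_cat/indices` listing down to just the yellow indices. The sample data below is the output quoted earlier in the thread; against a live node you would pipe `curl -s localhost:9200/_cat/indices` into the same filter.

```shell
# Filter a _cat/indices listing to the yellow indices (column 1 is health,
# column 3 is the index name). Sample data taken from the post above.
indices='yellow open graylog_deflector QO1wN7nuQVSKx40Zw6NQVA 1 1 88 0 139.4kb 139.4kb
yellow open myservicename 1Th-Nn-LRqSlEJ2-uzUdJw 1 1 0 0 208b 208b'
yellow=$(echo "$indices" | awk '$1 == "yellow" { print $3 }')
echo "$yellow"
```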

To find out why OpenSearch is in a degraded state, you can check both the Graylog and OpenSearch logs (locations are listed under “Default file locations” in the docs).

Take a look at those and come back here if you have more questions.

Hello, Chris, thank you very much for your reply.

I did as you said, but for some reason that page also shows Elasticsearch indices, not OpenSearch ones. Is this normal?

And when I click Recalculate index ranges I see that message:

“Could not create a job to start index ranges recalculation for graylog_deflector, reason: FetchError: There was an error fetching a resource: Bad Request. Additional information: graylog_deflector is not a Graylog-managed Elasticsearch index.”

Why is Elasticsearch mentioned there again?

The logs show:
2023-02-07T09:54:28.868+07:00 ERROR [AnyExceptionClassMapper] Unhandled exception in REST resource
java.lang.IllegalArgumentException: No JobDefinition for archiving restore action found!
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145) ~[graylog.jar:?]
at org.graylog.plugins.archive.job.ArchivingJobHandler.getArchiveRestoreJobDefinition(ArchivingJobHandler.java:42) ~[?:?]
at org.graylog.plugins.archive.job.ArchivingJobHandler.getTypeQuery(ArchivingJobHandler.java:95) ~[?:?]
at org.graylog.plugins.archive.job.ArchivingJobHandler.listArchiveTriggers(ArchivingJobHandler.java:110) ~[?:?]
at org.graylog.plugins.archive.job.ArchivingJobResourceHandler.listAllJobs(ArchivingJobResourceHandler.java:35) ~[?:?]
at org.graylog.scheduler.rest.JobResourceHandlerService.lambda$listJobs$0(JobResourceHandlerService.java:48) ~[graylog.jar:?]
at java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) ~[?:?]
at java.util.Iterator.forEachRemaining(Unknown Source) ~[?:?]
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
at java.util.stream.ReferencePipeline.collect(Unknown Source) ~[?:?]
at org.graylog.scheduler.rest.JobResourceHandlerService.listJobs(JobResourceHandlerService.java:48) ~[graylog.jar:?]
at org.graylog.scheduler.rest.JobResourceHandlerService.listJobsAsSystemJobSummary(JobResourceHandlerService.java:52) ~[graylog.jar:?]
at org.graylog2.rest.resources.cluster.ClusterSystemJobResource.list(ClusterSystemJobResource.java:89) ~[graylog.jar:?]
at jdk.internal.reflect.GeneratedMethodAccessor72.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[?:?]

And Graylog logs:
Feb 06 17:30:51 krk-log101 systemd[1]: Started Graylog server.
Feb 06 17:30:51 krk-log101 graylog-server[12142]: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.

I also found several solutions that suggested stopping Graylog, removing the indices, and starting it again.
I went through those steps, but I end up with the same error again.

curl -X GET "localhost:9200/_cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*&pretty" -H 'Content-Type: application/json' -d'
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}'

{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [my-index]",
        "index" : "my-index",
        "index_uuid" : "_na_"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [my-index]",
    "index" : "my-index",
    "index_uuid" : "_na_"
  },
  "status" : 404
}

Hey @boostmachine

just chiming in

I see your cluster is in yellow. Your curl command shows that the index [my-index] is not there. Do you have an index called [my-index], or any stream connected to [my-index]?
As you posted above, you have a couple of indices in yellow, one of them called myservicename:

yellow open myservicename 1Th-Nn-LRqSlEJ2-uzUdJw 1 1 0 0 208b 208b

cURL example: this command's output is a little more readable when you add ?v:

curl 'localhost:9200/_cat/indices?v'

These may be dangling indices, but I'm not 100% sure. You can check with:

curl -X GET "localhost:9200/_dangling?pretty"

Your logs show:

 No JobDefinition for archiving restore action found!

I assume you have Graylog 5 installed? By chance, is this an open-source install, or do you have a license?

Are you using the security plugin with OpenSearch?

From my personal documentation, the usual reasons why indices go yellow:

  • You have restarted a node
  • Node crashes
  • Networking issues
  • Disk space issues
  • Node allocation awareness
  • Shard has exceeded the maximum number of retries

From your first post:

Try remaking the widget; it seems you had an index for that widget, but now it is gone and/or renamed.

I found this post; perhaps it will help.

Hello there, thank you for your reply.

Yeah, I made a mistake and inserted the wrong command output.

My index is called graylog_deflector.

So this output is:

curl 'localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open graylog_deflector mYGtIvk0QsK_Oun_yeZHyw 1 1 17 0 151.5kb 151.5kb

And another one:

curl -X GET "localhost:9200/_dangling?pretty"
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "graylog",
  "dangling_indices" : [ ]
}

And yes, you’re totally right: I have Graylog version 5, and it is the open-source version.

I just don’t understand why, when I start the Graylog server, there are two shards with the same name and one of them is unassigned:

curl 'localhost:9200/_cat/shards'
graylog_deflector 0 p STARTED 17 151.5kb 10.53.15.147 krk-log101
graylog_deflector 0 r UNASSIGNED

And about your advice to remake the widget: could you give me some tips on how I can see the allocation of indices and widgets?

Hey @boostmachine

The only way to tell is if you show your configuration file (Graylog) and perhaps your ES/OS yaml file, plus any configuration you used when creating the index sets.
If you do post here, please remove and/or rename personal information, and use markdown so the configuration and log files are readable here. Thanks!

graylog_deflector 0 r UNASSIGNED

If this is a new setup, I would remove the UNASSIGNED shard and then manually rotate your index set(s).

EDIT: after looking back over your post, this is not needed. You have a shard issue.

curl -X DELETE "localhost:9200/my-index-000001?pretty"

If you have data, then you can try to recover it.

From what I read in this post, my guess would be that an index was created and is now gone, and you have unassigned shards pointing to that index.

Hey @boostmachine

I overlooked this:

P = Primary  shard
R = replica shard

That's why you have two lines. Yeah, something is up with your index set.
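To make the p/r point concrete: a replica must be allocated to a different node than its primary, so on a single-node cluster the replica copy can never be assigned and the index stays yellow. A small filter over the `_cat/shards` output from the thread (embedded as sample data; pipe in live `curl -s localhost:9200/_cat/shards` output in practice) shows the stuck copy:

```shell
# "p" = primary copy, "r" = replica copy of the same shard. The replica is
# UNASSIGNED because there is no second node to hold it.
shards='graylog_deflector 0 p STARTED 17 151.5kb 10.53.15.147 krk-log101
graylog_deflector 0 r UNASSIGNED'
unassigned=$(echo "$shards" | awk '$4 == "UNASSIGNED" { print $1, $2, $3 }')
echo "$unassigned"   # -> graylog_deflector 0 r
```

On a single-node setup, setting an index's number_of_replicas to 0 lets it go green; for a Graylog-managed index that is controlled by `elasticsearch_replicas` in server.conf.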

I appreciate your help a lot.
So I will put here only the lines that are uncommented in Graylog's server.conf.

is_leader = false
node_id_file = /etc/graylog/server/node-id
password_secret = ###
root_username = ###
root_password_sha2 = ###
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 0.0.0.0:9000
http_publish_uri = http://10.X.X.X:9000/
stream_aware_field_types=false
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
integrations_scripts_dir = /usr/share/graylog-server/scripts

And maybe the configuration of the opensearch.yml file will be useful for the investigation:

cluster.name: graylog
node.name: ${HOSTNAME}
path.data: /graylog/opensearch/data
path.logs: /var/log/opensearch
network.host: 0.0.0.0
discovery.type: single-node
action.auto_create_index: false
plugins.security.disabled: true

By the way, I did this:
curl -X DELETE "localhost:9200/graylog_deflector?pretty"
and there is still a problem: the index was recreated and is in yellow status again.

Hello,

I had something similar on some of my indices.
The deflector is not “a real” index; I had to do the following:

  1. stop the INPUT(s)
  2. curl -X DELETE localhost:9200/*_deflector (on the node ES/OS is running on)
  3. rotate the graylog index, and you should see graylog_0 (index) in the GUI
  4. start the input
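A sketch of step 2 as commands (assuming OpenSearch on localhost:9200; the destructive curl is commented out on purpose). The underlying point is that in a healthy setup graylog_deflector is an *alias* pointing at the newest graylog_&lt;N&gt; index, not a concrete index, which you can verify with the _alias API before deleting anything:

```shell
# Check whether graylog_deflector is an alias (healthy) or a concrete
# index (the problem case), then delete any stray *_deflector indices.
check_url='localhost:9200/_alias/graylog_deflector?pretty'
delete_url='localhost:9200/*_deflector'
# curl -s "$check_url"
# curl -X DELETE "$delete_url"
echo "$delete_url"
```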

Sorry, my reply was blocked as spam.

I ran this command but with my index name (graylog_deflector): curl -X DELETE "localhost:9200/my-index-000001?pretty", then reloaded the Graylog server, and it still doesn’t work.

So here are my config files for Graylog and OpenSearch; could you check whether I made a mistake there?
Graylog:

> is_leader = false
> node_id_file = /etc/graylog/server/node-id
> password_secret = ###
> root_username = ###
> root_password_sha2 = ###
> bin_dir = /usr/share/graylog-server/bin
> data_dir = /var/lib/graylog-server
> plugin_dir = /usr/share/graylog-server/plugin
> http_bind_address = 0.0.0.0:9000
> http_publish_uri = http://10.X.X.X:9000/
> stream_aware_field_types=false
> rotation_strategy = count
> elasticsearch_max_docs_per_index = 20000000
> elasticsearch_max_number_of_indices = 20
> retention_strategy = delete
> elasticsearch_shards = 4
> elasticsearch_replicas = 0
> elasticsearch_index_prefix = graylog
> allow_leading_wildcard_searches = false
> allow_highlighting = false
> elasticsearch_analyzer = standard
> output_batch_size = 500
> output_flush_interval = 1
> output_fault_count_threshold = 5
> output_fault_penalty_seconds = 30
> processbuffer_processors = 5
> outputbuffer_processors = 3
> processor_wait_strategy = blocking
> ring_size = 65536
> inputbuffer_ring_size = 65536
> inputbuffer_processors = 2
> inputbuffer_wait_strategy = blocking
> message_journal_enabled = true
> message_journal_dir = /var/lib/graylog-server/journal
> lb_recognition_period_seconds = 3
> mongodb_uri = mongodb://localhost/graylog
> mongodb_max_connections = 1000
> integrations_scripts_dir = /usr/share/graylog-server/scripts

Opensearch:

cluster.name: graylog
node.name: ${HOSTNAME}
path.data: /graylog/opensearch/data
path.logs: /var/log/opensearch
network.host: 0.0.0.0
discovery.type: single-node
action.auto_create_index: false
plugins.security.disabled: true

Hello, thank you for your reply.

After the first two steps, the GUI page with the index does not load and hangs like this:

Just endless loading. What’s wrong with that? :unamused:

Are the file permissions OK for the OS paths?

Hey @boostmachine

I have questions on your configuration files, if you dont mind.

1. is_leader = false

  • Is this one node, or is this server in a cluster? If it’s one node, it should be set to TRUE.
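For a standalone server, the relevant line would be the following config fragment (is_leader replaced the older is_master setting in recent Graylog versions):

```
# /etc/graylog/server/server.conf
# Exactly one node in a Graylog cluster must be the leader; on a
# single-node install that is this node:
is_leader = true
```

Leader-only background jobs (index rotation among them) only run on the leader, so a lone node with is_leader = false never rotates its indices.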

2. path.data: /graylog/opensearch/data

  • This data path doesn’t look normal. By chance did you reconfigure it? I thought the data path should be…

# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/opensearch

Hey again!

Thank you for the advice about the leader option; I will change it.

But the other one comes from this manual (Debian installation), which uses the same data path. Should I change it? Is there any chance this will help?
Thank you

Hello Iaakus.

These file permissions are as in the manual:

drwxr-s---  3 opensearch opensearch   4096 Jun 15  2022 bin
drwxr-s---  9 opensearch opensearch   4096 Feb  7 13:26 config
drwxr-s---  3 opensearch opensearch   4096 Jan 26 16:58 data
drwxr-s---  9 opensearch opensearch   4096 Jan  1  1970 jdk
drwxr-s---  3 opensearch opensearch   4096 Jan  1  1970 lib
-rwxr-s---  1 opensearch opensearch  11358 Jan  1  1970 LICENSE.txt
drwxr-s---  2 opensearch opensearch   4096 Feb  6 17:28 logs
-rwxr-s---  1 opensearch opensearch   5797 Jun 15  2022 manifest.yml
drwxr-s--- 19 opensearch opensearch   4096 Jan  1  1970 modules
-rwxr-s---  1 opensearch opensearch 216309 Jan  1  1970 NOTICE.txt
-rwxr-s---  1 opensearch opensearch   2339 Jun 15  2022 opensearch-tar-install.sh
drwxr-s---  5 opensearch opensearch   4096 Jun 15  2022 performance-analyzer-rca
drwxr-s--- 17 opensearch opensearch   4096 Jun 15  2022 plugins
-rwxr-s---  1 opensearch opensearch   2462 Jan  1  1970 README.md

hey,

Yeah, I just looked at that; the documentation needs to be updated. I installed OpenSearch on Ubuntu 22.04 like so, from the OpenSearch documentation.

You can now install OpenSearch with APT, like you did with Elasticsearch.

This is the default:

root@ansible:/etc/apt/sources.list.d#  cat /etc/opensearch/opensearch.yml  | egrep -v "^\s*(#|$)"
cluster.name: graylog
path.data: /var/lib/opensearch
path.logs: /var/log/opensearch
network.host: 10.10.10.10
http.port: 9200
plugins.security.disabled: true
discovery.type: single-node
bootstrap.memory_lock: true
action.auto_create_index: false
root@ansible:/etc/apt/sources.list.d#

This may or may not help, but I do not have this issue. It's up to you, but I would just let the OpenSearch installation do its thing :wink: and then you may not have to worry about permissions.

HEY GSMITH!

I changed the “leader” option, and it seems to have worked.

I don’t know exactly what it had to do with it; was the node looking for other nodes, and is that why the shards didn’t work?

BUT in any case, thanks so much to everyone who participated in helping with this issue, and especially to you!



Holy cow man, it's that easy!

Hey, for the future, this is all I did to install OpenSearch; here is a piece of my shell history:

 1682  wget https://artifacts.opensearch.org/releases/bundle/opensearch/2.5.0/opensearch-2.5.0-linux-x64.deb
 1683  ls
 1684  sudo dpkg -i opensearch-2.5.0-linux-x64.deb
 1685  sudo systemctl daemon-reload
 1686  sudo systemctl enable opensearch.service
 1687  sudo systemctl start opensearch.service
 1688  sudo systemctl status  opensearch.service
 1689  vi /etc/opensearch/opensearch.yml
 1690  systemctl restart opensearch

Not only that, but I'm running Graylog 5.0 for testing with MongoDB 4.4.18, and it seems to work. I have an issue installing MongoDB 5.0 because of the CPU I'm using. Glad it's fixed; if you can mark it as resolved for future searches, that would be great :+1:


Hahaha, that’s so funny, but it wasn’t that obvious to me, sorry.
And it’s funny that your way of installing this service is just a few lines and maybe 20 minutes of time, while in my case it was days and tons of lost neurons. :joy:
But thank you all again, and of course I will mark it as resolved! :ok_hand:t3:


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.