OpenSearch Snapshot Plugin Issue


First and foremost, apologies if this is the wrong forum; however (and this is no slight to the OpenSearch forums), this forum is just way more responsive and seemingly more willing to assist.

1. Describe your incident:
I get the following error when attempting to set up a snapshot policy for OpenSearch:

{"error":"no handler found for uri [/_plugins/_sm/policies/snapshot_policy] and method [POST]"}

2. Describe your environment:
OpenSearch 2.0.1
Server OS: Ubuntu 22.04 in LXC guest; LXC host: Debian Bookworm
Graylog: 5.0.3+a82acb2 (open/community edition)

  • Service logs, configurations, and environment variables:
curl -XPOST -v http://127.0.0.1:9200/_plugins/_sm/policies/snapshot_policy -H 'Content-Type: application/json' -d '{
  "description": "Snapshot Schedule",
  "creation": {
    "schedule": { "cron": { "expression": "0 5 * * *", "timezone": "UTC" } },
    "time_limit": "None"
  },
  "deletion": {
    "schedule": { "cron": { "expression": "0 0 * * *", "timezone": "UTC" } },
    "condition": { "max_age": "90d", "min_count": 7 },
    "time_limit": "None"
  },
  "snapshot_config": {
    "date_format": "yyyy-MM-dd",
    "timezone": "UTC",
    "indices": "*",
    "repository": "my-fs-repository",
    "ignore_unavailable": "true",
    "include_global_state": "false",
    "partial": "true",
    "metadata": { "any_key": "any_value" }
  }
}'
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 127.0.0.1:9200...
* Connected to 127.0.0.1 (127.0.0.1) port 9200 (#0)
> POST /_plugins/_sm/policies/snapshot_policy HTTP/1.1
> Host: 127.0.0.1:9200
> User-Agent: curl/7.81.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 556
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 400 Bad Request
< content-type: application/json; charset=UTF-8
< content-length: 95
<
* Connection #0 to host 127.0.0.1 left intact
{"error":"no handler found for uri [/_plugins/_sm/policies/snapshot_policy] and method [POST]"}
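For context, a "no handler found" response usually means no installed plugin serves that route. If I recall correctly, the Snapshot Management (`_sm`) API only arrived with the index-management plugin in OpenSearch 2.1, so it may simply not exist on 2.0.1. The `_cat/plugins` API is a quick way to see what the node actually has (a sketch, assuming the same localhost setup):

```shell
# List every plugin installed on the node; Snapshot Management ships in
# the opensearch-index-management plugin (OpenSearch >= 2.1, if memory serves).
curl -s 'http://127.0.0.1:9200/_cat/plugins?v'

# Filter for index management specifically:
curl -s 'http://127.0.0.1:9200/_cat/plugins' | grep -i index-management
```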

Configuration

# ======================== OpenSearch Configuration =========================
#
# NOTE: OpenSearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.opensearch.org
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: graylog
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: graylogopen
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /graylog/opensearch/data
#
# Path to log files:
#
path.logs: /var/log/opensearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# OpenSearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
discovery.type: single-node
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["SERVERNAME01", "SERVERNAME02", "SERVERNAME03"]
#
# Bootstrap the cluster using an initial set of cluster-manager-eligible nodes:
#
#cluster.initial_cluster_manager_nodes: ["SERVERNAME01", "SERVERNAME02", "SERVERNAME03"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
action.auto_create_index: false
plugins.security.disabled: true
path.repo: ["/media/RAID5/graylog-open/opensearch"]

3. What steps have you already taken to try and solve the problem?

The usual Google-foo

Thank you!


:smiley:

Some of us are slowly adding the OpenSearch forum to our repertoire

1 Like

I just hope one day I will be able to reach a level of understanding where I can peruse forums and provide help vs just look for answers haha

2 Likes

Have not worked with OpenSearch at all, but I posted already and feel obligated to look at this… :crazy_face:

Have you registered a repository for the snapshot first? I was looking at these instructions here and that stood out. You may have just not mentioned it though. Where are you pulling your snapshot instructions from?

1 Like


There is no try, only do. If you hang around and read enough, you can start answering the simple questions and it all cascades… you just have to have the interest…

2 Likes

Hey @accidentaladmin

Looks like you're trying to configure a repo? If so, not sure if this will help, but for Graylog/OpenSearch I have done this.

path.repo: ["/mnt/elasticsearch/my_repo"]

&&

curl -X PUT "localhost:9200/_snapshot/my_repo?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "/mnt/elasticsearch/my_repo"
  }
}
'

Once completed run the following.

curl -X PUT "localhost:9200/_snapshot/my_repo/snapshot_1?wait_for_completion=true&pretty"

Restore:

curl -X POST "localhost:9200/_snapshot/my_repo/snapshot_1/_restore?pretty"

hope that helps
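One more sanity check worth running: confirm the repository actually registered before taking the first snapshot (a sketch, assuming the same `my_repo` name):

```shell
# Show the registered repository definition; a 404 here means the
# PUT registration never succeeded.
curl -X GET "localhost:9200/_snapshot/my_repo?pretty"

# Or list all registered snapshot repositories:
curl -X GET "localhost:9200/_snapshot/_all?pretty"
```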

1 Like

So this filled in one missing piece (the mounting of the drive, duh) and verified the snapshot as viable; however, I am still trying to get the snapshot scheduler to do the work for me automatically.

In my journeys, it looks like I need to install the OpenSearch plugin “Index Management” to get the “snapshot policy” feature. However, when I attempt to list my existing OpenSearch plugins, I get this error:

/graylog/opensearch/jdk/bin/java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory

This appears to be the result of Java’s non-standard location, by way of it being “baked in” to Graylog 5. I’ve tried the symbolic-link trick (placing a symbolic link where Java is expected, pointing to its actual location in /graylog/opensearch/jdk) and also tried editing the paths in my .bashrc, but the tips I am picking up are proposed for similar, not identical, causes of the above-referenced error output.
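One thing that might be worth trying before symlinks: point the tool at the bundled JDK explicitly. OpenSearch honors the `OPENSEARCH_JAVA_HOME` environment variable, and `libjli.so` ships inside the JDK's `lib` directory, so extending `LD_LIBRARY_PATH` for a single invocation is another option. (Paths below assume the Graylog-bundled layout from the error message; this is a sketch, not a verified fix.)

```shell
# Option 1: tell the plugin tool which JDK to use, for this invocation only.
OPENSEARCH_JAVA_HOME=/graylog/opensearch/jdk /graylog/opensearch/bin/opensearch-plugin list

# Option 2: let the dynamic linker find libjli.so without any symlinks.
LD_LIBRARY_PATH=/graylog/opensearch/jdk/lib:$LD_LIBRARY_PATH \
  /graylog/opensearch/bin/opensearch-plugin list
```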

Has anyone encountered this issue and, if so, has anyone found a solution?

Thank you!

The interest is there, for sure!

Hey @accidentaladmin

I think you are correct. In my lab I have Graylog 5 && OpenSearch. My understanding is that OpenSearch Dashboards snapshot management is free, but Graylog 5 Archives is not free; the two may conflict.

I see; yeah, my way only does it once, and you want to do it on a schedule (hour, day, month, etc.).

I feared this may be the case which is kinda sucky in that it appears intentional.

But I am a supporter of “if you create something, you control something”. If I don’t like it, I can kick rocks.

1 Like

So I have tried and tried to get OpenSearch plugins that would allow scheduled snapshots to work, to no avail. I am pretty sure scheduled snapshots are intentionally nerfed by Graylog.

So, now I am trying to create a shell script that will auto run the snapshot from crontab. This is what I have so far:

#!/bin/bash
folder_date=$(date +%F-%T.%z)
my_repo=/media/RAID5/media/RAID5/graylog-open/opensearch/$folder_date
json_string=$( jq -n \
                --arg mr "$my_repo"\
                '{ "type": "fs", "settings": { "location": $mr } }' )


mkdir -p /media/RAID5/media/RAID5/graylog-open/opensearch/$folder_date &&
curl -X PUT "localhost:9200/_snapshot/my_repo?pretty" -H 'Content-Type: application/json' -d $json_string &&
curl -X PUT "localhost:9200/_snapshot/my_repo/$folder_date?wait_for_completion=true&pretty"

Of course, this produces an error:

root@Graylog-Open:~# ./opensearch-backup.sh
{
  "error" : {
    "root_cause" : [
      {
        "type" : "json_e_o_f_exception",
        "reason" : "Unexpected end-of-input: expected close marker for Object (start marker at [Source: (org.opensearch.common.io.stream.InputStreamStreamInput); line: 1, column: 1])\n at [Source: (org.opensearch.common.io.stream.InputStreamStreamInput); line: 1, column: 2]"
      }
    ],
    "type" : "json_e_o_f_exception",
    "reason" : "Unexpected end-of-input: expected close marker for Object (start marker at [Source: (org.opensearch.common.io.stream.InputStreamStreamInput); line: 1, column: 1])\n at [Source: (org.opensearch.common.io.stream.InputStreamStreamInput); line: 1, column: 2]"
  },
  "status" : 400
}
curl: (3) URL using bad/illegal format or missing URL
curl: (6) Could not resolve host: "fs",
curl: (3) URL using bad/illegal format or missing URL
curl: (3) unmatched brace in URL position 1:
{
 ^

Modifying this ever so slightly to:

#!/bin/bash
folder_date=$(date +%F-%T.%z)
my_repo=/media/RAID5/media/RAID5/graylog-open/opensearch/$folder_date
json_string=$( jq -n \
                --arg mr "$my_repo"\
                { "type": "fs", "settings": { "location": $mr } } )


mkdir -p /media/RAID5/media/RAID5/graylog-open/opensearch/$folder_date &&
curl -X PUT "localhost:9200/_snapshot/my_repo?pretty" -H 'Content-Type: application/json' -d '$json_string' &&
curl -X PUT "localhost:9200/_snapshot/my_repo/$folder_date?wait_for_completion=true&pretty"

Produces this error:

root@Graylog-Open:~# ./opensearch-backup.sh
jq: error: syntax error, unexpected $end (Unix shell quoting issues?) at <top-level>, line 1:
{
jq: 1 compile error
{
  "error" : {
    "root_cause" : [
      {
        "type" : "json_parse_exception",
        "reason" : "Unrecognized token '$json_string': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\n at [Source: (org.opensearch.common.io.stream.InputStreamStreamInput); line: 1, column: 13]"
      }
    ],
    "type" : "json_parse_exception",
    "reason" : "Unrecognized token '$json_string': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\n at [Source: (org.opensearch.common.io.stream.InputStreamStreamInput); line: 1, column: 13]"
  },
  "status" : 400
}

It should be obvious that I do not have much experience with JSON or shell quoting, so I am sure that is the root cause of my issues. However, if anyone has any suggestions, I would greatly appreciate it!
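For what it's worth, the JSON that jq produced in the first script was fine; the failure is shell quoting. `-d $json_string` is unquoted, so the shell word-splits the JSON into many separate arguments (which is why curl started treating `"fs",` as a hostname), while the single quotes in `-d '$json_string'` prevent expansion entirely and send the literal text `$json_string`. A minimal pure-bash sketch of the difference, using a hypothetical JSON string:

```shell
#!/bin/bash
# Hypothetical JSON, analogous to what jq builds in the script above.
json='{ "type": "fs", "settings": { "location": "/tmp/repo" } }'

# Unquoted: the shell splits the JSON on whitespace into 9 arguments.
unquoted_count=$(printf '%s\n' $json | wc -l)

# Double-quoted: the JSON survives as a single argument.
quoted_count=$(printf '%s\n' "$json" | wc -l)

echo "unquoted=$unquoted_count quoted=$quoted_count"
```

So the working form is `-d "$json_string"`: double quotes keep the variable expansion but suppress the word splitting.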

Hey,

Good idea :+1:

Curious, have you tried Expect? I normally use it when I need to execute command lines on switches or other software; pretty simple.

For example, this gets my table/collection from MongoDB called traffic and makes a file called traffic.json; it's how I get data from mongod and ingest it into Graylog. I'm quite sure you can do the same with a curl command, etc…

#!/usr/bin/expect -f

set timeout 20

spawn mongoexport -u mongo_admin -p primalFear --collection=traffic --db=graylog --out=/var/log/streams/traffic.json

expect eof

I know absolutely nothing about “expect” but will investigate. As it is, here is the most recent iteration of the back-up script:

#!/bin/bash
my_repo=/media/RAID5/graylog-open/opensearch/$(date +%F)/

cp -v /graylog/opensearch/config/opensearch.yml /root/opensearch.$(date +%F) &&
sed -i -r "s@([path.repo]+:\s\[.*\])@path\.repo\:\ \[$my_repo\]@g" /graylog/opensearch/config/opensearch.yml &&
mkdir -p /media/RAID5/graylog-open/opensearch/$(date +%F) &&
curl -X PUT "localhost:9200/_snapshot/my_repo?pretty" -H 'Content-Type: application/json' -d "{ \"type\": \"fs\", \"settings\": { \"location\": \"$my_repo\" }}" &&
curl -X PUT "localhost:9200/_snapshot/my_repo/snapshot?wait_for_completion=true&pretty"

But this produces:

root@Graylog-Open:~# ./opensearch-backup.sh
'/graylog/opensearch/config/opensearch.yml' -> '/root/opensearch.2023-03-21'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "exception",
        "reason" : "failed to create blob container"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[my_repo] path  is not accessible on master node",
    "caused_by" : {
      "type" : "exception",
      "reason" : "failed to create blob container",
      "caused_by" : {
        "type" : "access_denied_exception",
        "reason" : "/media/RAID5/graylog-open/opensearch/2023-03-21/tests-m2gNMZ18TpiN3jIKrOhwGw"
      }
    }
  },
  "status" : 500
}
^C
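That `access_denied_exception` usually points at filesystem permissions rather than the API: `path.repo` only whitelists the location, and the OpenSearch process still needs write access to it. A hedged check, assuming the service runs as an `opensearch` user:

```shell
repo_dir="/media/RAID5/graylog-open/opensearch/$(date +%F)"

# Can the service user actually write inside the repository directory?
sudo -u opensearch test -w "$repo_dir" || echo "opensearch cannot write to $repo_dir"

# If not, hand the directory over to the service user:
chown -R opensearch:opensearch "$repo_dir"
```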

Any thoughts? I feel like I am so, so close.

Ok, so this is the solution that appears to work:

graylogopen_backup.sh

#!/bin/bash
/root/graylogopen_conf.sh &&
/root/mongodb_backup.sh &&
/root/opensearch_backup.sh

graylogopen_conf.sh

#!/bin/bash
time=$(date +%F-%T.%z)
conf_file="/media/RAID5/graylog-open/conf/$(date +%F-%T.%z)_server.conf"
if test -f "/etc/graylog/server/server.conf";
        then
                touch $conf_file
                cat /etc/graylog/server/server.conf >> $conf_file
        else
                echo "No server.conf file"
fi

mongodb_backup.sh

#!/bin/bash
folder_date=$(date +%F)
dump=$(sudo mongodump --uri="mongodb://127.0.0.1:27017" -o /media/RAID5/graylog-open/mongodb/$(date +%F))
if test -d "/media/RAID5/graylog-open/mongodb/$(date +%F)";
        then
                $dump
        else
                mkdir -p /media/RAID5/graylog-open/mongodb/$(date +%F) &&
                $dump
fi &&
tar cvf - /media/RAID5/graylog-open/mongodb/$(date +%F)/ | lz4 -9q - /media/RAID5/graylog-open/mongodb/$(date +%F).tar.lz4 &&
rm -rf /media/RAID5/graylog-open/mongodb/$(date +%F)
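A side note on `mongodb_backup.sh`: `dump=$(sudo mongodump ...)` actually runs mongodump at the assignment itself (before the `mkdir`), and the later `$dump` expands to mongodump's captured stdout rather than re-running the command. It works here because `mongodump -o` creates its output directory and logs to stderr, but the behavior is easy to misread. A small pure-bash sketch of the pitfall:

```shell
#!/bin/bash
# `var=$(cmd)` runs cmd immediately and stores its OUTPUT; expanding
# $var later executes that stored text, not the command again.
marks=$(mktemp)
bump() { echo x >> "$marks"; }   # appends one line per call

dump=$(bump)   # bump runs HERE, during the assignment
$dump          # expands to the empty string; bump does NOT run again

runs=$(wc -l < "$marks")
echo "runs=$runs"   # prints: runs=1
rm -f "$marks"
```

Defining a shell function (`dump() { sudo mongodump ...; }`) is the usual way to defer the command until it is actually called.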

opensearch_backup.sh

#!/bin/bash
my_repo=/media/RAID5/graylog-open/opensearch/$(date +%F)/

cp -v /graylog/opensearch/config/opensearch.yml /root/opensearch_yml_backups/opensearch.$(date +%F) &&
sed -i -r "s@([path.repo]+:\s\[.*\])@path\.repo\:\ \[$my_repo\]@g" /graylog/opensearch/config/opensearch.yml &&
mkdir -p /media/RAID5/graylog-open/opensearch/$(date +%F) &&
chown -R opensearch:opensearch /media/RAID5/graylog-open/opensearch/$(date +%F) &&
curl -X PUT "localhost:9200/_snapshot/my_repo?pretty" -H 'Content-Type: application/json' -d "{ \"type\": \"fs\", \"settings\": { \"location\": \"$my_repo\" }}" &&
curl -X PUT "localhost:9200/_snapshot/my_repo/snapshot?wait_for_completion=true&pretty" &&
tar cvf - /media/RAID5/graylog-open/opensearch/$(date +%F)/ | lz4 -9q - /media/RAID5/graylog-open/opensearch/$(date +%F).tar.lz4 &&
rm -rf /media/RAID5/graylog-open/opensearch/$(date +%F)

Verified to work on my rig. I run it nightly.
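For anyone borrowing this: the nightly run is just a cron entry. A sketch (the schedule and log path are my own examples, not from the post above):

```shell
# /etc/crontab-style line: run the backup wrapper at 02:00 nightly and
# keep the output for troubleshooting (time and log path are illustrative).
0 2 * * * root /root/graylogopen_backup.sh >> /var/log/graylog-backup.log 2>&1
```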

1 Like

small change to opensearch_backup.sh (opensearch daemon needs a restart after modifying the config)

#!/bin/bash
my_repo=/media/RAID5/graylog-open/opensearch/$(date +%F)/

cp -v /graylog/opensearch/config/opensearch.yml /root/opensearch_yml_backups/opensearch.$(date +%F) &&
sed -i -r "s@([path.repo]+:\s\[.*\])@path\.repo\:\ \[$my_repo\]@g" /graylog/opensearch/config/opensearch.yml &&
systemctl restart opensearch &&
sleep 30s &&
systemctl status opensearch &&
sleep 900s && 
mkdir -p /media/RAID5/graylog-open/opensearch/$(date +%F) &&
chown -R opensearch:opensearch /media/RAID5/graylog-open/opensearch/$(date +%F) &&
curl -X PUT "localhost:9200/_snapshot/my_repo?pretty" -H 'Content-Type: application/json' -d "{ \"type\": \"fs\", \"settings\": { \"location\": \"$my_repo\" }}" &&
curl -X PUT "localhost:9200/_snapshot/my_repo/snapshot?wait_for_completion=true&pretty" &&
tar cvf - /media/RAID5/graylog-open/opensearch/$(date +%F)/ | lz4 -9q - /media/RAID5/graylog-open/opensearch/$(date +%F).tar.lz4 &&
rm -rf /media/RAID5/graylog-open/opensearch/$(date +%F)
1 Like

I wanted to apologize for this language. What I should have said is that I am more comfortable within this forum as opposed to over at OpenSearch. OpenSearch is full of great members, too. I should not have been so thoughtless in my knee-jerk assessment.

1 Like

LMAO @accidentaladmin, you're all good, I bounce between both. I just realized it was 27 days ago that you posted that in the OpenSearch forum. Damn, if I knew that was you I would have responded

1 Like

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.