There is no index target to point to. Creating one now


1. Describe your incident:
The indexes are reset and rebuilt from time to time. The following message is logged for each index: "There is no index target to point to. Creating one now."
Graylog is deployed as a Docker cluster with 3 nodes.
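For reference, this is roughly how I check whether each deflector alias still points at an index (a minimal sketch; it assumes OpenSearch answers on localhost:9200 as in my setup):

# list all aliases and the index each one points to; every <set>_deflector should have a target
curl -XGET "http://localhost:9200/_cat/aliases?v"

# check a single deflector explicitly, e.g. the default index set
curl -XGET "http://localhost:9200/_alias/graylog_deflector?pretty"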

2. Describe your environment:

  • OS Information: Debian 11

  • Package Version (docker-compose.yml):
    version: "3.8"
    services:
      mongodb:
        image: "mongo:5.0"
      opensearch:
        image: "opensearchproject/opensearch:2.4.0"
      graylog:
        image: "${GRAYLOG_IMAGE:-graylog/graylog:5.0}"

  • Service logs, configurations, and environment variables:
    |Timestamp|Node|Message|
    |—|—|—|
    |2023-11-07T12:04:30+02:00| 4097e5e1 / graylog1|SystemJob [org.graylog2.indexer.indices.jobs.SetIndexReadOnlyAndCalculateRangeJob] finished in 1141ms.|
    |2023-11-07T12:04:29+02:00| 4097e5e1 / graylog1|Optimizing index <graylog_1>.|
    |2023-11-07T12:04:29+02:00| 4097e5e1 / graylog1|Flushed and set <graylog_1> to read-only.|
    |2023-11-07T12:03:59+02:00| 4097e5e1 / graylog1|Cycled index alias <graylog_deflector> from <graylog_1> to <graylog_2>.|
    |2023-11-07T06:55:42+02:00| 4097e5e1 / graylog1|SystemJob [org.graylog2.indexer.indices.jobs.OptimizeIndexJob] finished in 53086ms.|
    |2023-11-07T06:54:49+02:00| 4097e5e1 / graylog1|SystemJob [org.graylog2.indexer.indices.jobs.SetIndexReadOnlyAndCalculateRangeJob] finished in 319ms.|
    |2023-11-07T06:54:49+02:00| 4097e5e1 / graylog1|Optimizing index <graylog_0>.|
    |2023-11-07T06:54:49+02:00| 4097e5e1 / graylog1|Flushed and set <graylog_0> to read-only.|
    |2023-11-07T06:54:19+02:00| 4097e5e1 / graylog1|Cycled index alias <graylog_deflector> from <graylog_0> to <graylog_1>.|
    |2023-11-07T04:36:13+02:00| 4097e5e1 / graylog1|Cycled index alias <juniper_deflector> from to <juniper_0>.|
    |2023-11-07T04:36:13+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T04:36:13+02:00| 4097e5e1 / graylog1|Cycled index alias <hp_deflector> from to <hp_0>.|
    |2023-11-07T04:36:08+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T04:35:59+02:00| 4097e5e1 / graylog1|Cycled index alias <graylog_deflector> from to <graylog_0>.|
    |2023-11-07T04:35:58+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T04:35:58+02:00| 4097e5e1 / graylog1|Cycled index alias <a10_deflector> from to <a10_0>.|
    |2023-11-07T04:35:58+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T04:35:49+02:00| 4097e5e1 / graylog1|Cycled index alias <gl-system-events_deflector> from to <gl-system-events_0>.|
    |2023-11-07T04:35:49+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T04:35:49+02:00| 4097e5e1 / graylog1|Cycled index alias <gl-events_deflector> from to <gl-events_0>.|
    |2023-11-07T04:35:49+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T04:35:49+02:00| 4097e5e1 / graylog1|Cycled index alias <arista_deflector> from to <arista_0>.|
    |2023-11-07T04:35:49+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T04:35:49+02:00| 4097e5e1 / graylog1|Cycled index alias <apc_deflector> from to <apc_0>.|
    |2023-11-07T04:35:48+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T04:35:48+02:00| 4097e5e1 / graylog1|Cycled index alias <a10_deflector> from to <a10_0>.|
    |2023-11-07T04:35:48+02:00| 4097e5e1 / graylog1|There is no index target to point to. Creating one now.|
    |2023-11-07T03:33:18+02:00| 4097e5e1 / graylog1|Running retention strategy [org.graylog2.indexer.retention.strategies.DeletionRetentionStrategy] for indices <graylog_149>|
    |2023-11-07T03:33:18+02:00| 4097e5e1 / graylog1|Number of indices (25) higher than limit (24). Running retention for 1 indices.|
    |2023-11-07T03:30:09+02:00| 4097e5e1 / graylog1|SystemJob <1e6338d0-7d0d-11ee-acb3-0242ac150005> [org.graylog2.indexer.indices.jobs.SetIndexReadOnlyAndCalculateRangeJob] finished in 538ms.|

root@graylog ~$ curl -X GET "localhost:9200/_cluster/health"
{"cluster_name":"os-docker-cluster","status":"green","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"discovered_master":true,"discovered_cluster_manager":true,"active_primary_shards":484,"active_shards":516,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}

3. What steps have you already taken to try and solve the problem?
I deleted and recreated the indexes; it didn't help. There are no errors in the Graylog logs before the index reset, and the OpenSearch cluster is green.
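In case it helps, this is the kind of filter I run over the Graylog container logs around a reset (a rough sketch; it assumes the Compose service is named graylog):

# keep only the rotation/deflector messages, with timestamps
docker compose logs --timestamps graylog | grep -E "Cycled index alias|no index target|Retention"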

I looked at similar topics, but in those cases the problem was with Elasticsearch.

4. How can the community help?
If you need additional information, I will provide it.


Hey @totemz

Can I ask how you configured your index set? Also, someone had a similar issue with the Graylog config file. The upgrade wrote over the old file, shown here.

Check your configuration again and make sure path.data: /var/lib/opensearch is correct.
Here are a couple more commands for troubleshooting:

### See if you can find the old index sets ###
curl -XGET http://localhost:9200/_cat/indices?v
curl -XGET http://localhost:9200/_cluster/allocation/explain?pretty
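If those look fine, here are two more checks, just a sketch, so adjust the container name and paths to your setup:

### Confirm the data path the OpenSearch container is actually using ###
docker exec opensearch1 grep -E '^path\.data' /usr/share/opensearch/config/opensearch.yml

### See where every shard currently lives ###
curl -XGET "http://localhost:9200/_cat/shards?v"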

Hi @gsmith!
This is the OpenSearch configuration from docker-compose.yml:

opensearch1:
    image: "opensearchproject/opensearch:2.4.0"
    hostname: "opensearch1"
    environment:
      - "OPENSEARCH_JAVA_OPTS=-Xms10G -Xmx10G"
      - "node.name=opensearch1"
      - "cluster.name=os-docker-cluster"
      - "discovery.seed_hosts=opensearch2,opensearch3"
      - "cluster.initial_master_nodes=opensearch1,opensearch2,opensearch3"
      - "node.attr.temp: hot"
      - "bootstrap.memory_lock=true"
      - "action.auto_create_index=false"
      - "plugins.security.ssl.http.enabled=false"
      - "plugins.security.disabled=true"
    ulimits:
      memlock:
        hard: -1
        soft: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - "opensearch-data-01:/usr/share/opensearch/data"
      #- "opensearch-config-01:/usr/share/opensearch/config"
      - "/usr/apps/graylog-prod/opensearch-nodes-config/node01/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml"
    restart: "on-failure"
    ports:
      - 9200:9200
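To make sure the data survives container restarts, I also check where Docker keeps the named volume on the host (a sketch; the real volume name may carry the Compose project prefix, e.g. <project>_opensearch-data-01):

# show the host mountpoint backing the OpenSearch data volume
docker volume inspect opensearch-data-01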

Here are the curl results:

curl -XGET http://localhost:9200/_cat/indices\?v  
                                                                                                          
health status index                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   zte_0                          3Yoc9KDARtGJqzLxKrxG9A   4   0       4064            0      2.2mb          2.2mb
green  open   arista_0                       gdo4IodJRMSc76ry6x___w   4   0         28            0    192.9kb        192.9kb
green  open   hp_0                           LSsQ6AmOSBC3_RDsZsegqA   4   0        103            0    348.1kb        348.1kb
green  open   gl-events_0                    -aXkk0fNSmqYv758D-vp6g   4   0          0            0       832b           832b
green  open   nat_0                          pmt3ZLusRAadvve5bR2rEw   4   0   49128438            0     14.4gb         14.4gb
green  open   graylog_0                      td762tP5ToKEF986I9twiA   4   0     321773            0    375.3mb        375.3mb
green  open   gl-system-events_0             3R5bndG9T2y3aV81FRFRBQ   4   0          0            0       832b           832b
green  open   juniper_0                      -ksIToyeQKOGqftAr0XGPw   4   0       4470            0      2.6mb          2.6mb
green  open   .opendistro-job-scheduler-lock 7mSJVtcLTPefX6ajHQiZSg   1   1          1            0       68kb         37.1kb
green  open   a10_0                          YjjHH69JQ8eaQj5RZVzqFQ   4   0       1530            0    841.2kb        841.2kb
green  open   apc_0                          wUz4Mt8xQsWLpOU3YKth8Q   4   0          0            0       832b           832b

curl -XGET http://localhost:9200/_cluster/allocation/explain\?pretty                                                                                        
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"
  },
  "status" : 400
}


The situation with the indices has gotten worse: they are now being reset several times a day.

Thanks for the help

Hey @totemz,

What I found from testing Graylog in Docker was that after a reboot some configuration was reset. To solve this I had to set up my volumes.

For example:

graylog:
    image: graylog/graylog:4.2-jre11
    network_mode: bridge
    dns:
      - 192.168.2.15
      - 192.168.2.16
    # journal and config directories in local NFS share for persistence
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - graylog_bin:/usr/share/graylog/bin
      - graylog_data:/usr/share/graylog/data
    environment:
      # Container time Zone
      - TZ=America/Chicago
      # CHANGE ME (must be at least 16 characters)!

I have also tested mounting the Graylog configuration file to make life easier, instead of creating a bunch of environment variable configs.

I added path.data for OpenSearch in the docker-compose file (I don't know whether it will help):

- "path.data=/var/lib/opensearch"
volumes:
      - "opensearch-data-01:/var/lib/opensearch"

There are also errors that roughly coincide in time with the index resets; I don't know whether they are related.

The indexes are reset without any server or Docker reboots.

I built a new Graylog installation without Docker, following the official documentation:
OS: Debian 11
Graylog: 5.2.1
OpenSearch: 2.9.0
MongoDB: 6.0.11

I re-created the indexes, inputs, etc. I edited the Java heap settings in /etc/opensearch/jvm.options (-Xms24g -Xmx24g) and in /etc/default/graylog-server (GRAYLOG_SERVER_JAVA_OPTS="-Xms8g -Xmx8g -server -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow"), and I no longer get errors about "Journal utilization is too high" or "Uncommitted messages deleted from journal".
But the indexes are still being reset.
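For what it's worth, this is how I confirm the heap settings actually took effect (a sketch; the file paths are the ones mentioned above):

# heap options as configured on disk
grep -E '^-Xm[sx]' /etc/opensearch/jvm.options
grep GRAYLOG_SERVER_JAVA_OPTS /etc/default/graylog-server

# heap as seen by the running OpenSearch node
curl -XGET "http://localhost:9200/_nodes/jvm?pretty" | grep -E 'heap_(init|max)_in_bytes'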

The only thing the two installations have in common is the same server with a ZFS file system. Could the problem be in the file system itself? I have already run out of other options.
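If the ZFS hypothesis is worth testing, a quick health check of the pool and datasets might at least rule out the obvious (a sketch; needs root, and the pool/dataset names will differ on your host):

# pool health and any read/write/checksum errors
zpool status -v

# datasets, their mountpoints and free space
zfs list

# free space on the OpenSearch data path
df -h /var/lib/opensearch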

opensearch.log from reset to reset:

[2023-11-27T00:09:43,325][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:14:43,325][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:17:19,152][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [beats_0/JNJntwzdTsWj-rJTCFQqSg] deleting index
[2023-11-27T00:17:19,156][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [zte_0/nxtV3ZJuRLWYizbdgODE_w] deleting index
[2023-11-27T00:17:19,156][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-events_0/fR_dNHYVTZasSS7r3ZcRLA] deleting index
[2023-11-27T00:17:19,156][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [arista_0/Ukcfow09RWWgpveF_56lNg] deleting index
[2023-11-27T00:17:19,157][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [graylog_0/HXOciojkT8aKNwNwTFQs7w] deleting index
[2023-11-27T00:17:19,157][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [juniper_0/gQ-9oHWtSXKCGcf4Ju9k1g] deleting index
[2023-11-27T00:17:19,157][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [a10_0/kWCGveSxRH2jbJ15cbyD_A] deleting index
[2023-11-27T00:17:19,157][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [nat_0/d92a5UIeSKm5hjh1ZFP9MQ] deleting index
[2023-11-27T00:17:19,157][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-system-events_0/a0TgzU3kQmWt3jxZpy16xg] deleting index
[2023-11-27T00:17:19,422][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:19,422][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] get managed-index failed: [.opendistro-ism-config] IndexNotFoundException[no such index [.opendistro-ism-config]]
[2023-11-27T00:17:19,423][INFO ][o.o.i.s.GlobalCheckpointSyncAction] [node-1] [nat_0][2] global checkpoint sync failed
org.opensearch.index.IndexNotFoundException: no such index [nat_0]
	at org.opensearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:939) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:1120) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.ClusterStateObserver$ContextPreservingListener.onNewClusterState(ClusterStateObserver.java:380) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.ClusterStateObserver$ObserverClusterStateListener.postAdded(ClusterStateObserver.java:257) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.service.ClusterApplierService$1.run(ClusterApplierService.java:320) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:849) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:282) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:245) [opensearch-2.9.0.jar:2.9.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
[2023-11-27T00:17:19,425][INFO ][o.o.i.s.GlobalCheckpointSyncAction] [node-1] [nat_0][1] global checkpoint sync failed
org.opensearch.index.IndexNotFoundException: no such index [nat_0]
	at org.opensearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:939) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:1120) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.ClusterStateObserver$ContextPreservingListener.onNewClusterState(ClusterStateObserver.java:380) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.ClusterStateObserver$ObserverClusterStateListener.postAdded(ClusterStateObserver.java:257) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.service.ClusterApplierService$1.run(ClusterApplierService.java:320) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:849) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:282) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:245) [opensearch-2.9.0.jar:2.9.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
[2023-11-27T00:17:19,425][INFO ][o.o.i.s.GlobalCheckpointSyncAction] [node-1] [nat_0][3] global checkpoint sync failed
org.opensearch.index.IndexNotFoundException: no such index [nat_0]
	at org.opensearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:939) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:1120) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.ClusterStateObserver$ContextPreservingListener.onNewClusterState(ClusterStateObserver.java:380) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.ClusterStateObserver$ObserverClusterStateListener.postAdded(ClusterStateObserver.java:257) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.cluster.service.ClusterApplierService$1.run(ClusterApplierService.java:320) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:849) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:282) [opensearch-2.9.0.jar:2.9.0]
	at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:245) [opensearch-2.9.0.jar:2.9.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
[2023-11-27T00:17:27,106][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[d2oztp9ASrm7Kg7tG9VdUQ/QNgqTdMQQgWIASr35fHWCg]
[2023-11-27T00:17:27,108][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/ImJIHHulRT2tqbPY11QPTQ]
[2023-11-27T00:17:27,109][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [nat_0] creating index, cause [api], templates [nat-template], shards [4]/[0]
[2023-11-27T00:17:27,133][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/ImJIHHulRT2tqbPY11QPTQ]
[2023-11-27T00:17:27,177][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,246][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,246][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[nat_0][1], [nat_0][3], [nat_0][0]]]).
[2023-11-27T00:17:27,265][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,282][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,286][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[rN7josnHQ7Cui3N8BQ1mvA/YfYFYGcUQaq7kxLRxiGWqg]
[2023-11-27T00:17:27,288][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[a10_0/WOVSZKOZSIWKeKhOwPG5lQ]
[2023-11-27T00:17:27,289][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [a10_0] creating index, cause [api], templates [a10-template], shards [1]/[0]
[2023-11-27T00:17:27,303][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[a10_0/WOVSZKOZSIWKeKhOwPG5lQ]
[2023-11-27T00:17:27,312][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,345][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/ImJIHHulRT2tqbPY11QPTQ]
[2023-11-27T00:17:27,346][INFO ][o.o.c.m.MetadataMappingService] [node-1] [nat_0/ImJIHHulRT2tqbPY11QPTQ] update_mapping [_doc]
[2023-11-27T00:17:27,366][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,367][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[a10_0][0]]]).
[2023-11-27T00:17:27,387][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,387][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/ImJIHHulRT2tqbPY11QPTQ]
[2023-11-27T00:17:27,424][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,429][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[jyUOnApHRx6PUxwbkVqm8A/5Jx4nBARRieqVqWmOR-xHw]
[2023-11-27T00:17:27,431][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[arista_0/8DXjp8u3Q72Sy-tvFUMI9g]
[2023-11-27T00:17:27,432][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [arista_0] creating index, cause [api], templates [arista-template], shards [1]/[0]
[2023-11-27T00:17:27,457][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[arista_0/8DXjp8u3Q72Sy-tvFUMI9g]
[2023-11-27T00:17:27,466][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,512][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[arista_0][0]]]).
[2023-11-27T00:17:27,530][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,554][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,560][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[FaFCA9TBTiSsz4X_Kek1Kw/MTVlR_R2RHyT_vZQWvOrAA]
[2023-11-27T00:17:27,562][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[beats_0/4vMOi2ZpTT2LWq8vSrpa0w]
[2023-11-27T00:17:27,563][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [beats_0] creating index, cause [api], templates [beats-template], shards [4]/[0]
[2023-11-27T00:17:27,579][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[beats_0/4vMOi2ZpTT2LWq8vSrpa0w]
[2023-11-27T00:17:27,618][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,697][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,698][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[beats_0][1], [beats_0][3], [beats_0][0]]]).
[2023-11-27T00:17:27,712][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,729][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,733][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[ElnQ2mN5SaGc3Lpsl4C_CQ/P6QTBuTRSTa4U3pyxMmUcQ]
[2023-11-27T00:17:27,736][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[graylog_0/PXnO9lwURU-z1Gmt6FXzIQ]
[2023-11-27T00:17:27,737][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [graylog_0] creating index, cause [api], templates [graylog-internal], shards [1]/[0]
[2023-11-27T00:17:27,764][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[graylog_0/PXnO9lwURU-z1Gmt6FXzIQ]
[2023-11-27T00:17:27,771][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,814][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[graylog_0][0]]]).
[2023-11-27T00:17:27,828][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,846][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,850][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[8FeCHyjnQ72oi6a9cMScQw/C3RQIKdyRMG4FwVT9bHH5g]
[2023-11-27T00:17:27,853][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gl-events_0/bjoGqvbQTPSnKEQAdeb1xQ]
[2023-11-27T00:17:27,854][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [gl-events_0] creating index, cause [api], templates [gl-events-template], shards [1]/[0]
[2023-11-27T00:17:27,877][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gl-events_0/bjoGqvbQTPSnKEQAdeb1xQ]
[2023-11-27T00:17:27,887][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,929][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[gl-events_0][0]]]).
[2023-11-27T00:17:27,946][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,966][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:27,971][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gmyKn6yURpeTDqWM2TSB_Q/4eQGlbS4Ri2lQtSQWi16nw]
[2023-11-27T00:17:27,974][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gl-system-events_0/5NvyZgCeQme4Dpkw-Woyiw]
[2023-11-27T00:17:27,975][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [gl-system-events_0] creating index, cause [api], templates [gl-system-events-template], shards [1]/[0]
[2023-11-27T00:17:27,988][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gl-system-events_0/5NvyZgCeQme4Dpkw-Woyiw]
[2023-11-27T00:17:28,001][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,039][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[gl-system-events_0][0]]]).
[2023-11-27T00:17:28,061][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,075][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,081][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[uyszPyzeQyqd6XNQ5-50yg/qG-rgpaPQXu18cDR696pbg]
[2023-11-27T00:17:28,084][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[juniper_0/QJKDtjhuTy2lsrviDh1RdQ]
[2023-11-27T00:17:28,085][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [juniper_0] creating index, cause [api], templates [juniper-template], shards [1]/[0]
[2023-11-27T00:17:28,098][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[juniper_0/QJKDtjhuTy2lsrviDh1RdQ]
[2023-11-27T00:17:28,105][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,141][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[juniper_0][0]]]).
[2023-11-27T00:17:28,164][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,194][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,198][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[sODghJa7S4GLPFnjnplMbw/GVL-3mTgSZKtLRdSY_-tEQ]
[2023-11-27T00:17:28,201][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[zte_0/oJhfOvxnQLWtlxprllnnIQ]
[2023-11-27T00:17:28,201][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [zte_0] creating index, cause [api], templates [zte-template], shards [1]/[0]
[2023-11-27T00:17:28,223][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[zte_0/oJhfOvxnQLWtlxprllnnIQ]
[2023-11-27T00:17:28,238][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,265][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[zte_0][0]]]).
[2023-11-27T00:17:28,286][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,307][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:28,624][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[juniper_0/QJKDtjhuTy2lsrviDh1RdQ]
[2023-11-27T00:17:28,625][INFO ][o.o.c.m.MetadataMappingService] [node-1] [juniper_0/QJKDtjhuTy2lsrviDh1RdQ] update_mapping [_doc]
[2023-11-27T00:17:28,647][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:29,425][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[zte_0/oJhfOvxnQLWtlxprllnnIQ]
[2023-11-27T00:17:29,425][INFO ][o.o.c.m.MetadataMappingService] [node-1] [zte_0/oJhfOvxnQLWtlxprllnnIQ] update_mapping [_doc]
[2023-11-27T00:17:29,448][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:36,499][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[a10_0/WOVSZKOZSIWKeKhOwPG5lQ]
[2023-11-27T00:17:36,500][INFO ][o.o.c.m.MetadataMappingService] [node-1] [a10_0/WOVSZKOZSIWKeKhOwPG5lQ] update_mapping [_doc]
[2023-11-27T00:17:36,538][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:17:37,923][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[graylog_0/PXnO9lwURU-z1Gmt6FXzIQ]
[2023-11-27T00:17:37,924][INFO ][o.o.c.m.MetadataMappingService] [node-1] [graylog_0/PXnO9lwURU-z1Gmt6FXzIQ] update_mapping [_doc]
[2023-11-27T00:17:37,954][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:19:43,326][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:21:15,314][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[arista_0/8DXjp8u3Q72Sy-tvFUMI9g]
[2023-11-27T00:21:15,315][INFO ][o.o.c.m.MetadataMappingService] [node-1] [arista_0/8DXjp8u3Q72Sy-tvFUMI9g] update_mapping [_doc]
[2023-11-27T00:21:15,334][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T00:24:43,326][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:29:43,326][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:34:43,326][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:39:43,327][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:44:43,327][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:49:43,240][INFO ][o.o.a.t.CronTransportAction] [node-1] Start running AD hourly cron.
[2023-11-27T00:49:43,250][ERROR][o.o.a.a.AlertIndices     ] [node-1] info deleteOldIndices
[2023-11-27T00:49:43,250][ERROR][o.o.a.a.AlertIndices     ] [node-1] info deleteOldIndices
[2023-11-27T00:49:43,250][INFO ][o.o.a.a.AlertIndices     ] [node-1] No Old History Indices to delete
[2023-11-27T00:49:43,250][INFO ][o.o.a.a.AlertIndices     ] [node-1] No Old Finding Indices to delete
[2023-11-27T00:49:43,251][INFO ][o.o.a.t.ADTaskManager    ] [node-1] Start to maintain running historical tasks
[2023-11-27T00:49:43,251][INFO ][o.o.a.c.HourlyCron       ] [node-1] Hourly maintenance succeeds
[2023-11-27T00:49:43,258][INFO ][o.o.s.i.DetectorIndexManagementService] [node-1] No Old Alert Indices to delete
[2023-11-27T00:49:43,259][INFO ][o.o.s.i.DetectorIndexManagementService] [node-1] No Old Finding Indices to delete
[2023-11-27T00:49:43,327][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:54:43,327][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T00:59:43,328][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:04:43,328][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:09:43,328][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:14:43,328][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:19:43,328][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:24:43,329][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:29:43,329][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:34:43,329][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:39:43,330][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:44:43,330][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:49:43,241][INFO ][o.o.a.t.CronTransportAction] [node-1] Start running AD hourly cron.
[2023-11-27T01:49:43,242][INFO ][o.o.a.t.ADTaskManager    ] [node-1] Start to maintain running historical tasks
[2023-11-27T01:49:43,242][INFO ][o.o.a.c.HourlyCron       ] [node-1] Hourly maintenance succeeds
[2023-11-27T01:49:43,330][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:54:43,330][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T01:59:43,330][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:00:07,184][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[qKK5AsKJR2m0NpmAgEvOZQ/JWXQKusfTJim1cGTIXGZ1w]
[2023-11-27T02:00:07,185][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_1/F4tQ6VUpSruSEhAg74nCSw]
[2023-11-27T02:00:07,186][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [nat_1] creating index, cause [api], templates [nat-template], shards [4]/[0]
[2023-11-27T02:00:07,202][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_1/F4tQ6VUpSruSEhAg74nCSw]
[2023-11-27T02:00:07,239][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T02:00:07,304][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T02:00:07,305][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[nat_1][1], [nat_1][3], [nat_1][0]]]).
[2023-11-27T02:00:07,318][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T02:00:07,332][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T02:00:07,366][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_1/F4tQ6VUpSruSEhAg74nCSw]
[2023-11-27T02:00:07,367][INFO ][o.o.c.m.MetadataMappingService] [node-1] [nat_1/F4tQ6VUpSruSEhAg74nCSw] update_mapping [_doc]
[2023-11-27T02:00:07,383][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T02:00:07,384][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_1/F4tQ6VUpSruSEhAg74nCSw]
[2023-11-27T02:00:37,448][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/ImJIHHulRT2tqbPY11QPTQ]
[2023-11-27T02:00:37,449][INFO ][o.o.c.m.MetadataMappingService] [node-1] [nat_0/ImJIHHulRT2tqbPY11QPTQ] update_mapping [_doc]
[2023-11-27T02:00:37,461][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T02:00:37,462][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/ImJIHHulRT2tqbPY11QPTQ]
[2023-11-27T02:00:37,475][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T02:00:37,477][INFO ][o.o.i.c.n.f.IndexOperationActionFilter] [node-1] Add notification action listener for tasks: 5Rw963tWSiKXR-aEigsHOQ:80132685 and action: indices:admin/forcemerge 
[2023-11-27T02:04:43,331][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:09:43,331][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:14:43,331][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:19:43,331][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:24:43,332][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:29:43,332][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:34:43,332][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:39:43,332][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:44:43,333][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:49:43,241][INFO ][o.o.a.t.CronTransportAction] [node-1] Start running AD hourly cron.
[2023-11-27T02:49:43,241][INFO ][o.o.a.t.ADTaskManager    ] [node-1] Start to maintain running historical tasks
[2023-11-27T02:49:43,241][INFO ][o.o.a.c.HourlyCron       ] [node-1] Hourly maintenance succeeds
[2023-11-27T02:49:43,333][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:54:43,333][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T02:59:43,333][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:04:43,334][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:09:43,334][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:14:43,334][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:19:43,334][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:24:43,335][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:29:43,335][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:34:43,335][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:39:43,335][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:44:43,336][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:49:43,241][INFO ][o.o.a.t.CronTransportAction] [node-1] Start running AD hourly cron.
[2023-11-27T03:49:43,242][INFO ][o.o.a.t.ADTaskManager    ] [node-1] Start to maintain running historical tasks
[2023-11-27T03:49:43,242][INFO ][o.o.a.c.HourlyCron       ] [node-1] Hourly maintenance succeeds
[2023-11-27T03:49:43,336][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:54:43,336][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T03:59:43,336][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-27T04:01:05,376][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [nat_1/F4tQ6VUpSruSEhAg74nCSw] deleting index
[2023-11-27T04:01:05,376][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [arista_0/8DXjp8u3Q72Sy-tvFUMI9g] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [a10_0/WOVSZKOZSIWKeKhOwPG5lQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [zte_0/oJhfOvxnQLWtlxprllnnIQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [nat_0/ImJIHHulRT2tqbPY11QPTQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-system-events_0/5NvyZgCeQme4Dpkw-Woyiw] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-events_0/bjoGqvbQTPSnKEQAdeb1xQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [graylog_0/PXnO9lwURU-z1Gmt6FXzIQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [juniper_0/QJKDtjhuTy2lsrviDh1RdQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [beats_0/4vMOi2ZpTT2LWq8vSrpa0w] deleting index
[2023-11-27T04:01:05,634][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T04:01:05,635][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] get managed-index failed: [.opendistro-ism-config] IndexNotFoundException[no such index [.opendistro-ism-config]]
[2023-11-27T04:01:07,106][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[FzRGSKIUTCizlXtgFfZfzQ/5rzGtrELQZqAwgT97j5hmA]
[2023-11-27T04:01:07,108][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/wuDjzKVzSuKipisRBDHKmg]
[2023-11-27T04:01:07,109][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [nat_0] creating index, cause [api], templates [nat-template], shards [4]/[0]
[2023-11-27T04:01:07,122][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/wuDjzKVzSuKipisRBDHKmg]
[2023-11-27T04:01:07,155][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T04:01:07,219][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-27T04:01:07,219][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[nat_0][1], [nat_0][3], [nat_0][0]]]).

Hey @totemz,

This is odd. I took a look at your logs.

First, I saw this:

[2023-11-27T04:01:05,376][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [nat_1/F4tQ6VUpSruSEhAg74nCSw] deleting index
[2023-11-27T04:01:05,376][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [arista_0/8DXjp8u3Q72Sy-tvFUMI9g] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [a10_0/WOVSZKOZSIWKeKhOwPG5lQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [zte_0/oJhfOvxnQLWtlxprllnnIQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [nat_0/ImJIHHulRT2tqbPY11QPTQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-system-events_0/5NvyZgCeQme4Dpkw-Woyiw] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-events_0/bjoGqvbQTPSnKEQAdeb1xQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [graylog_0/PXnO9lwURU-z1Gmt6FXzIQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [juniper_0/QJKDtjhuTy2lsrviDh1RdQ] deleting index
[2023-11-27T04:01:05,377][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [beats_0/4vMOi2ZpTT2LWq8vSrpa0w] deleting index

Was that you deleting those indices? If not, what does your index rotation strategy look like?
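You can also dump the rotation and retention strategy Graylog has stored for each index set straight from the REST API, just a sketch, so swap in your own admin credentials and Graylog host/port:

### Show every index set with its rotation and retention strategy ###
curl -u admin:yourpassword -XGET "http://127.0.0.1:9000/api/system/indices/index_sets"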

Second, I see this in your logs:

[YELLOW] to [GREEN] (reason: [shards started [[nat_0][1], [nat_0][3], [nat_0][0]]])

To sum it up: index nat_1/F4tQ6VUpSruSEhAg74nCSw was deleted, but now you have nat_0 started.

That one screenshot shows your journal utilization is too high. Normally this is because of an issue with your indexer (i.e., OpenSearch), either a connection or a configuration problem.
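For the journal side, the current utilization can also be read from the Graylog API instead of the screenshot, same credential caveat as above:

### Current journal size, utilization and unread messages on this node ###
curl -u admin:yourpassword -XGET "http://127.0.0.1:9000/api/system/journal"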

Can you show your full docker-compose file?

No, the indexes are not removed manually; it just happens at random times.
The rotation looks like this:


Can you show your full docker-compose file?

I already wrote about this above:

I built a new Graylog installation without Docker, following the official documentation:
OS: Debian 11
Graylog: 5.2.1
OpenSearch: 2.9.0
MongoDB: 6.0.11

The time intervals between resets are getting shorter.

[2023-11-28T15:54:43,503][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-28T15:59:43,503][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-28T16:04:43,503][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-28T16:08:30,742][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [arista_0/xo6e0f-HRS2TtESUEwbkWQ] deleting index
[2023-11-28T16:08:30,743][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [zte_0/Maz9kQMMThmZ8kpH2DhNGw] deleting index
[2023-11-28T16:08:30,743][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [graylog_0/NcZLfLqiTbmlXVYGobzeYQ] deleting index
[2023-11-28T16:08:30,743][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-events_0/7eQwT_AVTRKMfXBRXZ3BlQ] deleting index
[2023-11-28T16:08:30,743][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-system-events_0/JI4b7emHRpSjeRrRFQEzxQ] deleting index
[2023-11-28T16:08:30,743][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [a10_0/C_XtUUm1Sd6YwYtikrV93A] deleting index
[2023-11-28T16:08:30,743][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [beats_0/Yj33IqZtROiOgbXfFTYlMA] deleting index
[2023-11-28T16:08:30,743][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [nat_0/jywfbuI_SNC0FanIB_iR7g] deleting index
[2023-11-28T16:08:30,743][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [juniper_0/JQe93S_VRAK7PlDcKxs-OQ] deleting index
[2023-11-28T16:08:30,807][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:30,807][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] get managed-index failed: [.opendistro-ism-config] IndexNotFoundException[no such index [.opendistro-ism-config]]
[2023-11-28T16:08:37,106][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[ibQOIQ8WRqy-mvXRlZaW8Q/izVa5Z0sQ-SDZsqcXPE-VQ]
[2023-11-28T16:08:37,108][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/3znr5OYaSViJP7mTOH8u7g]
[2023-11-28T16:08:37,108][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [nat_0] creating index, cause [api], templates [nat-template], shards [4]/[0]
[2023-11-28T16:08:37,122][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[nat_0/3znr5OYaSViJP7mTOH8u7g]
[2023-11-28T16:08:37,151][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,218][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,218][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[nat_0][1], [nat_0][3], [nat_0][0]]]).
[2023-11-28T16:08:37,232][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,249][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,252][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[cLBNxZ2zRQiCfUlk0DaPEg/AlcuaibcTw2t-0iYcAf8bQ]
[2023-11-28T16:08:37,254][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[a10_0/7z0n1tEgTiKOeiP80C-whg]
[2023-11-28T16:08:37,254][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [a10_0] creating index, cause [api], templates [a10-template], shards [1]/[0]
[2023-11-28T16:08:37,269][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[a10_0/7z0n1tEgTiKOeiP80C-whg]
[2023-11-28T16:08:37,277][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,304][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[a10_0][0]]]).
[2023-11-28T16:08:37,320][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,335][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,338][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[rkZRWEzDTHaGm6hLwMEeNQ/3GhKBWpMSL-KfWVS2PhYlQ]
[2023-11-28T16:08:37,340][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[arista_0/qIFrIfqxQq-B3I5kbgjT2g]
[2023-11-28T16:08:37,340][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [arista_0] creating index, cause [api], templates [arista-template], shards [1]/[0]
[2023-11-28T16:08:37,354][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[arista_0/qIFrIfqxQq-B3I5kbgjT2g]
[2023-11-28T16:08:37,363][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,395][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[arista_0][0]]]).
[2023-11-28T16:08:37,407][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,420][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,423][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[4RyaE8lnSnC8z879rQBQWg/BxoTScuzQD-iekzVuTGN4w]
[2023-11-28T16:08:37,425][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[beats_0/5Ch5BDkxQLCVbMzrCzh5Kw]
[2023-11-28T16:08:37,426][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [beats_0] creating index, cause [api], templates [beats-template], shards [4]/[0]
[2023-11-28T16:08:37,438][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[beats_0/5Ch5BDkxQLCVbMzrCzh5Kw]
[2023-11-28T16:08:37,467][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,533][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,534][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[beats_0][1], [beats_0][3], [beats_0][0]]]).
[2023-11-28T16:08:37,550][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,569][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,571][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[0bHnZU-kQLGhLseJSxAErQ/BGxbs33nRA6OKRiRmFG3ag]
[2023-11-28T16:08:37,573][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[graylog_0/Aj4-v9dgRRK6AVT1xmClCA]
[2023-11-28T16:08:37,573][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [graylog_0] creating index, cause [api], templates [graylog-internal], shards [1]/[0]
[2023-11-28T16:08:37,586][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[graylog_0/Aj4-v9dgRRK6AVT1xmClCA]
[2023-11-28T16:08:37,596][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,622][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[graylog_0][0]]]).
[2023-11-28T16:08:37,634][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,648][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,651][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gv4RtqAxQTi9tEuzKDxZBA/pdbuyJ1FRQaiV6-H3XVZhA]
[2023-11-28T16:08:37,652][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gl-events_0/HENC09rOQweHJUsPPuC10g]
[2023-11-28T16:08:37,653][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [gl-events_0] creating index, cause [api], templates [gl-events-template], shards [1]/[0]
[2023-11-28T16:08:37,668][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gl-events_0/HENC09rOQweHJUsPPuC10g]
[2023-11-28T16:08:37,682][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,725][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[gl-events_0][0]]]).
[2023-11-28T16:08:37,738][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,753][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,755][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[o1FvYl95Q6WL1j3d89Dnsw/h7NA8N6rToqxZnYOjSPRhw]
[2023-11-28T16:08:37,757][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gl-system-events_0/i9INOpiCSrScGNRVv7cyjg]
[2023-11-28T16:08:37,757][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [gl-system-events_0] creating index, cause [api], templates [gl-system-events-template], shards [1]/[0]
[2023-11-28T16:08:37,769][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[gl-system-events_0/i9INOpiCSrScGNRVv7cyjg]
[2023-11-28T16:08:37,780][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,807][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[gl-system-events_0][0]]]).
[2023-11-28T16:08:37,819][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,834][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,836][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[KL2F938XRGWPttEuZYyFIA/UhMMHJQcRmCYT3NVR52MRA]
[2023-11-28T16:08:37,838][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[juniper_0/7fXL_DuiTnCnxPnS3i8xyA]
[2023-11-28T16:08:37,838][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [juniper_0] creating index, cause [api], templates [juniper-template], shards [1]/[0]
[2023-11-28T16:08:37,850][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[juniper_0/7fXL_DuiTnCnxPnS3i8xyA]
[2023-11-28T16:08:37,857][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,903][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[juniper_0][0]]]).
[2023-11-28T16:08:37,914][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,930][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:37,932][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[6VykxMLqSCetIxsL6awrNQ/FDUCExNaQNWo0GCRHG8JEw]
[2023-11-28T16:08:37,934][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[zte_0/DHnlYg4mSxWg7NWqJ5stOA]
[2023-11-28T16:08:37,934][INFO ][o.o.c.m.MetadataCreateIndexService] [node-1] [zte_0] creating index, cause [api], templates [zte-template], shards [1]/[0]
[2023-11-28T16:08:37,963][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[zte_0/DHnlYg4mSxWg7NWqJ5stOA]
[2023-11-28T16:08:37,971][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:38,001][INFO ][o.o.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[zte_0][0]]]).
[2023-11-28T16:08:38,015][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:38,027][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:38,100][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[zte_0/DHnlYg4mSxWg7NWqJ5stOA]
[2023-11-28T16:08:38,101][INFO ][o.o.c.m.MetadataMappingService] [node-1] [zte_0/DHnlYg4mSxWg7NWqJ5stOA] update_mapping [_doc]
[2023-11-28T16:08:38,101][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[a10_0/7z0n1tEgTiKOeiP80C-whg]
[2023-11-28T16:08:38,102][INFO ][o.o.c.m.MetadataMappingService] [node-1] [a10_0/7z0n1tEgTiKOeiP80C-whg] update_mapping [_doc]
[2023-11-28T16:08:38,118][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:38,119][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[juniper_0/7fXL_DuiTnCnxPnS3i8xyA]
[2023-11-28T16:08:38,121][INFO ][o.o.c.m.MetadataMappingService] [node-1] [juniper_0/7fXL_DuiTnCnxPnS3i8xyA] update_mapping [_doc]
[2023-11-28T16:08:38,147][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:08:38,147][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[juniper_0/7fXL_DuiTnCnxPnS3i8xyA]
[2023-11-28T16:09:10,104][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[arista_0/qIFrIfqxQq-B3I5kbgjT2g]
[2023-11-28T16:09:10,108][INFO ][o.o.c.m.MetadataMappingService] [node-1] [arista_0/qIFrIfqxQq-B3I5kbgjT2g] update_mapping [_doc]
[2023-11-28T16:09:10,142][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:09:43,504][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-28T16:10:05,100][INFO ][o.o.p.PluginsService     ] [node-1] PluginService:onIndexModule index:[graylog_0/Aj4-v9dgRRK6AVT1xmClCA]
[2023-11-28T16:10:05,101][INFO ][o.o.c.m.MetadataMappingService] [node-1] [graylog_0/Aj4-v9dgRRK6AVT1xmClCA] update_mapping [_doc]
[2023-11-28T16:10:05,125][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:14:43,504][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-11-28T16:19:36,818][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [juniper_0/7fXL_DuiTnCnxPnS3i8xyA] deleting index
[2023-11-28T16:19:36,818][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [nat_0/3znr5OYaSViJP7mTOH8u7g] deleting index
[2023-11-28T16:19:36,818][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [graylog_0/Aj4-v9dgRRK6AVT1xmClCA] deleting index
[2023-11-28T16:19:36,818][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [a10_0/7z0n1tEgTiKOeiP80C-whg] deleting index
[2023-11-28T16:19:36,818][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [arista_0/qIFrIfqxQq-B3I5kbgjT2g] deleting index
[2023-11-28T16:19:36,818][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [beats_0/5Ch5BDkxQLCVbMzrCzh5Kw] deleting index
[2023-11-28T16:19:36,818][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-system-events_0/i9INOpiCSrScGNRVv7cyjg] deleting index
[2023-11-28T16:19:36,818][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [zte_0/DHnlYg4mSxWg7NWqJ5stOA] deleting index
[2023-11-28T16:19:36,819][INFO ][o.o.c.m.MetadataDeleteIndexService] [node-1] [gl-events_0/HENC09rOQweHJUsPPuC10g] deleting index
[2023-11-28T16:19:36,898][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-11-28T16:19:36,898][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] get managed-index failed: [.opendistro-ism-config] IndexNotFoundException[no such index [.opendistro-ism-config]]

Hey @totemz

I'll be honest, I'm not sure what's going on with index sets being deleted. It sounds like some configuration that is being executed. I have worked with Graylog for a while and have never heard of index sets being randomly deleted. This is strange.

The reason I asked about your docker-compose file is that it doesn't look complete, so I'm not sure exactly how your installation is configured. For a better understanding, here is mine; it's a little older, but it still works. The newer versions should work with the same settings unless there is a bug.

version: '2'
services:
  # MongoDB:
  mongodb:
    image: mongo:4
    network_mode: bridge
    # DB in share for persistence
    volumes:
      - mongo_data:/data/db
  
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    network_mode: bridge
    # data folder in share for persistence
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog:
  graylog:
    image: graylog/graylog:4.2-jre11
    network_mode: bridge
    dns:
      - 192.168.2.15
      - 192.168.2.16
    # journal and config directories in local NFS share for persistence
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - graylog_bin:/usr/share/graylog/bin
      - graylog_data:/usr/share/graylog/data
    environment:
      # Container time Zone
      - TZ=America/Chicago
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=pJod1TRZAckHmqM2oQPqX1qnLVJS99jHm2DuCux2Bpiuu2XLTZuyb2YW9eHiKLTifjy7cLpeWIjWgMtnwZf6Q79HW2nonDhN
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f
      - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
      - GRAYLOG_HTTP_EXTERNAL_URI=http://192.168.1.28:9000/
      - GRAYLOG_ROOT_TIMEZONE=America/Chicago
      - GRAYLOG_ROOT_EMAIL=greg.smith@domain.com
      - GRAYLOG_HTTP_PUBLISH_URI=http://192.168.1.28:9000/
      - GRAYLOG_TRANSPORT_EMAIL_PROTOCOL=smtp
      - GRAYLOG_HTTP_ENABLE_CORS=true
      - GRAYLOG_TRANSPORT_EMAIL_WEB_INTERFACE_URL=http://192.168.1.28:9000/
      - GRAYLOG_TRANSPORT_EMAIL_HOSTNAME=192.168.1.28
      - GRAYLOG_TRANSPORT_EMAIL_ENABLED=true
      - GRAYLOG_TRANSPORT_EMAIL_PORT=25
      - GRAYLOG_TRANSPORT_EMAIL_USE_AUTH=false
      - GRAYLOG_TRANSPORT_EMAIL_USE_TLS=false
      - GRAYLOG_TRANSPORT_EMAIL_USE_SSL=false
      - GRAYLOG_TRANSPORT_FROM_EMAIL=root@localhost
      - GRAYLOG_TRANSPORT_SUBJECT_PREFIX=[graylog]
      - GRAYLOG_REPORT_DISABLE_SANDBOX=true
      
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 8514:8514
      # Syslog UDP
      - 8514:8514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
      # Reports
      - 9515:9515
      - 9515:9515/udp
      # email
      - 25:25
      - 25:25/udp     
       
#Volumes for persisting data
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
  graylog_bin:
    driver: local
  graylog_data:
    driver: local
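If you want to try a stack like this, something along the following lines should bring it up and let you follow Graylog's startup (hypothetical usage; on older installations use docker-compose instead of docker compose):

docker compose up -d
docker compose logs -f graylog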

EDIT: I went over some troubleshooting notes I had and realized you have a three-node cluster, which makes sense since you're pushing 20K msg/s. Maybe you can get some insight into your cluster status with this:

curl -XGET 'https://127.0.0.1:9200/_cluster/stats?pretty'
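If the OpenSearch security plugin is enabled (the default in the opensearchproject images), the same request needs credentials and, with the bundled self-signed certificate, the -k flag; the user and password below are placeholders for whatever your cluster actually uses:

curl -k -u admin:<admin-password> -XGET 'https://127.0.0.1:9200/_cluster/stats?pretty'
curl -k -u admin:<admin-password> -XGET 'https://127.0.0.1:9200/_cat/indices?v'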

Was this working before or did this just happen?

To check whether the problem is with Docker, I stopped all Docker containers and installed Graylog directly on Debian 11 without Docker, then created the indices and inputs again. The situation with the indices is the same: they are reset. The OpenSearch log I posted above is already from the installation without Docker; I wrote about it above.

I set up a new Graylog installation without Docker, following the official documentation:
OS: Debian 11
Graylog: 5.2.1
OpenSearch: 2.9.0
MongoDB: 6.0.11

I re-created the indices, inputs, etc. I also adjusted the Java settings: in /etc/opensearch/jvm.options I set -Xms24g and -Xmx24g, and in /etc/default/graylog-server I set GRAYLOG_SERVER_JAVA_OPTS="-Xms8g -Xmx8g -server -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow". Since then I no longer get the "Journal utilization is too high" or "Uncommitted messages deleted from journal" errors.
But the indices are still reset.
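For reference, the two heap settings described above look like this in the respective files (values taken from this post; size them to your own hardware):

# /etc/opensearch/jvm.options
-Xms24g
-Xmx24g

# /etc/default/graylog-server
GRAYLOG_SERVER_JAVA_OPTS="-Xms8g -Xmx8g -server -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow"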

The only thing these two installations have in common is the same server with a ZFS file system. Could the problem be in the file system itself? I've already run out of other ideas.

It worked in a VM with Graylog 4, Elasticsearch, and MongoDB 4, but the VM could not cope with that amount of data and its processors did not have AVX support.
To run Graylog 5 and MongoDB 6, I deployed a new installation on a physical server with sufficient resources and AVX-capable processors. Since then the indices have been getting reset; the longest period without a reset was a month. As I wrote earlier, this happens both with the Docker installation and without Docker.

As @gsmith says, GL doesn’t randomly delete your indices. Nor does OpenSearch.
This could be malicious activity. Please review security of your OS installation.
Reminds me of this:
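A quick way to check whether the cluster is reachable without credentials from outside (hypothetical commands; replace <server-ip> with your host's public address):

# see which interfaces OpenSearch is listening on
ss -tlnp | grep -E '9200|9300'
# if this returns an index listing without asking for credentials, the cluster is wide open
curl -s 'http://<server-ip>:9200/_cat/indices?v'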


Hi @patrickmann,
Thank you! That looks like the solution to the problem. I secured my OpenSearch installation on Debian and the indices are no longer being reset. Now I will run the Docker installation with these parameters and report back with the result.
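The exact hardening steps weren't posted, but a minimal sketch of the kind of measures meant here (assuming a Debian host with ufw and a stock OpenSearch package) would be:

# bind OpenSearch to loopback only, in /etc/opensearch/opensearch.yml:
#   network.host: 127.0.0.1
# keep the security plugin enabled so every request requires credentials,
# and block the HTTP/transport ports from untrusted networks:
sudo ufw deny 9200/tcp
sudo ufw deny 9300/tcp
sudo systemctl restart opensearch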

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.