After digging further through the Elasticsearch logs and forums, I ran the cluster health check
curl -X GET "localhost:9200/_cluster/health?pretty"
and got the following:
{
  "cluster_name" : "graylog",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 996,
  "active_shards" : 996,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 4,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 99.6
}
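If you want to script this check, the status field can be pulled out of the health response without installing jq. A minimal sketch (the JSON is hard-coded from the response above so the snippet is self-contained; against a live node you would substitute the curl call shown in the comment):

```shell
# Health response from above, hard-coded for illustration; live use would be:
#   health=$(curl -s localhost:9200/_cluster/health)
health='{"cluster_name":"graylog","status":"red","timed_out":false,"unassigned_shards":4}'

# Extract the "status" field with sed (no jq dependency).
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"
# prints: cluster status: red
```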
The status was red, so I searched for what that meant and then ran the following to list the unassigned shards
curl -s "localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED
graylog_87 1 p UNASSIGNED ALLOCATION_FAILED
graylog_87 2 p UNASSIGNED ALLOCATION_FAILED
graylog_87 3 p UNASSIGNED ALLOCATION_FAILED
graylog_87 0 p UNASSIGNED ALLOCATION_FAILED
(I added -s here to suppress curl's progress meter, which otherwise gets interleaved with the piped output.)
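That _cat output is plain whitespace-separated columns, so it is easy to aggregate. A small sketch that tallies unassigned shards per index (the four lines are inlined from above so it runs standalone; live use would pipe the curl command into awk instead):

```shell
# Tally UNASSIGNED shards per index from _cat/shards output
# (columns: index shard prirep state unassigned.reason).
awk '$4 == "UNASSIGNED" { n[$1]++ } END { for (i in n) print i, n[i] }' <<'EOF'
graylog_87 1 p UNASSIGNED ALLOCATION_FAILED
graylog_87 2 p UNASSIGNED ALLOCATION_FAILED
graylog_87 3 p UNASSIGNED ALLOCATION_FAILED
graylog_87 0 p UNASSIGNED ALLOCATION_FAILED
EOF
# prints: graylog_87 4
```

Here all four unassigned shards belong to a single index, which is what makes the eventual fix (dealing with just that index) so contained.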
and then asked the cluster to explain the allocation failure
curl -XGET "localhost:9200/_cluster/allocation/explain?pretty"
{
  "index" : "graylog_87",
  "shard" : 1,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "ALLOCATION_FAILED",
    "at" : "2021-12-20T20:47:46.055Z",
    "failed_allocation_attempts" : 5,
    "details" : "failed shard on node [O6_sxqoVR3CDeTBoZoG2uw]: failed recovery, failure RecoveryFailedException[[graylog_87][1]: Recovery failed on {graylog}{O6_sxqoVR3CDeTBoZoG2uw}{-En-X4ftSOOMMZ6Q0impDA}{127.0.0.1}{127.0.0.1:9300}{dimr}]; nested: IndexShardRecoveryException[failed recovery]; nested: TranslogCorruptedException[translog from source [/media/data/elasticsearch/nodes/0/indices/D6EoAlY0S5-GpdU30weTrg/1/translog] is corrupted]; nested: IllegalStateException[pre-1.4 translog found [/media/data/elasticsearch/nodes/0/indices/D6EoAlY0S5-GpdU30weTrg/1/translog/translog-216.tlog]]; ",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes that hold an in-sync shard copy",
  "node_allocation_decisions" : [
    {
      "node_id" : "O6_sxqoVR3CDeTBoZoG2uw",
      "node_name" : "graylog",
      "transport_address" : "127.0.0.1:9300",
      "node_decision" : "no",
      "store" : {
        "in_sync" : true,
        "allocation_id" : "g8Hq5B3KSL6prD2LooaZVg"
      },
      "deciders" : [
        {
          "decider" : "max_retry",
          "decision" : "NO",
          "explanation" : "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2021-12-20T20:47:46.055Z], failed_attempts[5], failed_nodes[[O6_sxqoVR3CDeTBoZoG2uw]], delayed=false, details[failed shard on node [O6_sxqoVR3CDeTBoZoG2uw]: failed recovery, failure RecoveryFailedException[[graylog_87][1]: Recovery failed on {graylog}{O6_sxqoVR3CDeTBoZoG2uw}{-En-X4ftSOOMMZ6Q0impDA}{127.0.0.1}{127.0.0.1:9300}{dimr}]; nested: IndexShardRecoveryException[failed recovery]; nested: TranslogCorruptedException[translog from source [/media/data/elasticsearch/nodes/0/indices/D6EoAlY0S5-GpdU30weTrg/1/translog] is corrupted]; nested: IllegalStateException[pre-1.4 translog found [/media/data/elasticsearch/nodes/0/indices/D6EoAlY0S5-GpdU30weTrg/1/translog/translog-216.tlog]]; ], allocation_status[deciders_no]]]"
        }
      ]
    }
  ]
}
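Note that the max_retry decider itself suggests POST /_cluster/reroute?retry_failed=true, but that only helps for transient failures; here the underlying TranslogCorruptedException means recovery would simply fail again on retry. A small sketch of that decision, grepping the explain details (the details string below is abbreviated from the output above for illustration):

```shell
# Failure detail from the explain output above (abbreviated for illustration).
details='failed recovery ... TranslogCorruptedException[translog is corrupted] ...'

# _cluster/reroute?retry_failed=true clears the max_retry block, but a
# corrupted translog will fail allocation again on every retry.
if printf '%s' "$details" | grep -q 'TranslogCorruptedException'; then
  echo "translog corrupted: retry_failed alone will not fix this"
else
  echo "transient failure: try curl -XPOST 'localhost:9200/_cluster/reroute?retry_failed=true'"
fi
# prints: translog corrupted: retry_failed alone will not fix this
```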
So the root cause was a corrupted translog on graylog_87 that Elasticsearch could not recover. The solution I found online was to delete the affected index, so I ran
curl -XDELETE 'localhost:9200/graylog_87/'
(Note that this permanently discards the data in graylog_87; for an index worth keeping, the elasticsearch-shard remove-corrupted-data tool may be a less destructive option. Since this was an old rotated Graylog log index, deleting it was acceptable here.)
I ran the health check again, the status was back to green, and the Graylog web interface stopped continuously showing those system messages.