Deflector exists as an index and is not an alias. GL 3.3

Hey,

Despite trying out a number of approaches I remain stuck with "Deflector exists as an index and is not an alias. (triggered 8 hours ago)". Even after deleting all indices I still see this; no fix.

curl -XGET localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason
graylog_1 3 p UNASSIGNED INDEX_CREATED
graylog_1 1 p UNASSIGNED INDEX_CREATED
graylog_1 2 p UNASSIGNED INDEX_CREATED
graylog_1 0 p UNASSIGNED INDEX_CREATED
graylog_0 3 p UNASSIGNED INDEX_CREATED
graylog_0 1 p UNASSIGNED INDEX_CREATED
graylog_0 2 p UNASSIGNED INDEX_CREATED
graylog_0 0 p UNASSIGNED INDEX_CREATED
gl-events_0 3 p UNASSIGNED INDEX_CREATED
gl-events_0 1 p UNASSIGNED INDEX_CREATED
gl-events_0 2 p UNASSIGNED INDEX_CREATED
gl-events_0 0 p UNASSIGNED INDEX_CREATED
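
For reference, asking Elasticsearch itself why one of these primaries stays unassigned should narrow things down; a sketch only, just picking the first shard from the list above:

# ask ES to explain the first unassigned shard it finds
curl -s -XGET 'localhost:9200/_cluster/allocation/explain?pretty'
# or explain one specific primary from the list above
curl -s -H 'Content-Type: application/json' -XGET 'localhost:9200/_cluster/allocation/explain?pretty' -d '{"index": "graylog_1", "shard": 3, "primary": true}'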

----------------------------------------- indicator of a configuration flaw, but everything seems to be fine
WARN [BufferSynchronizerService] Elasticsearch is unavailable. Not waiting to clear buffers and caches, as we have no healthy cluster.
----------------------------------------- updated
Whichever index set I look into, I get:

We could not get the indices overview information. This usually means there was a problem connecting to Elasticsearch, and **you should ensure Elasticsearch is up and reachable from Graylog**.

Graylog will continue storing your messages in its journal, but you will not be able to search on them until Elasticsearch is reachable again.

Despite that indication, at least the cluster state can be read:

Elasticsearch cluster

The possible Elasticsearch cluster states and more related information is available in the [Graylog documentation](https://docs.graylog.org/en/3.3/pages/configuration/elasticsearch.html).

Elasticsearch cluster is yellow.  Shards: 0 active, 0 initializing, 0 relocating, 16 unassigned, [What does this mean?](https://docs.graylog.org/en/3.3/pages/configuration/elasticsearch.html#cluster-status-explained)
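
The same information can be read straight from Elasticsearch, which at least takes Graylog out of the equation (plain health checks, nothing Graylog-specific assumed):

# overall cluster health, as reported by Elasticsearch itself
curl -s 'localhost:9200/_cluster/health?pretty'
# per-index health and document counts
curl -s 'localhost:9200/_cat/indices?v'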

----------------------- After a fresh install (apt remove for elasticsearch, graylog and mongo)

● graylog-server.service - Graylog server
Loaded: loaded (/lib/systemd/system/graylog-server.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-06-16 09:46:53 CEST; 25s ago
Docs: http://docs.graylog.org/
Main PID: 3891 (graylog-server)
Tasks: 110 (limit: 4654)
Memory: 694.2M
CGroup: /system.slice/graylog-server.service
├─3891 /bin/sh /usr/share/graylog-server/bin/graylog-server
└─3910 /usr/bin/java -Xms1g -Xmx1g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThr

Jun 16 09:46:53 machine systemd[1]: Started Graylog server.
Jun 16 09:46:54 machine graylog-server[3891]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Jun 16 09:46:54 machine graylog-server[3891]: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
Jun 16 09:46:55 machine graylog-server[3891]: WARNING: An illegal reflective access operation has occurred
Jun 16 09:46:55 machine graylog-server[3891]: WARNING: Illegal reflective access by com.google.inject.assistedinject.FactoryProvider2$MethodHandleWrapper (file:/usr/share/graylog-server/graylog.ja
Jun 16 09:46:55 machine graylog-server[3891]: WARNING: Please consider reporting this to the maintainers of com.google.inject.assistedinject.FactoryProvider2$MethodHandleWrapper
Jun 16 09:46:55 machine graylog-server[3891]: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
Jun 16 09:46:55 machine graylog-server[3891]: WARNING: All illegal access operations will be denied in a future release

The status is now: Elasticsearch cluster is yellow. Shards: 0 active, 0 initializing, 0 relocating, 12 unassigned

The number of unassigned shards was reduced from 20 to 12 by checking the output of
curl http://localhost:9200/_aliases?pretty
then stopping graylog and deleting any _deflector index.
Now the list is clean, but no luck resolving the unreachable or unhealthy status of ES as seen from GL.

Fix the unassigned shards and the cluster will be green.

I have stopped graylog again and removed every index I saw in the output of
curl http://localhost:9200/_aliases?pretty
The status for ES is now green, but the deflector issue persists.

Now at: cannot allocate because allocation is not permitted to any of the nodes
Maybe because I run ES and GL in a single-node setup?
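
A sketch of what this usually comes down to, assuming a default single-node install: a single node can never allocate replica shards, and primaries that are "not permitted" on any node typically point at cluster.routing.allocation.enable having been changed, or at the disk watermarks. The settings calls below are generic examples, not something taken from this box:

# check whether shard allocation has been restricted cluster-wide
curl -s 'localhost:9200/_cluster/settings?pretty&include_defaults=true&flat_settings=true' | grep routing.allocation.enable
# reset the setting to its default ("all") if it was changed
curl -s -H 'Content-Type: application/json' -XPUT 'localhost:9200/_cluster/settings' -d '{"persistent": {"cluster.routing.allocation.enable": null}}'
# on a single node, drop replicas to 0 so the cluster can go green
curl -s -H 'Content-Type: application/json' -XPUT 'localhost:9200/_all/_settings' -d '{"index.number_of_replicas": 0}'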

Thanks, but this is where I was just before; I cannot find a fix for the allocation notification.

you need to find the reason why elasticsearch can’t allocate the indices. The log should tell you what the problem is.

I mean the elasticsearch log.

Just checked, nothing sensible. This reminds me of why I left graylog the last time: it is not just graylog one has to learn, it is also the fragility and the interoperation between the components. I feel there is still a lot of room for improvement in how graylog handles elasticsearch. For less experienced users it would save a lot of time, and a lot of pressure on the forum.

ERROR [Messages] Failed to index [4] messages. Please check the index error log in your web interface for the reason. Error: One or more of the items in the Bulk request failed, check BulkResult.getItems() for more information.

The “error log in your web interface” in Graylog shows the following, many many many times, while graylog_deflector should normally be an alias:

|2 hours ago|graylog_deflector|6923e540-b06f-11ea-9ed3-7e053dcd3d04|{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_expression","resource.id":"graylog_deflector","index_uuid":"na","index":"graylog_deflector"}|
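
What I understand should be checked here, as a sketch (assuming the default index prefix graylog): whether graylog_deflector currently exists as a concrete index rather than as an alias, and if so, removing it with Graylog stopped so that Graylog can recreate it as an alias pointing at the newest graylog_N index:

# is graylog_deflector an alias (expected) or a concrete index (the error case)?
curl -s 'localhost:9200/_cat/aliases?v'
curl -s 'localhost:9200/_cat/indices/graylog_deflector?v'
# if it shows up as a concrete index: stop Graylog first, then delete it
sudo systemctl stop graylog-server
curl -s -XDELETE 'localhost:9200/graylog_deflector'
sudo systemctl start graylog-server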

Currently Graylog remains without errors (in the green), until I open System / Indices and click on an index.

What stumps me is that it lists 4 indices but states: Total: 0 indices, 0 documents, 0B

After a reboot, the server now comes up in a “green” state with no change in effective usability
DEBUG … All shards failed for phase: [dfs]

Checking both /var/lib/elasticsearch and /var/lib/graylog/ I notice the indices may not even be there; the total size is 48KB and 1.9MB respectively, of which 1.6MB is for the journal alone.

With no data in place I did rm -Rf on both /var/lib/elasticsearch/* and /var/lib/graylog/*, to no avail.
The service is now in the green, but still not one index is created, despite having created an index using the Graylog web interface. This newly created index also results in an error stating the index could not be found when accessing it under System / Indices.

the elasticsearch logfile should tell you why the indices are not created at all.
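
For a deb/rpm install that would typically be under /var/log/elasticsearch/, with the file named after cluster.name (paths assumed, adjust to your setup):

# default log location for a deb/rpm install
sudo tail -n 200 /var/log/elasticsearch/*.log
# or the service log captured by systemd
sudo journalctl -u elasticsearch --no-pager -n 200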

But what Elasticsearch version did you try to use with Graylog?

using ES 6.8.10

Those logs are a mess if you are not accustomed to them.

curl -X PUT "localhost:9200/someindex?pretty"
{
  "acknowledged" : true,
  "shards_acknowledged" : false,
  "index" : "logs_and_events"
}
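
If I read it correctly, shards_acknowledged: false means the index was added to the cluster state but its shards did not start within the timeout, which matches the allocation problem above. Checking the health of just that index (sketch; substitute the name that was actually created):

# shard-level health for the newly created index
curl -s 'localhost:9200/_cluster/health/someindex?pretty&level=shards'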

Tried every connectivity test possible; I do not see why graylog insists on being unable to reach ES while they both run on localhost on the same system.

Found: [IndexFieldTypePollerPeriodical] Active write index for index set “Default index set” (5edd25147c826c2f5211a7fe) doesn’t exist yet

Even after a completely fresh install using the pre-existing configuration, I get this INFO when starting graylog:

INFO [IndexRangesCleanupPeriodical] Skipping index range cleanup because the Elasticsearch cluster is unreachable or unhealthy
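
Graylog's own REST API also exposes how it currently sees the Elasticsearch cluster, which can be more telling than the UI; a sketch, assuming the default API listener on 127.0.0.1:9000 and the admin user:

# the indexer cluster health Graylog is acting on (URL and credentials are assumptions)
curl -s -u admin:yourpassword 'http://127.0.0.1:9000/api/system/indexer/cluster/health?pretty=true'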

Bizarrely enough, graylog seems to ‘remember’ the entire previous configuration and the failed indexing attempts. I do not understand this at all, since I erased what I could find for both ES and GL.

Reinstalling again, this time also clearing out anything mongodb-org.

Started over by hunting for artefacts left behind by mongod, elasticsearch-oss and graylog-server.
The setup is now clean again (finally), and there is ONE apparent error which is likely the root cause:

Could not retrieve global index stats.

Fetching global index stats failed: cannot GET http://127.0.0.1:9000/api/system/indices/index_sets/stats (500)

Status = fixed. I got this result only after repeated reinstallations; no way to salvage the data was found, although it appears to still be available. For this test setup that is not important.

Context: a single-node setup hosting graylog, elasticsearch and mongod.

Configuration changes leading to the fix (a consolidated sketch follows below):

elasticsearch

node.data: true

graylog

commented out both:

#elasticsearch_shards = 4
#elasticsearch_replicas = 0
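
For reference, a consolidated sketch of where these lines live on a stock deb install; only node.data and the two commented Graylog lines come from this thread, the rest is shown as context and may differ on other setups:

# /etc/elasticsearch/elasticsearch.yml
node.data: true

# /etc/graylog/server/server.conf
elasticsearch_hosts = http://127.0.0.1:9200
#elasticsearch_shards = 4
#elasticsearch_replicas = 0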

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.