[Graylog 2.3.2] Shard and deflector errors

Hi all,

I’ve installed 3 Graylog nodes (each node runs Graylog, ES, and Mongo).

The versions are:

  • Graylog : 2.3.2
  • ES : 5.6.5
  • Mongo : 3.4.10

Topology:

  • 1 master node
  • 2 slave nodes

I’m seeing several errors:

  • In the WebUI [System/Overview]:
    ** Deflector exists as an index and is not an alias (see the alias check below)
    ** Elasticsearch cluster in yellow status with unassigned shards
    ** System messages with: Deflector is pointing to [null], not the newest one: [graylog_0]. Re-pointing.
  • In the logs:
    ** ERROR [IndexRotationThread] Couldn’t point deflector to a new index … Couldn’t collect aliases for index pattern graylog_*
    ** Index not found for query: graylog_2
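
To confirm the deflector problem, the alias list can be checked directly; graylog_deflector should appear here as an alias, not as an index:

curl -XGET localhost:9200/_cat/aliases?v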

If I use curl to list the shards:

curl -XGET localhost:9200/_cat/shards?v

The output shows a mix of STARTED and UNASSIGNED shards.
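
For reference, ES 5.x can also explain why a shard stays unassigned (without a request body, this reports on the first unassigned shard it finds):

curl -XGET localhost:9200/_cluster/allocation/explain?pretty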

If I use curl to list the indices:

curl localhost:9200/_cat/indices?v

I see the Graylog indices:

graylog_0
graylog_1
graylog_2

All indices are in yellow state and open status.
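
The overall state can be cross-checked with the cluster health API:

curl -XGET localhost:9200/_cluster/health?pretty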

Does anyone have an idea how to resolve these errors?

Thanks

How did you install Graylog, Elasticsearch, and MongoDB?
What’s the full configuration of all Graylog, Elasticsearch, and MongoDB nodes?
What’s your network topology?

Hi @jochen,

Node information:

I’m getting the “Sorry, new users can only put 2 links in a post.” error. What regex do you use to detect links? I tried several different posts but keep hitting the error…

Thanks

Try using proper formatting for code blocks and text snippets (see Markdown Reference) or post your configuration files and logs to a paste service such as https://gist.github.com/ or https://0bin.net/.

Hi @jochen,

Node01:

[root@node01 ~]# cat /etc/graylog/server/log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="org.graylog2.log4j" shutdownHook="disable">
    <Appenders>
        <RollingFile name="rolling-file" fileName="/var/log/graylog-server/server.log" filePattern="/var/log/graylog-server/server.log.%i.gz">
            <PatternLayout pattern="%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} %-5p [%c{1}] %m%n"/>
            <Policies>
                <SizeBasedTriggeringPolicy size="50MB"/>
            </Policies>
            <DefaultRolloverStrategy max="10" fileIndex="min"/>
        </RollingFile>

        <!-- Internal Graylog log appender. Please do not disable. This makes internal log messages available via REST calls. -->
        <Memory name="graylog-internal-logs" bufferSize="500"/>
    </Appenders>
    <Loggers>
        <!-- Application Loggers -->
        <Logger name="org.graylog2" level="info"/>
        <Logger name="com.github.joschi.jadconfig" level="warn"/>
        <!-- This emits a harmless warning for ActiveDirectory every time which we can't work around :( -->
        <Logger name="org.apache.directory.api.ldap.model.message.BindRequestImpl" level="error"/>
        <!-- Prevent DEBUG message about Lucene Expressions not found. -->
        <Logger name="org.elasticsearch.script" level="warn"/>
        <!-- Disable messages from the version check -->
        <Logger name="org.graylog2.periodical.VersionCheckThread" level="off"/>
        <!-- Suppress crazy byte array dump of Drools -->
        <Logger name="org.drools.compiler.kie.builder.impl.KieRepositoryImpl" level="warn"/>
        <!-- Silence chatty natty -->
        <Logger name="com.joestelmach.natty.Parser" level="warn"/>
        <!-- Silence Kafka log chatter -->
        <Logger name="kafka.log.Log" level="warn"/>
        <Logger name="kafka.log.OffsetIndex" level="warn"/>
        <!-- Silence useless session validation messages -->
        <Logger name="org.apache.shiro.session.mgt.AbstractValidatingSessionManager" level="warn"/>
        <Root level="warn">
            <AppenderRef ref="rolling-file"/>
            <AppenderRef ref="graylog-internal-logs"/>
        </Root>
    </Loggers>
</Configuration>

[root@node01 ~]# cat /etc/graylog/server/server.conf | grep -v "^#"
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = ****************
root_password_sha2 = ****************
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.0.0.1:9000/api/
web_listen_uri = http://10.0.0.1:9000/
elasticsearch_hosts = http://10.0.0.1:9200,http://10.0.0.2:9200,http://10.0.0.3:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 50000000
elasticsearch_max_number_of_indices = 50
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://10.0.0.1:27017,10.0.0.2:27017,10.0.0.3:27017/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32

[root@node01 ~]# cat /etc/graylog/server/node-id
c006c84e-51e6-****************-91d0396e9673

[root@node01 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "^#"
cluster.name: graylog
node.name: node-01
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: ["127.0.0.1","10.0.0.1"]

[root@node01 ~]# cat /etc/elasticsearch/jvm.options | grep -v "^#"
-Xms2g
-Xmx2g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+AlwaysPreTouch
-server
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-Djdk.io.permissionsUseCanonicalPath=true
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true
-XX:+HeapDumpOnOutOfMemoryError

[root@node01 ~]# cat /etc/elasticsearch/log4j2.properties | grep -v "^#"
status = error
logger.action.name = org.elasticsearch.action
logger.action.level = debug
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

Node02:

[root@node02 ~]# cat /etc/graylog/server/log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="org.graylog2.log4j" shutdownHook="disable">
    <Appenders>
        <RollingFile name="rolling-file" fileName="/var/log/graylog-server/server.log" filePattern="/var/log/graylog-server/server.log.%i.gz">
            <PatternLayout pattern="%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} %-5p [%c{1}] %m%n"/>
            <Policies>
                <SizeBasedTriggeringPolicy size="50MB"/>
            </Policies>
            <DefaultRolloverStrategy max="10" fileIndex="min"/>
        </RollingFile>

        <!-- Internal Graylog log appender. Please do not disable. This makes internal log messages available via REST calls. -->
        <Memory name="graylog-internal-logs" bufferSize="500"/>
    </Appenders>
    <Loggers>
        <!-- Application Loggers -->
        <Logger name="org.graylog2" level="info"/>
        <Logger name="com.github.joschi.jadconfig" level="warn"/>
        <!-- This emits a harmless warning for ActiveDirectory every time which we can't work around :( -->
        <Logger name="org.apache.directory.api.ldap.model.message.BindRequestImpl" level="error"/>
        <!-- Prevent DEBUG message about Lucene Expressions not found. -->
        <Logger name="org.elasticsearch.script" level="warn"/>
        <!-- Disable messages from the version check -->
        <Logger name="org.graylog2.periodical.VersionCheckThread" level="off"/>
        <!-- Suppress crazy byte array dump of Drools -->
        <Logger name="org.drools.compiler.kie.builder.impl.KieRepositoryImpl" level="warn"/>
        <!-- Silence chatty natty -->
        <Logger name="com.joestelmach.natty.Parser" level="warn"/>
        <!-- Silence Kafka log chatter -->
        <Logger name="kafka.log.Log" level="warn"/>
        <Logger name="kafka.log.OffsetIndex" level="warn"/>
        <!-- Silence useless session validation messages -->
        <Logger name="org.apache.shiro.session.mgt.AbstractValidatingSessionManager" level="warn"/>
        <Root level="warn">
            <AppenderRef ref="rolling-file"/>
            <AppenderRef ref="graylog-internal-logs"/>
        </Root>
    </Loggers>
</Configuration>

[root@node02 ~]# cat /etc/graylog/server/server.conf | grep -v "^#"
is_master = false
node_id_file = /etc/graylog/server/node-id
password_secret = ****************
root_password_sha2 = ****************
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.0.0.2:9000/api/
web_listen_uri = http://10.0.0.2:9000/
elasticsearch_hosts = http://10.0.0.1:9200,http://10.0.0.2:9200,http://10.0.0.3:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 50000000
elasticsearch_max_number_of_indices = 50
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://10.0.0.1:27017,10.0.0.2:27017,10.0.0.3:27017/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32

[root@node02 ~]# cat /etc/graylog/server/node-id
5259c1a5-9580-****************-435f094f5285

[root@node02 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "^#"
cluster.name: graylog
node.name: node-02
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: ["127.0.0.1","10.0.0.2"]

[root@node02 ~]# cat /etc/elasticsearch/jvm.options | grep -v "^#"
-Xms2g
-Xmx2g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+AlwaysPreTouch
-server
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-Djdk.io.permissionsUseCanonicalPath=true
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true
-XX:+HeapDumpOnOutOfMemoryError

[root@node02 ~]# cat /etc/elasticsearch/log4j2.properties | grep -v "^#"
status = error
logger.action.name = org.elasticsearch.action
logger.action.level = debug
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

Node03:

[root@node03 ~]# cat /etc/graylog/server/log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="org.graylog2.log4j" shutdownHook="disable">
    <Appenders>
        <RollingFile name="rolling-file" fileName="/var/log/graylog-server/server.log" filePattern="/var/log/graylog-server/server.log.%i.gz">
            <PatternLayout pattern="%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} %-5p [%c{1}] %m%n"/>
            <Policies>
                <SizeBasedTriggeringPolicy size="50MB"/>
            </Policies>
            <DefaultRolloverStrategy max="10" fileIndex="min"/>
        </RollingFile>

        <!-- Internal Graylog log appender. Please do not disable. This makes internal log messages available via REST calls. -->
        <Memory name="graylog-internal-logs" bufferSize="500"/>
    </Appenders>
    <Loggers>
        <!-- Application Loggers -->
        <Logger name="org.graylog2" level="info"/>
        <Logger name="com.github.joschi.jadconfig" level="warn"/>
        <!-- This emits a harmless warning for ActiveDirectory every time which we can't work around :( -->
        <Logger name="org.apache.directory.api.ldap.model.message.BindRequestImpl" level="error"/>
        <!-- Prevent DEBUG message about Lucene Expressions not found. -->
        <Logger name="org.elasticsearch.script" level="warn"/>
        <!-- Disable messages from the version check -->
        <Logger name="org.graylog2.periodical.VersionCheckThread" level="off"/>
        <!-- Suppress crazy byte array dump of Drools -->
        <Logger name="org.drools.compiler.kie.builder.impl.KieRepositoryImpl" level="warn"/>
        <!-- Silence chatty natty -->
        <Logger name="com.joestelmach.natty.Parser" level="warn"/>
        <!-- Silence Kafka log chatter -->
        <Logger name="kafka.log.Log" level="warn"/>
        <Logger name="kafka.log.OffsetIndex" level="warn"/>
        <!-- Silence useless session validation messages -->
        <Logger name="org.apache.shiro.session.mgt.AbstractValidatingSessionManager" level="warn"/>
        <Root level="warn">
            <AppenderRef ref="rolling-file"/>
            <AppenderRef ref="graylog-internal-logs"/>
        </Root>
    </Loggers>
</Configuration>

[root@node03 ~]# cat /etc/graylog/server/server.conf | grep -v "^#"
is_master = false
node_id_file = /etc/graylog/server/node-id
password_secret = ****************
root_password_sha2 = ****************
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.0.0.3:9000/api/
web_listen_uri = http://10.0.0.3:9000/
elasticsearch_hosts = http://10.0.0.1:9200,http://10.0.0.2:9200,http://10.0.0.3:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 50000000
elasticsearch_max_number_of_indices = 50
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://10.0.0.1:27017,10.0.0.2:27017,10.0.0.3:27017/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32

[root@node03 ~]# cat /etc/graylog/server/node-id
8ffa439a-b57d-****************-c61578ee3443

[root@node03 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "^#"
cluster.name: graylog
node.name: node-03
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: ["127.0.0.1","10.0.0.3"]

[root@node03 ~]# cat /etc/elasticsearch/jvm.options | grep -v "^#"
-Xms2g
-Xmx2g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+AlwaysPreTouch
-server
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-Djdk.io.permissionsUseCanonicalPath=true
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true
-XX:+HeapDumpOnOutOfMemoryError

[root@node03 ~]# cat /etc/elasticsearch/log4j2.properties | grep -v "^#"
status = error
logger.action.name = org.elasticsearch.action
logger.action.level = debug
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

Try adding action.auto_create_index: false to your Elasticsearch configuration files and removing all indices in Elasticsearch.
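
For example, add this to each elasticsearch.yml:

action.auto_create_index: false

and then remove the indices (a sketch; adjust the host if Elasticsearch isn’t listening on localhost):

curl -XDELETE 'http://localhost:9200/graylog_*'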

I added this line to the ES configuration file on all nodes, ran an XDELETE on graylog_*, and restarted ES + graylog-server.

Now I get this:

Node 01: tail -f server.log

2017-12-14T12:29:53.636+01:00 INFO  [MongoIndexSet] Did not find a deflector alias. Setting one up now.
2017-12-14T12:29:53.641+01:00 INFO  [MongoIndexSet] Pointing to already existing index target <graylog_2>
2017-12-14T12:30:03.615+01:00 WARN  [IndexRotationThread] Deflector is pointing to [null], not the newest one: [graylog_2]. Re-pointing.
2017-12-14T12:30:03.635+01:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
org.graylog2.indexer.ElasticsearchException: Couldn't switch alias graylog_deflector from index null to index graylog_2

[alias_action] failed to parse field [remove]
        at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:94) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:58) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:63) ~[graylog.jar:?]
        at org.graylog2.indexer.indices.Indices.cycleAlias(Indices.java:608) ~[graylog.jar:?]
        at org.graylog2.indexer.MongoIndexSet.pointTo(MongoIndexSet.java:357) ~[graylog.jar:?]
        at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:166) ~[graylog.jar:?]
        at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
        at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_151]
        at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
        at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_151]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_151]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_151]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_151]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

		
2017-12-14T12:33:44.766+01:00 ERROR [MessageCountRotationStrategy] Unknown index, cannot perform rotation
org.graylog2.indexer.IndexNotFoundException: Couldn't check stats of index graylog_3

Index not found for query: graylog_3. Try recalculating your index ranges.
        at org.graylog2.indexer.cluster.jest.JestUtils.buildIndexNotFoundException(JestUtils.java:124) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:80) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:58) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:63) ~[graylog.jar:?]
        at org.graylog2.indexer.indices.Indices.indexStats(Indices.java:255) ~[graylog.jar:?]
        at org.graylog2.indexer.indices.Indices.numberOfMessages(Indices.java:215) ~[graylog.jar:?]
        at org.graylog2.indexer.rotation.strategies.MessageCountRotationStrategy.shouldRotate(MessageCountRotationStrategy.java:66) ~[graylog.jar:?]
        at org.graylog2.indexer.rotation.strategies.MessageCountRotationStrategy.shouldRotate(MessageCountRotationStrategy.java:34) ~[graylog.jar:?]
        at org.graylog2.indexer.rotation.strategies.AbstractRotationStrategy.rotate(AbstractRotationStrategy.java:67) ~[graylog.jar:?]
        at org.graylog2.periodical.IndexRotationThread.checkForRotation(IndexRotationThread.java:113) ~[graylog.jar:?]
        at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:77) ~[graylog.jar:?]
        at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_151]
        at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
        at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_151]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_151]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_151]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_151]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
2017-12-14T12:33:44.766+01:00 ERROR [AbstractRotationStrategy] Cannot perform rotation of index <graylog_3> in index set <Default index set> with strategy <org.graylog2.indexer.rotation.strategies.MessageCountRotationStrategy> at this moment

Also, I don’t have any logs for ES, even though path.logs is configured on each node (elasticsearch.yml).

In the WebUI System/Overview, I have the same errors:

  • Deflector exists as an index and is not an alias.
  • Shards unassigned
  • System messages: Deflector is pointing to [null], not the newest one: [graylog_2]. Re-pointing.

Delete all indices and index aliases in Elasticsearch, then start Graylog.

Node 01:

# systemctl stop elasticsearch.service && systemctl stop graylog-server
# systemctl start elasticsearch.service

# curl -XGET http://127.0.0.1:9200/_aliases
{"graylog_3":{"aliases":{}}}

# curl -XGET http://127.0.0.1:9200/_cat/indices/
yellow open graylog_3 AaLmm0ORQdaOfS9mRMwLMQ 4 1 0 0 764b 764b

# curl -XDELETE http://localhost:9200/graylog_*/
{"acknowledged":true}

# systemctl start graylog-server && tail -f /var/log/graylog-server/server.log

WebUI System overview:

  • Elasticsearch cluster is yellow. Shards: 8 active, 0 initializing, 0 relocating, 8 unassigned, What does this mean?

System messages:

  • There is no index target to point to. Creating one now.
  • And after that: Deflector is pointing to [null], not the newest one: [graylog_0]. Re-pointing.

WebUI search page (intermittent issue):

Could not execute search

There was an error executing your search. Please check your Graylog server logs for more information.

Error Message:
    Unable to perform search query. Index not found for query: graylog_0. Try recalculating your index ranges.
Details:
    Index not found for query: graylog_0. Try recalculating your index ranges.
Search status code:
    500
Search response:
    cannot GET http://10.0.0.1:9000/api/search/universal/relative?query=%2A&range=300&limit=150&sort=timestamp%3Adesc (500)

I try "System/Indices/Default index set/Maintenance/Recalculate index range.
After this, I check system messages for verify if the job is start status :

SystemJob <c72da410-e0e1-11e7-9c71-5254006210e0> [org.graylog2.indexer.ranges.RebuildIndexRangesJob] finished in 60ms.
Done calculating index ranges for 1 indices. Took 21ms.
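
For reference, the same rebuild can also be triggered through the Graylog REST API (a sketch, assuming admin credentials):

curl -u admin:password -XPOST http://10.0.0.1:9000/api/system/indices/ranges/rebuild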

Trying the search again:

About one search out of two still fails with the same error.

Node 02:

# systemctl stop elasticsearch.service && systemctl stop graylog-server
# systemctl start elasticsearch.service

# curl -XGET http://127.0.0.1:9200/_aliases
{"graylog_2":{"aliases":{"graylog_deflector":{}}},"graylog_1":{"aliases":{}},"graylog_0":{"aliases":{}}}

# curl -XDELETE http://127.0.0.1:9200/graylog_2/_alias/_all
{"acknowledged":true}

# curl -XGET http://127.0.0.1:9200/_cat/indices/
yellow open graylog_2 KGhy4-10TaqCCHXRriKjXQ 4 2 0 0 764b 764b
yellow open graylog_1 vu3-qoxMQWK8ZHyLuJobfw 4 2 0 0 764b 764b
yellow open graylog_0 6N2nbSuMQSSJEumi0S_xew 4 2 0 0 764b 764b

# curl -XDELETE http://localhost:9200/graylog_*/
{"acknowledged":true}

# systemctl start graylog-server && tail -f /var/log/graylog-server/server.log

Node 03:

# systemctl stop elasticsearch.service && systemctl stop graylog-server
# systemctl start elasticsearch.service

# curl -XGET http://127.0.0.1:9200/_aliases
{"graylog_0":{"aliases":{}},"graylog_2":{"aliases":{"graylog_deflector":{}}}}

# curl -XDELETE http://127.0.0.1:9200/graylog_2/_alias/_all
{"acknowledged":true}

# curl -XGET http://127.0.0.1:9200/_cat/indices/
yellow open graylog_0 Y1KM_ercT3aCplDICAFEOA 4 2 0 0   764b   764b
yellow open graylog_2 VQW1ZgFVRaWK97K1GPG0Eg 4 2 9 0 24.5kb 24.5kb

# curl -XDELETE http://localhost:9200/graylog_*/
{"acknowledged":true}

# systemctl start graylog-server && tail -f /var/log/graylog-server/server.log

@jochen, do you have an idea about the error on the search page?

Thanks

Your Elasticsearch nodes don’t seem to form a cluster but each runs for itself, ignoring all other nodes.

Check the logs of your Elasticsearch nodes and make sure your configuration is correct.
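
With an elasticsearch.yml like the ones posted above, each Elasticsearch node only discovers itself. Presumably something like this is missing from each node’s elasticsearch.yml (unicast hosts listing all three nodes, and a master quorum of 2 out of 3):

discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
discovery.zen.minimum_master_nodes: 2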

@jochen

I only have graylog.log in /var/log/elasticsearch. As I said before: I don’t have any other logs for ES, even though path.logs is configured on each node (elasticsearch.yml).

And what’s wrong with that?

What’s wrong is that I’ve configured path.logs: /var/log/elasticsearch in elasticsearch.yml, so it’s strange not to see any log files there.

But I checked all the ES configuration files on the 3 nodes, and I don’t see any errors.
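
A quick way to check whether the three ES instances actually formed one cluster is the nodes listing; if each node only shows itself, they haven’t:

curl -XGET localhost:9200/_cat/nodes?v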

In the WebUI, I see 87 failed indexing attempts in the last 24 hours.

In the system messages, I see the same error: “Deflector is pointing to [null], not the newest one: [graylog_0|1]. Re-pointing.”

When I run curl -XGET localhost:9200/_cluster/state?pretty, I get:

"routing_table" :  {
  "indices" : {
    "graylog_0": {
      "shards" : {
        "1" : [
           {
             "state" : "UNASSIGNED" 
             "primary" : true,
             "node" : null,
             "relocating_node" : null,
             "shard" : 1,
             "index" : "graylog_0",
             "recovery_source" : {
               "type" : "EXISTING_STORE"
             },
             unassigned_info" : {
               "reason" : "CLUSTER_RECOVERED",
               "at" : time
               "delayed" : false,
               "allocation_status" : "no_valid_shard_copy"

....


Do you have an idea @jochen?

Thanks

You should read the message that Elasticsearch is giving you; together with Google, that should work.

Maybe you disabled shard allocation and just need to re-enable it?
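
For example, something like:

curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'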

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.