Graylog unable to connect to AWS Elasticsearch cluster


(NGerealy@08) #1

I have set up graylog-server following the instructions for Ubuntu 16.04, and everything works fine. Now I would like to point graylog-server to an AWS ES cluster. The only configuration I have changed in server.conf is this -

elasticsearch_hosts = [https://vpc-aws-es-cluster.com:443]

This started throwing the below error on the web interface -

Could not load field information
Loading field information failed with status: cannot GET https://graylog.xxxx/api/system/fields (500)

Below is the version information -

  • Graylog Version: 2.4
  • Elasticsearch Version: 5.6

I tried with ES version 2.3 as well; still the same issue.

I am able to curl the ES cluster and print the health status as follows -

{
  "cluster_name" : "xxxx",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 4,
  "active_shards" : 4,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
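
For reference, the health output above comes from a request along these lines (the endpoint hostname is the same placeholder used earlier):

curl -XGET 'https://vpc-aws-es-cluster.com:443/_cluster/health?pretty'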

I have attached the complete server.log file to this thread.

Please let me know if I need to make any other configuration changes.
I could not find clear documentation on integrating Graylog with an AWS ES cluster; please point me to it if it already exists.

Thank you.


(Jan Doberstein) #2

Did you use the same connection string for the curl command and for elasticsearch_hosts?
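
One way to rule out a copy/paste difference is to test with the exact value from the config file, for example (paths are the Ubuntu package defaults, adjust as needed):

grep ^elasticsearch_hosts /etc/graylog/server/server.conf
curl -XGET 'https://vpc-aws-es-cluster.com:443/_cluster/health?pretty'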


(NGerealy@08) #3

Yes, I did! Is there any other configuration I need to change?


(Jan Doberstein) #4

No need for any other configuration setting.

It should work, but with the given information no additional hints on how to solve this are possible.

What would be helpful are the server.conf and a server.log that includes the startup of Graylog, which might reveal what is working and what is not.
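
On a deb-based install those should be in the package default locations, so something like the following would capture them (paths are assumptions based on the defaults):

cat /etc/graylog/server/server.conf
tail -n 500 /var/log/graylog-server/server.log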


(NGerealy@08) #5

Below is the server.log

2018-08-30T06:52:44.329Z INFO  [CmdLineTool] Loaded plugin: AWS plugins 2.4.6 [org.graylog.aws.plugin.AWSPlugin]
2018-08-30T06:52:44.332Z INFO  [CmdLineTool] Loaded plugin: Elastic Beats Input 2.4.6 [org.graylog.plugins.beats.BeatsInputPlugin]
2018-08-30T06:52:44.332Z INFO  [CmdLineTool] Loaded plugin: CEF Input 2.4.6 [org.graylog.plugins.cef.CEFInputPlugin]
2018-08-30T06:52:44.333Z INFO  [CmdLineTool] Loaded plugin: Collector 2.4.6 [org.graylog.plugins.collector.CollectorPlugin]
2018-08-30T06:52:44.334Z INFO  [CmdLineTool] Loaded plugin: Enterprise Integration Plugin 2.4.6 [org.graylog.plugins.enterprise_integration.EnterpriseIntegrationPlugin]
2018-08-30T06:52:44.334Z INFO  [CmdLineTool] Loaded plugin: MapWidgetPlugin 2.4.6 [org.graylog.plugins.map.MapWidgetPlugin]
2018-08-30T06:52:44.335Z INFO  [CmdLineTool] Loaded plugin: NetFlow Plugin 2.4.6 [org.graylog.plugins.netflow.NetFlowPlugin]
2018-08-30T06:52:44.342Z INFO  [CmdLineTool] Loaded plugin: Pipeline Processor Plugin 2.4.6 [org.graylog.plugins.pipelineprocessor.ProcessorPlugin]
2018-08-30T06:52:44.342Z INFO  [CmdLineTool] Loaded plugin: Threat Intelligence Plugin 2.4.6 [org.graylog.plugins.threatintel.ThreatIntelPlugin]
2018-08-30T06:52:44.788Z INFO  [CmdLineTool] Running with JVM arguments: -Xms1g -Xmx1g -XX:NewRatio=1 -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb
2018-08-30T06:52:45.035Z INFO  [Version] HV000001: Hibernate Validator 5.1.3.Final
2018-08-30T06:52:47.195Z INFO  [InputBufferImpl] Message journal is enabled.
2018-08-30T06:52:47.221Z INFO  [NodeId] Node ID: eb5e4833-1dad-4838-8a34-3d4497d618f8
2018-08-30T06:52:47.416Z INFO  [LogManager] Loading logs.
2018-08-30T06:52:47.476Z INFO  [LogManager] Logs loading complete.
2018-08-30T06:52:47.477Z INFO  [KafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal
2018-08-30T06:52:47.491Z INFO  [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy <BlockingWaitStrategy>, running 2 parallel message handlers.
2018-08-30T06:52:47.511Z INFO  [cluster] Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=5000}
2018-08-30T06:52:47.562Z INFO  [cluster] No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2018-08-30T06:52:47.590Z INFO  [connection] Opened connection [connectionId{localValue:1, serverValue:13123}] to localhost:27017
2018-08-30T06:52:47.594Z INFO  [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 6, 6]}, minWireVersion=0, maxWireVersion=6, maxDocumentSize=16777216, roundTripTimeNanos=1630824}
2018-08-30T06:52:47.601Z INFO  [connection] Opened connection [connectionId{localValue:2, serverValue:13124}] to localhost:27017
2018-08-30T06:52:48.093Z INFO  [AbstractJestClient] Setting server pool to a list of 1 servers: [https://vpc-graylog-xxx.amazonaws.com:443]
2018-08-30T06:52:48.100Z INFO  [JestClientFactory] Using multi thread/connection supporting pooling connection manager
2018-08-30T06:52:48.224Z INFO  [JestClientFactory] Using custom ObjectMapper instance
2018-08-30T06:52:48.232Z INFO  [JestClientFactory] Node Discovery disabled...
2018-08-30T06:52:48.233Z INFO  [JestClientFactory] Idle connection reaping disabled...
2018-08-30T06:52:48.673Z INFO  [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2018-08-30T06:52:50.533Z INFO  [RulesEngineProvider] No static rules file loaded.
2018-08-30T06:52:50.557Z INFO  [connection] Opened connection [connectionId{localValue:3, serverValue:13125}] to localhost:27017
2018-08-30T06:52:50.695Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2018-08-30T06:52:50.702Z INFO  [OutputBuffer] Initialized OutputBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2018-08-30T06:52:50.724Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2018-08-30T06:52:50.738Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2018-08-30T06:52:50.750Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2018-08-30T06:52:50.763Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2018-08-30T06:52:50.948Z INFO  [ServerBootstrap] Graylog server 2.4.6+ceaa7e4 starting up
2018-08-30T06:52:50.948Z INFO  [ServerBootstrap] JRE: Oracle Corporation 1.8.0_181 on Linux 4.4.0-1061-aws
2018-08-30T06:52:50.948Z INFO  [ServerBootstrap] Deployment: deb
2018-08-30T06:52:50.949Z INFO  [ServerBootstrap] OS: Ubuntu 16.04.5 LTS (xenial)
2018-08-30T06:52:50.949Z INFO  [ServerBootstrap] Arch: amd64
2018-08-30T06:52:50.982Z INFO  [PeriodicalsService] Starting 25 periodicals ...
2018-08-30T06:52:50.982Z INFO  [Periodicals] Starting [org.graylog2.periodical.ThroughputCalculator] periodical in [0s], polling every [1s].
2018-08-30T06:52:50.982Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlertScannerThread] periodical in [10s], polling every [60s].
2018-08-30T06:52:50.983Z INFO  [Periodicals] Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling every [1s].
2018-08-30T06:52:50.984Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [120s], polling every [20s].
2018-08-30T06:52:50.989Z INFO  [Periodicals] Starting [org.graylog2.periodical.ContentPackLoaderPeriodical] periodical, running forever.
2018-08-30T06:52:50.989Z INFO  [Periodicals] Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
2018-08-30T06:52:50.990Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2018-08-30T06:52:50.991Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRetentionThread] periodical in [0s], polling every [300s].
2018-08-30T06:52:50.991Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRotationThread] periodical in [0s], polling every [10s].
2018-08-30T06:52:50.992Z INFO  [Periodicals] Starting [org.graylog2.periodical.NodePingThread] periodical in [0s], polling every [1s].
2018-08-30T06:52:50.992Z INFO  [Periodicals] Starting [org.graylog2.periodical.VersionCheckThread] periodical in [300s], polling every [1800s].
2018-08-30T06:52:50.992Z INFO  [Periodicals] Starting [org.graylog2.periodical.ThrottleStateUpdaterThread] periodical in [1s], polling every [1s].
2018-08-30T06:52:50.993Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
2018-08-30T06:52:50.993Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventCleanupPeriodical] periodical in [0s], polling every [86400s].
2018-08-30T06:52:50.993Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterIdGeneratorPeriodical] periodical, running forever.
2018-08-30T06:52:50.994Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesMigrationPeriodical] periodical, running forever.
2018-08-30T06:52:50.994Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], polling every [3600s].
2018-08-30T06:52:50.998Z INFO  [connection] Opened connection [connectionId{localValue:4, serverValue:13126}] to localhost:27017
2018-08-30T06:52:50.999Z INFO  [connection] Opened connection [connectionId{localValue:5, serverValue:13127}] to localhost:27017
2018-08-30T06:52:51.002Z INFO  [connection] Opened connection [connectionId{localValue:7, serverValue:13129}] to localhost:27017
2018-08-30T06:52:51.003Z INFO  [connection] Opened connection [connectionId{localValue:6, serverValue:13128}] to localhost:27017
2018-08-30T06:52:51.011Z INFO  [connection] Opened connection [connectionId{localValue:8, serverValue:13130}] to localhost:27017
2018-08-30T06:52:51.012Z INFO  [connection] Opened connection [connectionId{localValue:9, serverValue:13131}] to localhost:27017
2018-08-30T06:52:51.033Z INFO  [connection] Opened connection [connectionId{localValue:10, serverValue:13132}] to localhost:27017
2018-08-30T06:52:51.058Z INFO  [PeriodicalsService] Not starting [org.graylog2.periodical.UserPermissionMigrationPeriodical] periodical. Not configured to run on this node.
2018-08-30T06:52:51.059Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlarmCallbacksMigrationPeriodical] periodical, running forever.
2018-08-30T06:52:51.072Z INFO  [Periodicals] Starting [org.graylog2.periodical.ConfigurationManagementPeriodical] periodical, running forever.
2018-08-30T06:52:51.083Z INFO  [Periodicals] Starting [org.graylog2.periodical.LdapGroupMappingMigration] periodical, running forever.
2018-08-30T06:52:51.089Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexFailuresPeriodical] periodical, running forever.
2018-08-30T06:52:51.092Z INFO  [Periodicals] Starting [org.graylog2.periodical.TrafficCounterCalculator] periodical in [0s], polling every [1s].
2018-08-30T06:52:51.101Z INFO  [Periodicals] Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
2018-08-30T06:52:51.108Z INFO  [Periodicals] Starting [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] periodical in [0s], polling every [3600s].
2018-08-30T06:52:51.139Z INFO  [LegacyDefaultStreamMigration] Legacy default stream has no connections, no migration needed.
2018-08-30T06:52:51.143Z INFO  [LookupTableService] Data Adapter otx-api-domain/5b7b32f4d0149b77291711dc [@ea7f3b3] STARTING
2018-08-30T06:52:51.143Z INFO  [LookupTableService] Data Adapter abuse-ch-ransomware-domains/5b7b32f4d0149b77291711db [@49dad4b2] STARTING
2018-08-30T06:52:51.143Z INFO  [LookupTableService] Data Adapter abuse-ch-ransomware-ip/5b7b32f4d0149b77291711de [@6b1db3d4] STARTING
2018-08-30T06:52:51.143Z INFO  [LookupTableService] Data Adapter otx-api-ip/5b7b32f4d0149b77291711dd [@2a0cf862] STARTING
2018-08-30T06:52:51.147Z INFO  [LookupTableService] Data Adapter tor-exit-node/5b7b32f4d0149b77291711df [@3f7d2a0d] STARTING
2018-08-30T06:52:51.148Z INFO  [LookupTableService] Data Adapter whois/5b7b32f4d0149b77291711d9 [@35516813] STARTING
2018-08-30T06:52:51.149Z WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-08-30T06:52:51.153Z INFO  [LookupTableService] Data Adapter spamhaus-drop/5b7b32f4d0149b77291711da [@275ff4ec] STARTING
2018-08-30T06:52:51.150Z ERROR [LookupDataAdapter] Couldn't start data adapter <abuse-ch-ransomware-domains/5b7b32f4d0149b77291711db/@49dad4b2>
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Abuse.ch service is disabled, not starting adapter. To enable it please go to System / Configurations.
	at org.graylog.plugins.threatintel.adapters.abusech.AbuseChRansomAdapter.doStart(AbuseChRansomAdapter.java:80) ~[?:?]
	at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
	at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
	at com.google.common.util.concurrent.Callables$4.run(Callables.java:122) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2018-08-30T06:52:51.148Z ERROR [LookupDataAdapter] Couldn't start data adapter <spamhaus-drop/5b7b32f4d0149b77291711da/@275ff4ec>
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Spamhaus service is disabled, not starting (E)DROP adapter. To enable it please go to System / Configurations.
	at org.graylog.plugins.threatintel.adapters.spamhaus.SpamhausEDROPDataAdapter.doStart(SpamhausEDROPDataAdapter.java:68) ~[?:?]
	at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
	at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
	at com.google.common.util.concurrent.Callables$4.run(Callables.java:122) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2018-08-30T06:52:51.147Z ERROR [LookupDataAdapter] Couldn't start data adapter <tor-exit-node/5b7b32f4d0149b77291711df/@3f7d2a0d>
org.graylog.plugins.threatintel.tools.AdapterDisabledException: TOR service is disabled, not starting TOR exit addresses adapter. To enable it please go to System / Configurations.
	at org.graylog.plugins.threatintel.adapters.tor.TorExitNodeDataAdapter.doStart(TorExitNodeDataAdapter.java:73) ~[?:?]
	at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
	at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
	at com.google.common.util.concurrent.Callables$4.run(Callables.java:122) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2018-08-30T06:52:51.156Z INFO  [LookupTableService] Data Adapter whois/5b7b32f4d0149b77291711d9 [@35516813] RUNNING
2018-08-30T06:52:51.159Z WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2018-08-30T06:52:51.163Z INFO  [LookupTableService] Data Adapter tor-exit-node/5b7b32f4d0149b77291711df [@3f7d2a0d] RUNNING
2018-08-30T06:52:51.151Z ERROR [LookupDataAdapter] Couldn't start data adapter <abuse-ch-ransomware-ip/5b7b32f4d0149b77291711de/@6b1db3d4>
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Abuse.ch service is disabled, not starting adapter. To enable it please go to System / Configurations.
	at org.graylog.plugins.threatintel.adapters.abusech.AbuseChRansomAdapter.doStart(AbuseChRansomAdapter.java:80) ~[?:?]
	at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
	at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
	at com.google.common.util.concurrent.Callables$4.run(Callables.java:122) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2018-08-30T06:52:51.165Z INFO  [LookupTableService] Data Adapter abuse-ch-ransomware-domains/5b7b32f4d0149b77291711db [@49dad4b2] RUNNING
2018-08-30T06:52:51.179Z INFO  [LookupTableService] Data Adapter spamhaus-drop/5b7b32f4d0149b77291711da [@275ff4ec] RUNNING
2018-08-30T06:52:51.180Z INFO  [LookupTableService] Data Adapter abuse-ch-ransomware-ip/5b7b32f4d0149b77291711de [@6b1db3d4] RUNNING
2018-08-30T06:52:51.180Z INFO  [LookupTableService] Data Adapter otx-api-ip/5b7b32f4d0149b77291711dd [@2a0cf862] RUNNING
2018-08-30T06:52:51.180Z INFO  [LookupTableService] Data Adapter otx-api-domain/5b7b32f4d0149b77291711dc [@ea7f3b3] RUNNING
2018-08-30T06:52:51.185Z INFO  [LookupTableService] Cache whois-cache/5b7b32f4d0149b77291711d4 [@46ce4d0c] STARTING
2018-08-30T06:52:51.185Z INFO  [LookupTableService] Cache otx-api-ip-cache/5b7b32f4d0149b77291711d6 [@20c65304] STARTING
2018-08-30T06:52:51.186Z INFO  [LookupTableService] Cache spamhaus-e-drop-cache/5b7b32f4d0149b77291711d7 [@6e8d6228] STARTING
2018-08-30T06:52:51.190Z INFO  [LookupTableService] Cache whois-cache/5b7b32f4d0149b77291711d4 [@46ce4d0c] RUNNING
2018-08-30T06:52:51.190Z INFO  [LookupTableService] Cache otx-api-ip-cache/5b7b32f4d0149b77291711d6 [@20c65304] RUNNING
2018-08-30T06:52:51.190Z INFO  [LookupTableService] Cache spamhaus-e-drop-cache/5b7b32f4d0149b77291711d7 [@6e8d6228] RUNNING
2018-08-30T06:52:51.193Z INFO  [LookupTableService] Cache threat-intel-uncached-adapters/5b7b32f4d0149b77291711d3 [@6d7ef992] STARTING
2018-08-30T06:52:51.193Z INFO  [LookupTableService] Cache otx-api-domain-cache/5b7b32f4d0149b77291711d5 [@3a6f01f2] STARTING
2018-08-30T06:52:51.199Z INFO  [LookupTableService] Cache threat-intel-uncached-adapters/5b7b32f4d0149b77291711d3 [@6d7ef992] RUNNING
2018-08-30T06:52:51.199Z INFO  [LookupTableService] Cache otx-api-domain-cache/5b7b32f4d0149b77291711d5 [@3a6f01f2] RUNNING
2018-08-30T06:52:51.218Z INFO  [LookupTableService] Starting lookup table otx-api-ip/5b7b32f4d0149b77291711e1 [@5c996388] using cache otx-api-ip-cache/5b7b32f4d0149b77291711d6 [@20c65304], data adapter otx-api-ip/5b7b32f4d0149b77291711dd [@2a0cf862]
2018-08-30T06:52:51.218Z INFO  [LookupTableService] Starting lookup table tor-exit-node-list/5b7b32f4d0149b77291711e2 [@36d194af] using cache threat-intel-uncached-adapters/5b7b32f4d0149b77291711d3 [@6d7ef992], data adapter tor-exit-node/5b7b32f4d0149b77291711df [@3f7d2a0d]
2018-08-30T06:52:51.218Z INFO  [LookupTableService] Starting lookup table whois/5b7b32f4d0149b77291711e3 [@273c3bf6] using cache whois-cache/5b7b32f4d0149b77291711d4 [@46ce4d0c], data adapter whois/5b7b32f4d0149b77291711d9 [@35516813]
2018-08-30T06:52:51.218Z INFO  [LookupTableService] Starting lookup table abuse-ch-ransomware-ip/5b7b32f4d0149b77291711e4 [@4721877f] using cache threat-intel-uncached-adapters/5b7b32f4d0149b77291711d3 [@6d7ef992], data adapter abuse-ch-ransomware-ip/5b7b32f4d0149b77291711de [@6b1db3d4]
2018-08-30T06:52:51.219Z INFO  [LookupTableService] Starting lookup table spamhaus-drop/5b7b32f4d0149b77291711e5 [@50b33930] using cache spamhaus-e-drop-cache/5b7b32f4d0149b77291711d7 [@6e8d6228], data adapter spamhaus-drop/5b7b32f4d0149b77291711da [@275ff4ec]
2018-08-30T06:52:51.219Z INFO  [LookupTableService] Starting lookup table abuse-ch-ransomware-domains/5b7b32f4d0149b77291711e6 [@6a7dbb65] using cache threat-intel-uncached-adapters/5b7b32f4d0149b77291711d3 [@6d7ef992], data adapter abuse-ch-ransomware-domains/5b7b32f4d0149b77291711db [@49dad4b2]
2018-08-30T06:52:51.219Z INFO  [LookupTableService] Starting lookup table otx-api-domain/5b7b32f4d0149b77291711e7 [@2c58de57] using cache otx-api-domain-cache/5b7b32f4d0149b77291711d5 [@3a6f01f2], data adapter otx-api-domain/5b7b32f4d0149b77291711dc [@ea7f3b3]
2018-08-30T06:52:51.469Z ERROR [ConfigurationManagementPeriodical] Error while running migration <V20170607164210_MigrateReopenedIndicesToAliases{2017-06-07T16:42:10Z}>
org.graylog2.indexer.ElasticsearchException: Couldn't read cluster state for reopened indices [graylog_*]


	at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:95) ~[graylog.jar:?]
	at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:57) ~[graylog.jar:?]
	at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:62) ~[graylog.jar:?]
	at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.getReopenedIndices(V20170607164210_MigrateReopenedIndicesToAliases.java:88) ~[graylog.jar:?]
	at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.getReopenedIndices(V20170607164210_MigrateReopenedIndicesToAliases.java:135) ~[graylog.jar:?]
	at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.lambda$upgrade$0(V20170607164210_MigrateReopenedIndicesToAliases.java:78) ~[graylog.jar:?]
	at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) ~[?:1.8.0_181]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_181]
	at java.util.Collections$2.tryAdvance(Collections.java:4717) ~[?:1.8.0_181]
	at java.util.Collections$2.forEachRemaining(Collections.java:4725) ~[?:1.8.0_181]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_181]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_181]
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_181]
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_181]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_181]
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_181]
	at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.upgrade(V20170607164210_MigrateReopenedIndicesToAliases.java:80) ~[graylog.jar:?]
	at org.graylog2.periodical.ConfigurationManagementPeriodical.doRun(ConfigurationManagementPeriodical.java:43) [graylog.jar:?]
	at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2018-08-30T06:52:51.617Z INFO  [JerseyService] Enabling CORS for HTTP endpoint
2018-08-30T06:53:02.119Z INFO  [NetworkListener] Started listener bound to [127.0.0.1:9000]
2018-08-30T06:53:02.120Z INFO  [HttpServer] [HttpServer] Started.
2018-08-30T06:53:02.121Z INFO  [JerseyService] Started REST API at <http://127.0.0.1:9000/api/>
2018-08-30T06:53:02.121Z INFO  [JerseyService] Started Web Interface at <http://127.0.0.1:9000/>
2018-08-30T06:53:02.122Z INFO  [ServerBootstrap] Services started, startup times in ms: {OutputSetupService [RUNNING]=11, KafkaJournal [RUNNING]=12, BufferSynchronizerService [RUNNING]=12, InputSetupService [RUNNING]=56, JournalReader [RUNNING]=109, StreamCacheService [RUNNING]=124, ConfigurationEtagService [RUNNING]=125, PeriodicalsService [RUNNING]=142, LookupTableService [RUNNING]=256, JerseyService [RUNNING]=11147}
2018-08-30T06:53:02.127Z INFO  [ServiceManagerListener] Services are healthy
2018-08-30T06:53:02.127Z INFO  [InputSetupService] Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
2018-08-30T06:53:02.130Z INFO  [ServerBootstrap] Graylog server up and running.
2018-08-30T06:53:02.152Z INFO  [InputStateListener] Input [GELF HTTP/5b7c7790d0149b26555cb366] is now STARTING
2018-08-30T06:53:02.207Z WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input GELFHttpInput{title=Gelf-http, type=org.graylog2.inputs.gelf.http.GELFHttpInput, nodeId=null} should be 1048576 but is 212992.
2018-08-30T06:53:02.209Z INFO  [InputStateListener] Input [GELF HTTP/5b7c7790d0149b26555cb366] is now RUNNING


(NGerealy@08) #6

Below is the server.conf

############################
# GRAYLOG CONFIGURATION FILE
############################
#
# This is the Graylog configuration file. The file has to use ISO 8859-1/Latin-1 character encoding.
# Characters that cannot be directly represented in this encoding can be written using Unicode escapes
# as defined in https://docs.oracle.com/javase/specs/jls/se8/html/jls-3.html#jls-3.3, using the \u prefix.
# For example, \u002c.
# 
# * Entries are generally expected to be a single line of the form, one of the following:
#
# propertyName=propertyValue
# propertyName:propertyValue
#
# * White space that appears between the property name and property value is ignored,
#   so the following are equivalent:
# 
# name=Stephen
# name = Stephen
#
# * White space at the beginning of the line is also ignored.
#
# * Lines that start with the comment characters ! or # are ignored. Blank lines are also ignored.
#
# * The property value is generally terminated by the end of the line. White space following the
#   property value is not ignored, and is treated as part of the property value.
#
# * A property value can span several lines if each line is terminated by a backslash (‘\’) character.
#   For example:
#
# targetCities=\
#         Detroit,\
#         Chicago,\
#         Los Angeles
#
#   This is equivalent to targetCities=Detroit,Chicago,Los Angeles (white space at the beginning of lines is ignored).
# 
# * The characters newline, carriage return, and tab can be inserted with characters \n, \r, and \t, respectively.
# 
# * The backslash character must be escaped as a double backslash. For example:
# 
# path=c:\\docs\\doc1
#

# If you are running more than one instance of Graylog server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting Graylog server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
password_secret = XYANpuo0ngM3amiF1q0GhRm1WhWSDf27rJZ30wEDWmqv5tQrn7RClSiK41vUMfh8UlVK0QQPdMckAhpv1RiGhhJo7Qp6qwi7

# The default root user is named 'admin'
#root_username = admin

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = e3c652f0ba0b4801205814f8b6bc49672c4c74e25b497770bb89b22cdeb4e951

# The email address of the root user.
# Default is empty
#root_email = ""

# The time zone setting of the root user. See http://www.joda.org/joda-time/timezones.html for a list of valid time zones.
# Default is UTC
#root_timezone = UTC

# Set plugin directory here (relative or absolute)
plugin_dir = /usr/share/graylog-server/plugin

# REST API listen URI. Must be reachable by other Graylog server nodes if you run a cluster.
# When using Graylog Collectors, this URI will be used to receive heartbeat messages and must be accessible for all collectors.
rest_listen_uri = http://127.0.0.1:9000/api/

# REST API transport address. Defaults to the value of rest_listen_uri. Exception: If rest_listen_uri
# is set to a wildcard IP address (0.0.0.0) the first non-loopback IPv4 system address is used.
# If set, this will be promoted in the cluster discovery APIs, so other nodes may try to connect on
# this address and it is used to generate URLs addressing entities in the REST API. (see rest_listen_uri)
# You will need to define this, if your Graylog server is running behind a HTTP proxy that is rewriting
# the scheme, host name or URI.
# This must not contain a wildcard address (0.0.0.0).
# rest_transport_uri = http://18.222.234.98:12900

# Enable CORS headers for REST API. This is necessary for JS-clients accessing the server directly.
# If these are disabled, modern browsers will not be able to retrieve resources from the server.
# This is enabled by default. Uncomment the next line to disable it.
#rest_enable_cors = false

# Enable GZIP support for REST API. This compresses API responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.
#rest_enable_gzip = false

# Enable HTTPS support for the REST API. This secures the communication with the REST API with
# TLS to prevent request forgery and eavesdropping. This is disabled by default. Uncomment the
# next line to enable it.
# rest_enable_tls = true

# The X.509 certificate chain file in PEM format to use for securing the REST API.
# rest_tls_cert_file = /home/ubuntu/cert.pem

# The PKCS#8 private key file in PEM format to use for securing the REST API.
# rest_tls_key_file = /home/ubuntu/pkcs8-encrypted.pem 

# The password to unlock the private key used for securing the REST API.
# rest_tls_key_password = secret

# The maximum size of the HTTP request headers in bytes.
#rest_max_header_size = 8192

# The size of the thread pool used exclusively for serving the REST API.
#rest_thread_pool_size = 16

# Comma separated list of trusted proxies that are allowed to set the client address with X-Forwarded-For
# header. May be subnets, or hosts.
#trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128

# Enable the embedded Graylog web interface.
# Default: true
#web_enable = false

# Web interface listen URI.
# Configuring a path for the URI here effectively prefixes all URIs in the web interface. This is a replacement
# for the application.context configuration parameter in pre-2.0 versions of the Graylog web interface.
web_listen_uri = http://127.0.0.1:9000/

# Web interface endpoint URI. This setting can be overridden on a per-request basis with the X-Graylog-Server-URL header.
# Default: $rest_transport_uri
web_endpoint_uri = http://graylog-xxx.io:9000/api/
# Enable CORS headers for the web interface. This is necessary for JS-clients accessing the server directly.
# If these are disabled, modern browsers will not be able to retrieve resources from the server.
#web_enable_cors = false

# Enable/disable GZIP support for the web interface. This compresses HTTP responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.
#web_enable_gzip = false

# Enable HTTPS support for the web interface. This secures the communication of the web browser with the web interface
# using TLS to prevent request forgery and eavesdropping.
# This is disabled by default. Uncomment the next line to enable it and see the other related configuration settings.
# web_enable_tls = true

# The X.509 certificate chain file in PEM format to use for securing the web interface.
# web_tls_cert_file = /home/ubuntu/cert.pem

# The PKCS#8 private key file in PEM format to use for securing the web interface.
# web_tls_key_file = /home/ubuntu/pkcs8-encrypted.pem

# The password to unlock the private key used for securing the web interface.
# web_tls_key_password = secret

# The maximum size of the HTTP request headers in bytes.
#web_max_header_size = 8192

# The size of the thread pool used exclusively for serving the web interface.
#web_thread_pool_size = 16

# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
#
# Default: http://127.0.0.1:9200
# elasticsearch_hosts = http://node1:9200,http://user:password@node2:19200
elasticsearch_hosts = https://vpc-graylog-xxx.amazonaws.com:443
# Maximum amount of time to wait for a successful connection to the Elasticsearch HTTP port.
#
# Default: 10 Seconds
#elasticsearch_connect_timeout = 10s

# Maximum amount of time to wait for reading back a response from an Elasticsearch server.
#
# Default: 60 seconds
#elasticsearch_socket_timeout = 60s

# Maximum idle time for an Elasticsearch connection. If this is exceeded, this connection will
# be torn down.
#
# Default: inf
#elasticsearch_idle_timeout = -1s

# Maximum number of total connections to Elasticsearch.
#
# Default: 20
#elasticsearch_max_total_connections = 20

# Maximum number of total connections per Elasticsearch route (normally this means per
# elasticsearch server).
#
# Default: 2
#elasticsearch_max_total_connections_per_route = 2

# Maximum number of times Graylog will retry failed requests to Elasticsearch.
#
# Default: 2
#elasticsearch_max_retries = 2

# Enable automatic Elasticsearch node discovery through Nodes Info,
# see https://www.elastic.co/guide/en/elasticsearch/reference/5.4/cluster-nodes-info.html
#
# WARNING: Automatic node discovery does not work if Elasticsearch requires authentication, e. g. with Shield.
#
# Default: false
# elasticsearch_discovery_enabled = true

# Filter for including/excluding Elasticsearch nodes in discovery according to their custom attributes,
# see https://www.elastic.co/guide/en/elasticsearch/reference/5.4/cluster.html#cluster-nodes
#
# Default: empty
# elasticsearch_discovery_filter = rack:42

# Frequency of the Elasticsearch node discovery.
#
# Default: 30s
# elasticsearch_discovery_frequency = 30s

# Enable payload compression for Elasticsearch requests.
#
# Default: false
# elasticsearch_compression_enabled = false

# Graylog will use multiple indices to store documents in. You can configure the strategy it uses to determine
# when to rotate the currently active write index.
# It supports multiple rotation strategies:
#   - "count" of messages per index, use elasticsearch_max_docs_per_index below to configure
#   - "size" per index, use elasticsearch_max_size_per_index below to configure
# valid values are "count", "size" and "time", default is "count"
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
rotation_strategy = count

# (Approximate) maximum number of documents in an Elasticsearch index before a new index
# is being created, also see no_retention and elasticsearch_max_number_of_indices.
# Configure this if you used 'rotation_strategy = count' above.
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
elasticsearch_max_docs_per_index = 20000000

# (Approximate) maximum size in bytes per Elasticsearch index on disk before a new index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1GB.
# Configure this if you used 'rotation_strategy = size' above.
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
#elasticsearch_max_size_per_index = 1073741824

# (Approximate) maximum time before a new Elasticsearch index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1 day.
# Configure this if you used 'rotation_strategy = time' above.
# Please note that this rotation period does not look at the time specified in the received messages, but is
# using the real clock value to decide when to rotate the index!
# Specify the time using a duration and a suffix indicating which unit you want:
#  1w  = 1 week
#  1d  = 1 day
#  12h = 12 hours
# Permitted suffixes are: d for day, h for hour, m for minute, s for second.
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
#elasticsearch_max_time_per_index = 1d

# Disable checking the version of Elasticsearch for being compatible with this Graylog release.
# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!
#elasticsearch_disable_version_check = true

# Disable message retention on this node, i. e. disable Elasticsearch index rotation.
#no_retention = false

# How many indices do you want to keep?
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
elasticsearch_max_number_of_indices = 20

# Decide what happens with the oldest indices when the maximum number of indices is reached.
# The following strategies are available:
#   - delete # Deletes the index completely (Default)
#   - close # Closes the index and hides it from the system. Can be re-opened later.
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
retention_strategy = delete

# How many Elasticsearch shards and replicas should be used per index? Note that this only applies to newly created indices.
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
elasticsearch_shards = 4
elasticsearch_replicas = 0

# Prefix for all Elasticsearch indices and index aliases managed by Graylog.
#
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
elasticsearch_index_prefix = graylog

# Name of the Elasticsearch index template used by Graylog to apply the mandatory index mapping.
# Default: graylog-internal
#
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#elasticsearch_template_name = graylog-internal

# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
# be enabled with care. See also: http://docs.graylog.org/en/2.1/pages/queries.html
allow_leading_wildcard_searches = false

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = false

# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is a good idea.
# All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern, language, snowball, custom
# Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/2.3/analysis.html
# Note that this setting only takes effect on newly created indices.
#
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
elasticsearch_analyzer = standard

# Global request timeout for Elasticsearch requests (e. g. during search, index creation, or index time-range
# calculations) based on a best-effort to restrict the runtime of Elasticsearch operations.
# Default: 1m
#elasticsearch_request_timeout = 1m

# Global timeout for index optimization (force merge) requests.
# Default: 1h
#elasticsearch_index_optimization_timeout = 1h

# Maximum number of concurrently running index optimization (force merge) jobs.
# If you are using lots of different index sets, you might want to increase that number.
# Default: 20
#elasticsearch_index_optimization_jobs = 20

# Time interval for index range information cleanups. This setting defines how often stale index range information
# is being purged from the database.
# Default: 1h
#index_ranges_cleanup_interval = 1h

# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the Elasticsearch output
# module will get at once and write to Elasticsearch in a batch call. If the configured batch size has not been
# reached within output_flush_interval seconds, everything that is available will be flushed at once. Remember
# that every outputbuffer processor manages its own batch and performs its own batch write calls.
# ("outputbuffer_processors" variable)
output_batch_size = 500

# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two
# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages
# for this time period is less than output_batch_size * outputbuffer_processors.
output_flush_interval = 1

# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and
# over again. To prevent this, the following configuration options define after how many faults an output will
# not be tried again for an also configurable amount of seconds.
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

# The number of parallel running processors.
# Raise this number if your buffers are filling up.
processbuffer_processors = 5
outputbuffer_processors = 3

# The following settings (outputbuffer_processor_*) configure the thread pools backing each output buffer processor.
# See https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html for technical details

# When the number of threads is greater than the core (see outputbuffer_processor_threads_core_pool_size),
# this is the maximum time in milliseconds that excess idle threads will wait for new tasks before terminating.
# Default: 5000
#outputbuffer_processor_keep_alive_time = 5000

# The number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
# Default: 3
#outputbuffer_processor_threads_core_pool_size = 3

# The maximum number of threads to allow in the pool
# Default: 30
#outputbuffer_processor_threads_max_pool_size = 30

# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
#  - yielding
#     Compromise between performance and CPU usage.
#  - sleeping
#     Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
#  - blocking
#     High throughput, low latency, higher CPU usage.
#  - busy_spinning
#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = blocking

# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 65536

inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking

# Enable the disk based message journal.
message_journal_enabled = true

# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and
# must not contain any other files than the ones created by Graylog itself.
#
# ATTENTION:
#   If you create a separate partition for the journal files and use a file system creating directories like 'lost+found'
#   in the root directory, you need to create a sub directory for your journal.
#   Otherwise Graylog will log an error message that the journal is corrupt and Graylog will not start.
message_journal_dir = /var/lib/graylog-server/journal

# The journal holds messages before they can be written to Elasticsearch.
# For a maximum of 12 hours or 5 GB whichever happens first.
# During normal operation the journal will be smaller.
#message_journal_max_age = 12h
#message_journal_max_size = 5gb

#message_journal_flush_age = 1m
#message_journal_flush_interval = 1000000
#message_journal_segment_age = 1h
#message_journal_segment_size = 100mb

# Number of threads used exclusively for dispatching internal events. Default is 2.
#async_eventbus_processors = 2

# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual
# shutdown process. Set to 0 if you have no status checking load balancers in front.
lb_recognition_period_seconds = 3

# Journal usage percentage that triggers requesting throttling for this server node from load balancers. The feature is
# disabled if not set.
#lb_throttle_threshold_percentage = 95

# Every message is matched against the configured streams and it can happen that a stream contains rules which
# take an unusual amount of time to run, for example if it's using regular expressions that perform excessive backtracking.
# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other
# streams, Graylog limits the execution time for each stream.
# The default values are noted below, the timeout is in milliseconds.
# If the stream matching for one stream took longer than the timeout value, and this happened more than "max_faults" times
# that stream is disabled and a notification is shown in the web interface.
#stream_processing_timeout = 2000
#stream_processing_max_faults = 3

# Length of the interval in seconds in which the alert conditions for all streams should be checked
# and alarms are being sent.
#alert_check_interval = 60

# Since 0.21 the Graylog server supports pluggable output modules. This means a single message can be written to multiple
# outputs. The next setting defines the timeout for a single output module, including the default output module where all
# messages end up.
#
# Time in milliseconds to wait for all message outputs to finish writing a single message.
#output_module_timeout = 10000

# Time in milliseconds after which a detected stale master node is being rechecked on startup.
#stale_master_timeout = 2000

# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.
#shutdown_timeout = 30000

# MongoDB connection string
# See https://docs.mongodb.com/manual/reference/connection-string/ for details
mongodb_uri = mongodb://localhost/graylog

# Authenticate against the MongoDB server
#mongodb_uri = mongodb://grayloguser:secret@localhost:27017/graylog

# Use a replica set instead of a single host
#mongodb_uri = mongodb://grayloguser:secret@localhost:27017,localhost:27018,localhost:27019/graylog

# Increase this value according to the maximum connections your MongoDB server can handle from a single client
# if you encounter MongoDB connection problems.
mongodb_max_connections = 1000

# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5
# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5,
# then 500 threads can block. More than that and an exception will be thrown.
# http://api.mongodb.com/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultiplier
mongodb_threads_allowed_to_block_multiplier = 5

# Drools Rule File (Use to rewrite incoming log messages)
# See: http://docs.graylog.org/en/2.1/pages/drools.html
#rules_file = /etc/graylog/server/rules.drl

# Email transport
#transport_email_enabled = false
#transport_email_hostname = mail.example.com
#transport_email_port = 587
#transport_email_use_auth = true
#transport_email_use_tls = true
#transport_email_use_ssl = true
#transport_email_auth_username = you@example.com
#transport_email_auth_password = secret
#transport_email_subject_prefix = [graylog]
#transport_email_from_email = graylog@example.com

# Specify and uncomment this if you want to include links to the stream in your stream alert mails.
# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.
#transport_email_web_interface_url = https://graylog.example.com

# The default connect timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 5s
#http_connect_timeout = 5s

# The default read timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 10s
#http_read_timeout = 10s

# The default write timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 10s
#http_write_timeout = 10s

# HTTP proxy for outgoing HTTP connections
# ATTENTION: If you configure a proxy, make sure to also configure the "http_non_proxy_hosts" option so internal
#            HTTP connections with other nodes do not go through the proxy.
# Examples:
#   - http://proxy.example.com:8123
#   - http://username:password@proxy.example.com:8123
#http_proxy_uri =

# A list of hosts that should be reached directly, bypassing the configured proxy server.
# This is a list of patterns separated by ",". The patterns may start or end with a "*" for wildcards.
# Any host matching one of these patterns will be reached through a direct connection instead of through a proxy.
# Examples:
#   - localhost,127.0.0.1
#   - 10.0.*,*.example.com
#http_non_proxy_hosts =

# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize
# cycled indices.
#
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#disable_index_optimization = true

# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is 1.
#
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#index_optimization_max_num_segments = 1

# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification
# will be generated to warn the administrator about possible problems with the system. Default is 1 second.
#gc_warning_threshold = 1s

# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.
#ldap_connection_timeout = 2000

# Disable the use of SIGAR for collecting system stats
#disable_sigar = false

# The default cache time for dashboard widgets. (Default: 10 seconds, minimum: 1 second)
#dashboard_widget_default_cache_time = 10s

# Automatically load content packs in "content_packs_dir" on the first start of Graylog.
#content_packs_loader_enabled = true

# The directory which contains content packs which should be loaded on the first start of Graylog.
content_packs_dir = /usr/share/graylog-server/contentpacks

# A comma-separated list of content packs (files in "content_packs_dir") which should be applied on
# the first start of Graylog.
# Default: empty
content_packs_auto_load = grok-patterns.json

# For some cluster-related REST requests, the node must query all other nodes in the cluster. This is the maximum number
# of threads available for this. Increase it, if '/cluster/*' requests take long to complete.
# Should be rest_thread_pool_size * average_cluster_size if you have a high number of concurrent users.
proxied_requests_thread_pool_size = 32


(Jan Doberstein) #7

Sorry, I could not spot any issue whose source is visible in the information provided.

Are you sure that the connection to the Elasticsearch cluster is made over HTTP, not HTTPS, and without a username/password?
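
A quick way to compare the two schemes from the Graylog host is something like this (endpoint and ports are placeholders; the VPC endpoint may only answer on one of them):

curl -v 'http://vpc-aws-es-cluster.com:80/_cluster/health?pretty'
curl -v 'https://vpc-aws-es-cluster.com:443/_cluster/health?pretty'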


(NGerealy@08) #8

I have tried connecting to the Elasticsearch cluster over HTTP (with no username/password) and still face the same error in the log file -

2018-08-31T17:33:50.593Z ERROR [ConfigurationManagementPeriodical] Error while running migration <V20170607164210_MigrateReopenedIndicesToAliases{2017-06-07T16:42:10Z}>
org.graylog2.indexer.ElasticsearchException: Couldn't read cluster state for reopened indices [graylog_*]


        at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:95) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:57) ~[graylog.jar:?]
        at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:62) ~[graylog.jar:?]
        at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.getReopenedIndices(V20170607164210_MigrateReopenedIndicesToAliases.java:88) ~[graylog.jar:?]
        at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.getReopenedIndices(V20170607164210_MigrateReopenedIndicesToAliases.java:135) ~[graylog.jar:?]
        at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.lambda$upgrade$0(V20170607164210_MigrateReopenedIndicesToAliases.java:78) ~[graylog.jar:?]
        at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) ~[?:1.8.0_181]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_181]
        at java.util.Collections$2.tryAdvance(Collections.java:4717) ~[?:1.8.0_181]
        at java.util.Collections$2.forEachRemaining(Collections.java:4725) ~[?:1.8.0_181]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_181]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_181]
        at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_181]
        at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_181]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_181]
        at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_181]
        at org.graylog2.migrations.V20170607164210_MigrateReopenedIndicesToAliases.upgrade(V20170607164210_MigrateReopenedIndicesToAliases.java:80) ~[graylog.jar:?]
        at org.graylog2.periodical.ConfigurationManagementPeriodical.doRun(ConfigurationManagementPeriodical.java:43) [graylog.jar:?]
        at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2018-08-31T17:33:50.746Z INFO  [JerseyService] Enabling CORS for HTTP endpoint
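
For what it's worth, the failing migration asks Elasticsearch for the cluster state of the graylog_* indices; a rough way to check whether the endpoint permits that kind of call is something like this (the endpoint is a placeholder, and the exact URL Graylog builds may differ):

curl -XGET 'https://vpc-aws-es-cluster.com:443/_cluster/state/metadata/graylog_*?pretty'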

(NGerealy@08) #9

Hey, are there any other config or log files I can check that would help in debugging this issue? Please let me know.


#10

The elasticsearch_hosts values in your first post and in the config file are different. Are you sure you are using the correct one?
I don't see the elasticsearch_cluster_name param in your GL config. You should use the same name in the ES health output and in the GL config.
Try tcpdump to check whether your GL server actually tries to connect to the ES cluster and gets an answer from it (a sketch follows the curl example below). I don't know AWS, but I assume there is a big cluster behind your endpoint.
Also try the curl command with the value taken from your server.conf. Sometimes I can't spot a mistyped character, but there is one, so copy it, don't retype it.
Check the ES information via curl. Do you use the same (copied, not typed) cluster name in ES and in GL? Do you have indices named graylog_*?

curl -XGET 'https://vpc-aws-es-cluster.com:443/graylog_*?pretty'
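
A hypothetical tcpdump invocation to confirm that the Graylog node reaches the ES endpoint on port 443 (interface name and hostname are assumptions):

sudo tcpdump -i eth0 -nn host vpc-aws-es-cluster.com and port 443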

(system) #11

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.