Elasticsearch refuses to start

I have a Graylog server running on CentOS 7, which also hosts an NTP server synced to our corporate time source. Graylog sits on a DMZ network with no internet access, and our corporate network only allows internet access through badge access. I think the errors below appear because Elasticsearch is trying to reach the internet and crashes when it can't. Elasticsearch used to launch fine, but I didn't log in for two weeks. When I tried opening Graylog in the browser, nothing loaded (it worked fine before the hard drive filled up). I SSHed into the box and found the hard drive was full; the culprit was the Elasticsearch indices, so I deleted them from disk. After restarting the box, Elasticsearch starts, then stops, and prints this message when I check the status.
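For reference, here is a sketch of the cleanup I should have done instead of deleting the index directories by hand. The paths are the RPM-install defaults and the index name `graylog_0` is hypothetical; list your own indices first.

```shell
ES=http://localhost:9200

# Where did the space go? (RPM default data path; fall back to / if absent)
df -h /var/lib/elasticsearch 2>/dev/null || df -h /
du -sh /var/lib/elasticsearch/nodes/0/indices/* 2>/dev/null | sort -h | tail -n 5

# Always delete indices through the API, never with rm: the cluster state
# still references the on-disk files, and removing them corrupts the node.
curl -s "$ES/_cat/indices?v&s=store.size" || true
curl -s -XDELETE "$ES/graylog_0" || true   # hypothetical index name
```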

# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-12-07 06:33:28 PST; 5s ago
Docs: http://www.elastic.co
Process: 1795 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 1793 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 1795 (code=exited, status=1/FAILURE)

Dec 07 06:33:28 SY-DMZ-Graylog-CPEM elasticsearch[1795]: at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment…:100)
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM elasticsearch[1795]: at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(Environmen…a:75)
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM elasticsearch[1795]: at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentA…a:70)
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM elasticsearch[1795]: at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134)
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM elasticsearch[1795]: at org.elasticsearch.cli.Command.main(Command.java:90)
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM elasticsearch[1795]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91)
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM elasticsearch[1795]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84)
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM systemd[1]: Unit elasticsearch.service entered failed state.
Dec 07 06:33:28 SY-DMZ-Graylog-CPEM systemd[1]: elasticsearch.service failed.
Hint: Some lines were ellipsized, use -l to show in full
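As the hint says, systemctl truncates long lines, so the status output above is elided. The full startup error should be in the journal and in Elasticsearch's own log file (the file name depends on `cluster.name`, so the glob below is a guess based on the RPM default log directory):

```shell
# Full, untruncated service output (-l disables ellipsizing)
journalctl -u elasticsearch.service -l --no-pager -n 100 || true

# Elasticsearch's own log; the RPM install writes to /var/log/elasticsearch
tail -n 100 /var/log/elasticsearch/*.log 2>/dev/null || true
```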

Before this happened, I searched the web, and many of the suggestions were to use curl, but I always got this error:

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{"commands":[{"allocate":{"index":"graylog2_6","shard":1,"node":"gl-es01-esgl2","allow_primary":true}}]}'

Access Denied


Access Denied (authentication_failed)

Your credentials could not be authenticated: "Credentials are missing.". You will not be permitted access until your credentials can be verified.
This is typically caused by an incorrect username and/or password, but could also be caused by network problems.

Client Name:
Client IP Address: 192.168.1.10
Server Name: Host Name

For assistance, please open a ticket incident with Company-Tech-Support.
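That "Access Denied (authentication_failed)" page is the corporate web proxy answering, not Elasticsearch: curl honors the `http_proxy` environment variable even for localhost unless told otherwise. A sketch of how I'd bypass it, assuming Elasticsearch 5.x (where the 2.x-era `allocate`/`allow_primary` reroute command was replaced by `allocate_empty_primary` with `accept_data_loss`); the index and node names are copied from the post and may need adjusting, and straight ASCII quotes matter since smart quotes pasted from a web page break the JSON body:

```shell
# See whether a proxy is configured in the environment
env | grep -i proxy || true

# Bypass any proxy for this request and send well-formed JSON
BODY='{"commands":[{"allocate_empty_primary":{"index":"graylog2_6","shard":1,"node":"gl-es01-esgl2","accept_data_loss":true}}]}'
curl --noproxy '*' -s -XPOST 'http://localhost:9200/_cluster/reroute' \
  -H 'Content-Type: application/json' -d "$BODY" || true
```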

What’s in the logs of your Graylog and Elasticsearch nodes?
→ http://docs.graylog.org/en/2.3/pages/configuration/file_location.html

Elasticsearch

[2017-12-07T00:05:08,206][WARN ][o.e.c.r.a.DiskThresholdMonitor] [qYyUHAp] high disk watermark [90%] exceeded on [qYyUHApAREySM8TuBTwSzg][qYyUHAp][/var/lib/elasticsearch/nodes/0] free: 231.7mb[0.8%], shards will be relo$
[2017-12-07T00:05:08,206][INFO ][o.e.c.r.a.DiskThresholdMonitor] [qYyUHAp] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-12-07T06:16:56,639][DEBUG][o.e.a.a.i.s.TransportIndicesStatsAction] [qYyUHAp] [indices:monitor/stats] failed to execute operation for shard [[graylog_1][2], node[qYyUHApAREySM8TuBTwSzg], [P], s[STARTED], a[id=AxGF$
org.elasticsearch.ElasticsearchException: failed to refresh store stats
        at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1393) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1378) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:54) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.index.store.Store.stats(Store.java:332) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.index.shard.IndexShard.storeStats(IndexShard.java:703) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:177) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:163) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:433) [elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:412) [elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:399) [elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33) [elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) [elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.3.jar:5.6.3]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: java.nio.file.NoSuchFileException: /var/lib/elasticsearch/nodes/0/indices/wXneAeRdRxuJ6YKQBTTd7g/2/index
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
        at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427) ~[?:?]
        at java.nio.file.Files.newDirectoryStream(Files.java:457) ~[?:1.8.0_151]
        at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:215) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:234) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.elasticsearch.index.store.Store$StoreStatsCache.estimateSize(Store.java:1399) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1391) ~[elasticsearch-5.6.3.jar:5.6.3]
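These two logs tell the whole story: the high disk watermark (90%) tripped with only 231.7 MB free, and the `NoSuchFileException` shows shard files missing because the index directories were removed out from under the running node. Once Elasticsearch starts again, a rough sketch for checking its view of disk and allocation; raising the high watermark transiently is only a stopgap while freeing real space:

```shell
# Cluster health and per-node disk allocation as Elasticsearch sees it
curl -s 'http://localhost:9200/_cluster/health?pretty' || true
curl -s 'http://localhost:9200/_cat/allocation?v' || true

# Temporarily raise the high watermark (ES 5.x cluster settings API);
# revert once space is reclaimed
curl -s -XPUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"transient":{"cluster.routing.allocation.disk.watermark.high":"95%"}}' || true
```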

Graylog

2017-12-06T10:41:47.787-08:00 INFO  [CmdLineTool] Loaded plugin: Elastic Beats Input 2.3.2 [org.graylog.plugins.beats.BeatsInputPlugin]
2017-12-06T10:41:47.791-08:00 INFO  [CmdLineTool] Loaded plugin: Collector 2.3.2 [org.graylog.plugins.collector.CollectorPlugin]
2017-12-06T10:41:47.792-08:00 INFO  [CmdLineTool] Loaded plugin: Enterprise Integration Plugin 2.3.2 [org.graylog.plugins.enterprise_integration.EnterpriseIntegrationPlugin]
2017-12-06T10:41:47.794-08:00 INFO  [CmdLineTool] Loaded plugin: MapWidgetPlugin 2.3.2 [org.graylog.plugins.map.MapWidgetPlugin]
2017-12-06T10:41:47.810-08:00 INFO  [CmdLineTool] Loaded plugin: Pipeline Processor Plugin 2.3.2 [org.graylog.plugins.pipelineprocessor.ProcessorPlugin]
2017-12-06T10:41:47.812-08:00 INFO  [CmdLineTool] Loaded plugin: Anonymous Usage Statistics 2.3.2 [org.graylog.plugins.usagestatistics.UsageStatsPlugin]
2017-12-06T10:41:48.212-08:00 INFO  [CmdLineTool] Running with JVM arguments: -Xms1g -Xmx1g -XX:NewRatio=1 -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNew$
2017-12-06T10:41:48.568-08:00 INFO  [Version] HV000001: Hibernate Validator null
2017-12-06T10:42:10.821-08:00 INFO  [InputBufferImpl] Message journal is enabled.
2017-12-06T10:42:10.843-08:00 INFO  [NodeId] Node ID: b396cb4b-10a9-403f-9b98-36bda6f366aa
2017-12-06T10:42:11.016-08:00 INFO  [LogManager] Loading logs.
2017-12-06T10:42:11.285-08:00 WARN  [LogSegment] Found invalid messages in log segment /var/lib/graylog-server/journal/messagejournal-0/00000000000097774019.log at byte offset 6885354: Message is corrupt (stored crc = 1$
2017-12-06T10:42:11.295-08:00 WARN  [Log] Corruption found in segment 97774019 of log messagejournal-0, truncating to offset 97785805.
2017-12-06T10:42:11.322-08:00 INFO  [LogManager] Logs loading complete.
2017-12-06T10:42:11.322-08:00 INFO  [KafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal
2017-12-06T10:42:11.345-08:00 INFO  [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy <BlockingWaitStrategy>, running 2 parallel message handlers.
2017-12-06T10:42:11.375-08:00 INFO  [cluster] Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=5000}
2017-12-06T10:42:11.441-08:00 INFO  [cluster] No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=$
2017-12-06T10:42:11.471-08:00 INFO  [connection] Opened connection [connectionId{localValue:1, serverValue:1}] to localhost:27017
2017-12-06T10:42:11.474-08:00 INFO  [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{$
2017-12-06T10:42:11.484-08:00 INFO  [connection] Opened connection [connectionId{localValue:2, serverValue:2}] to localhost:27017
2017-12-06T10:42:12.094-08:00 INFO  [AbstractJestClient] Setting server pool to a list of 1 servers: [http://127.0.0.1:9200]
2017-12-06T10:42:12.094-08:00 INFO  [JestClientFactory] Using multi thread/connection supporting pooling connection manager
2017-12-06T10:42:12.168-08:00 INFO  [JestClientFactory] Using custom ObjectMapper instance
2017-12-06T10:42:12.168-08:00 INFO  [JestClientFactory] Node Discovery disabled...
2017-12-06T10:42:12.168-08:00 INFO  [JestClientFactory] Idle connection reaping disabled...
2017-12-06T10:42:12.681-08:00 INFO  [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2017-12-06T10:42:15.003-08:00 INFO  [RulesEngineProvider] No static rules file loaded.
2017-12-06T10:42:15.249-08:00 WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2017-12-06T10:42:15.258-08:00 INFO  [OutputBuffer] Initialized OutputBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2017-12-06T10:42:15.288-08:00 WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2017-12-06T10:42:15.314-08:00 WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2017-12-06T10:42:15.340-08:00 WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2017-12-06T10:42:15.362-08:00 WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2017-12-06T10:42:15.690-08:00 INFO  [ServerBootstrap] Graylog server 2.3.2+3df951e starting up
2017-12-06T10:42:15.691-08:00 INFO  [ServerBootstrap] JRE: Oracle Corporation 1.8.0_151 on Linux 3.10.0-693.5.2.el7.x86_64
2017-12-06T10:42:15.691-08:00 INFO  [ServerBootstrap] Deployment: rpm
2017-12-06T10:42:15.691-08:00 INFO  [ServerBootstrap] OS: CentOS Linux 7 (Core) (centos)
2017-12-06T10:42:15.692-08:00 INFO  [ServerBootstrap] Arch: amd64
2017-12-06T10:42:15.696-08:00 WARN  [DeadEventLoggingListener] Received unhandled event of type <org.graylog2.plugin.lifecycles.Lifecycle> from event bus <AsyncEventBus{graylog-eventbus}>
2017-12-06T10:42:36.054-08:00 INFO  [PeriodicalsService] Starting 26 periodicals ...
2017-12-06T10:42:36.055-08:00 INFO  [Periodicals] Starting [org.graylog2.periodical.ThroughputCalculator] periodical in [0s], polling every [1s].
2017-12-06T10:42:36.056-08:00 INFO  [Periodicals] Starting [org.graylog2.periodical.AlertScannerThread] periodical in [10s], polling every [60s].
2017-12-06T10:42:36.056-08:00 INFO  [Periodicals] Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling eve


2017-12-06T10:42:50.596-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #10).
2017-12-06T10:42:51.113-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #1).
2017-12-06T10:42:51.130-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #2).
2017-12-06T10:42:51.183-08:00 INFO  [IndexRangesCleanupPeriodical] Skipping index range cleanup because the Elasticsearch cluster is unreachable or unhealthy
2017-12-06T10:42:51.185-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #3).
2017-12-06T10:42:51.206-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #4).
2017-12-06T10:42:51.235-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #5).
2017-12-06T10:42:51.283-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #6).
2017-12-06T10:42:51.363-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #7).
2017-12-06T10:42:51.550-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #8).
2017-12-06T10:42:51.620-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #11).
2017-12-06T10:42:51.632-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #11).
2017-12-06T10:42:51.824-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #9).
2017-12-06T10:42:52.350-08:00 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #10).
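The `CouldNotConnectException` flood just means Graylog's Jest client keeps retrying `http://127.0.0.1:9200` while nothing is listening there; it's a symptom of the Elasticsearch crash, not a Graylog problem. A quick sanity check, assuming the same host:

```shell
# Is anything listening on the Elasticsearch HTTP port?
ss -tlnp 2>/dev/null | grep 9200 || echo "nothing listening on 9200"

# Does Elasticsearch answer locally (bypassing any proxy)?
curl -s --noproxy '*' 'http://127.0.0.1:9200/' || echo "Elasticsearch is not answering"
```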

org.glassfish.jersey.server.internal.process.MappableException: java.io.IOException: Connection closed
        at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:92) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1130) ~[graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:711) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:444) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:434) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:329) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:315) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:297) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:267) [graylog.jar:?]
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305) [graylog.jar:?]
        at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154) [graylog.jar:?]
        at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384) [graylog.jar:?]
        at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:224) [graylog.jar:?]
        at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) [graylog.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]


Caused by: java.io.IOException: Connection closed
        at org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(TaskQueue.java:331) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(AbstractNIOAsyncQueueWriter.java:501) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOTransport.closeConnection(TCPNIOTransport.java:402) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.NIOConnection.doClose(NIOConnection.java:647) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.NIOConnection$6.run(NIOConnection.java:613) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.DefaultSelectorHandler$RunnableTask.run(DefaultSelectorHandler.java:495) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.DefaultSelectorHandler.processPendingTaskQueue(DefaultSelectorHandler.java:301) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.DefaultSelectorHandler.processPendingTasks(DefaultSelectorHandler.java:290) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.DefaultSelectorHandler.preSelect(DefaultSelectorHandler.java:101) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:335) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:279) ~[graylog.jar:?]
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:593) ~[graylog.jar:?]
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:573) ~[graylog.jar:?]


Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:1.8.0_151]
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[?:1.8.0_151]
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:1.8.0_151]
        at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[?:1.8.0_151]
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[?:1.8.0_151]
        at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(TCPNIOUtils.java:149) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOUtils.writeCompositeBuffer(TCPNIOUtils.java:86) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:129) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:106) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:260) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:169) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:71) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleWrite(TCPNIOTransportFilter.java:126) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.TransportFilter.handleWrite(TransportFilter.java:191) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112) ~[graylog.jar:?]
        at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:890) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:858) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.io.OutputBuffer.flushBuffer(OutputBuffer.java:1059) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:712) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:567) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.server.NIOOutputStreamImpl.write(NIOOutputStreamImpl.java:75) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:218) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:294) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.ByteArrayProvider.writeTo(ByteArrayProvider.java:96) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.ByteArrayProvider.writeTo(ByteArrayProvider.java:60) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:106) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:86) ~[graylog.jar:?]
        ... 20 more

…and by deleting the index files directly from disk, instead of through the API, you corrupted your Elasticsearch cluster:

You might want to post a topic at https://discuss.elastic.co/ to find out how to recover your cluster or at least make Elasticsearch start again, except for removing all its data.

I’ll do that. Thanks for the help!

Link, if you wanted to see the progress: https://discuss.elastic.co/t/elasticsearch-cluster-corrupted/110721

