Post-upgrade index problems (Graylog 3.0.2 / Elasticsearch 6.7.2)

I recently upgraded to Graylog 3.0.2 and Elasticsearch 6.7.2 and everything seemed fine until today. After taking a snapshot on my VMware cluster, I rotated the active write index on my main index set and the Elasticsearch cluster went red. I rebooted the server and the cluster went green again (and the index was rotated), but I was no longer receiving messages on the main search page. I reverted the snapshot in VMware and I am receiving logs from my servers again. Can someone help me understand where I am going wrong?

By the way, before I rebooted the server (and after I rotated the index), I ran "curl -XGET :9200/_cluster/allocation/explain?pretty" and got the explanation "no allocations are allowed due to cluster setting [cluster.routing.allocation.enable=none]".
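
(For reference, the current value of that setting can be checked against Elasticsearch with something like the following. This assumes Elasticsearch listens on localhost:9200; adjust the host to your setup.)

# show the persistent and transient cluster settings
curl -XGET 'http://localhost:9200/_cluster/settings?flat_settings=true&pretty'

# include defaults to see the effective value of cluster.routing.allocation.enable
curl -XGET 'http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty'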

Hey @oden08,

What did you see in your Graylog server.log and your Elasticsearch log file?

My Graylog log shows:

2019-05-20T12:12:08.412Z INFO [DeflectorResource] Cycling deflector for index set <5b045adbcb93e50aaf27e22b>. Reason: REST request.
2019-05-20T12:12:08.427Z INFO [MongoIndexSet] Cycling from <graylog_74> to <graylog_75>.
2019-05-20T12:12:08.429Z INFO [MongoIndexSet] Creating target index <graylog_75>.
2019-05-20T12:12:08.560Z INFO [Indices] Successfully created index template graylog-internal
2019-05-20T12:12:17.151Z WARN [IndexRotationThread] Deflector is pointing to [graylog_74], not the newest one: [graylog_75]. Re-pointing.
2019-05-20T12:12:18.413Z ERROR [AnyExceptionClassMapper] Unhandled exception in REST resource
java.net.SocketTimeoutException: timeout
at okio.Okio$4.newTimeoutException(Okio.java:232) ~[graylog.jar:?]
at okio.AsyncTimeout.exit(AsyncTimeout.java:285) ~[graylog.jar:?]
at okio.AsyncTimeout$2.read(AsyncTimeout.java:241) ~[graylog.jar:?]
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:355) ~[graylog.jar:?]
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:227) ~[graylog.jar:?]
at okhttp3.internal.http1.Http1Codec.readHeaderLine(Http1Codec.java:215) ~[graylog.jar:?]
at okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:189) ~[graylog.jar:?]
at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:88) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
at org.graylog2.rest.RemoteInterfaceProvider.lambda$get$0(RemoteInterfaceProvider.java:61) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200) ~[graylog.jar:?]
at okhttp3.RealCall.execute(RealCall.java:77) ~[graylog.jar:?]
at retrofit2.OkHttpCall.execute(OkHttpCall.java:180) ~[graylog.jar:?]
at org.graylog2.rest.resources.cluster.ClusterDeflectorResource.cycle(ClusterDeflectorResource.java:75) ~[graylog.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_212]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_212]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_212]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_212]
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81) ~[graylog.jar:?]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144) ~[graylog.jar:?]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161) ~[graylog.jar:?]
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:143) ~[graylog.jar:?]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99) ~[graylog.jar:?]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389) ~[graylog.jar:?]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347) ~[graylog.jar:?]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102) ~[graylog.jar:?]
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326) [graylog.jar:?]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) [graylog.jar:?]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) [graylog.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:315) [graylog.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:297) [graylog.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:267) [graylog.jar:?]
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317) [graylog.jar:?]
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305) [graylog.jar:?]
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154) [graylog.jar:?]
at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384) [graylog.jar:?]
at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:224) [graylog.jar:?]
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181) [graylog.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.read(SocketInputStream.java:204) ~[?:1.8.0_212]
at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_212]
at okio.Okio$2.read(Okio.java:140) ~[graylog.jar:?]
at okio.AsyncTimeout$2.read(AsyncTimeout.java:237) ~[graylog.jar:?]
… 51 more

And my Elasticsearch log:

[2019-05-20T11:58:17,137][DEBUG][o.e.a.b.TransportShardBulkAction] [bwFrRi0] [graylog_74][1] failed to execute bulk item (index) index {[graylog_deflector][message][8d905261-7af6-11e9-b59b-0050568daa06], source[{“Task”:0,“Keywords”:4611686018429487122,“MachineEnvironment”:“production”,“EventType”:“ERROR”,“gl2_remote_ip”:“”,“gl2_remote_port”:54780,“Opcode”:“Info”,“source”:“”,“gl2_source_input”:“5877c7f3b85fe8038960fafa”,“SeverityValue”:4,“Version”:0,“UserID”:“S-1-5-18”,“gl2_source_node”:“75678ef0-d302-48d8-8b5e-f9a439b9e7d1”,“ProcessID”:5836,“timestamp”:“2019-05-20 11:58:14.000”,“OpcodeValue”:0,“SourceModuleType”:“im_msvistalog”,“level”:3,“Channel”:“Microsoft-Windows-LiveId/Operational”,“streams”:[“000000000000000000000001”],“SourceName”:“Microsoft-Windows-LiveId”,“Severity”:“ERROR”,“message”:“SOAP Request of type Service for user CID ‘NULL’ in production e”,“AccountType”:“User”,“EventReceivedTime”:“2019-05-20 07:58:16”,“SourceModuleName”:“in”,“ProviderGuid”:“{05F02597-FE85-4E67-8542-69567AB8FD4F}”,“full_message”:“SOAP Request of type Service for user CID ‘NULL’ in production environment received the following error code from the Microsoft Account server: 0x80041F0D.”,“ThreadID”:18752,“EventID”:6114,“ErrorCode”:“2147753741”,“Domain”:“NT AUTHORITY”,“RecordNumber”:7415,“AccountName”:“SYSTEM”,“RequestType”:“1”,“cid”:“NULL”}]}
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [graylog_74] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:639) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:520) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:338) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:330) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:231) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-6.7.2.jar:6.7.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
[2019-05-20T11:58:17,141][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [bwFrRi0] failed to put mappings on indices [[[graylog_74/sv3Hf7jYR4SAfS-nv02Q-A]]], type [message]
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [graylog_74] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:639) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:520) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:338) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:330) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:231) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-6.7.2.jar:6.7.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
[2019-05-20T11:58:17,144][DEBUG][o.e.a.b.TransportShardBulkAction] [bwFrRi0] [graylog_74][2] failed to execute bulk item (index) index {[graylog_deflector][message][8d8fdd31-7af6-11e9-b59b-0050568daa06], source[{“Task”:0,“Keywords”:4611686018429487122,“MachineEnvironment”:“production”,“EventType”:“ERROR”,“gl2_remote_ip”:“”,“gl2_remote_port”:54780,“Opcode”:“Info”,“source”:“computer.domain”,“gl2_source_input”:“5877c7f3b85fe8038960fafa”,“SeverityValue”:4,“Version”:0,“UserID”:“S-1-5-18”,“gl2_source_node”:“75678ef0-d302-48d8-8b5e-f9a439b9e7d1”,“ProcessID”:5836,“timestamp”:“2019-05-20 11:58:14.000”,“OpcodeValue”:0,“SourceModuleType”:“im_msvistalog”,“level”:3,“Channel”:“Microsoft-Windows-LiveId/Operational”,“streams”:[“000000000000000000000001”],“SourceName”:“Microsoft-Windows-LiveId”,“Severity”:“ERROR”,“message”:“SOAP Request of type Service for user CID ‘NULL’ in production e”,“AccountType”:“User”,“EventReceivedTime”:“2019-05-20 07:58:16”,“SourceModuleName”:“in”,“ProviderGuid”:“{05F02597-FE85-4E67-8542-69567AB8FD4F}”,“full_message”:“SOAP Request of type Service for user CID ‘NULL’ in production environment received the following error code from the Microsoft Account server: 0x80041F0D.”,“ThreadID”:18752,“EventID”:6114,“ErrorCode”:“2147753741”,“Domain”:“NT AUTHORITY”,“RecordNumber”:7411,“AccountName”:“SYSTEM”,“RequestType”:“1”,“cid”:“NULL”}]}
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [graylog_74] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:639) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:520) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:338) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:330) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:231) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-6.7.2.jar:6.7.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
[2019-05-20T11:58:17,180][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [bwFrRi0] failed to put mappings on indices [[[graylog_74/sv3Hf7jYR4SAfS-nv02Q-A]]], type [message]
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [graylog_74] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:639) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:520) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:338) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:330) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:231) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-6.7.2.jar:6.7.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
[2019-05-20T11:58:17,181][DEBUG][o.e.a.b.TransportShardBulkAction] [bwFrRi0] [graylog_74][3] failed to execute bulk item (index) index {[graylog_deflector][message][8d8f6801-7af6-11e9-b59b-0050568daa06], source[{“Task”:0,“Keywords”:4611686018429487122,“MachineEnvironment”:“production”,“EventType”:“ERROR”,“gl2_remote_ip”:“”,“gl2_remote_port”:54780,“Opcode”:“Info”,“source”:“computer.domain”,“gl2_source_input”:“5877c7f3b85fe8038960fafa”,“SeverityValue”:4,“Version”:0,“UserID”:“S-1-5-18”,“gl2_source_node”:“75678ef0-d302-48d8-8b5e-f9a439b9e7d1”,“ProcessID”:5836,“timestamp”:“2019-05-20 11:58:14.000”,“OpcodeValue”:0,“SourceModuleType”:“im_msvistalog”,“level”:3,“Channel”:“Microsoft-Windows-LiveId/Operational”,“streams”:[“000000000000000000000001”],“SourceName”:“Microsoft-Windows-LiveId”,“Severity”:“ERROR”,“message”:“SOAP Request of type Service for user CID ‘NULL’ in production e”,“AccountType”:“User”,“EventReceivedTime”:“2019-05-20 07:58:16”,“SourceModuleName”:“in”,“ProviderGuid”:“{05F02597-FE85-4E67-8542-69567AB8FD4F}”,“full_message”:“SOAP Request of type Service for user CID ‘NULL’ in production environment received the following error code from the Microsoft Account server: 0x80041F0D.”,“ThreadID”:18752,“EventID”:6114,“ErrorCode”:“2147753741”,“Domain”:“NT AUTHORITY”,“RecordNumber”:7407,“AccountName”:“SYSTEM”,“RequestType”:“1”,“cid”:“NULL”}]}
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [graylog_74] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:639) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:520) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:338) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:330) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:231) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-6.7.2.jar:6.7.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-6.7.2.jar:6.7.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
[2019-05-20T12:12:08,508][INFO ][o.e.c.m.MetaDataIndexTemplateService] [bwFrRi0] adding template [graylog-internal] for index patterns [graylog_*]
[2019-05-20T12:12:08,596][INFO ][o.e.c.m.MetaDataCreateIndexService] [bwFrRi0] [graylog_75] creating index, cause [api], templates [graylog-internal], shards [4]/[0], mappings [message]
[2019-05-20T12:12:08,611][INFO ][o.e.c.r.a.AllocationService] [bwFrRi0] Cluster health status changed from [YELLOW] to [RED] (reason: [index [graylog_75] created]).

Your ES cluster is red - in that status no data is forwarded. You might want to check why that happened; it is not visible from the log:

[2019-05-20T12:12:08,596][INFO ][o.e.c.m.MetaDataCreateIndexService] [bwFrRi0] [graylog_75] creating index, cause [api], templates [graylog-internal], shards [4]/[0], mappings [message]
[2019-05-20T12:12:08,611][INFO ][o.e.c.r.a.AllocationService] [bwFrRi0] Cluster health status changed from [YELLOW] to [RED] (reason: [index [graylog_75] created]).
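
To find out why the cluster went red you could ask Elasticsearch directly, for example (again assuming it listens on localhost:9200):

# list all shards and their state; UNASSIGNED shards are what turn the cluster red
curl -XGET 'http://localhost:9200/_cat/shards?v'

# ask Elasticsearch why a shard is not being allocated
curl -XGET 'http://localhost:9200/_cluster/allocation/explain?pretty'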

But what I see is

Limit of total fields [1000] in index [graylog_74] has been exceeded

You should check whether you really need all those fields and, if you do, split your data into different index sets.
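
If you need a short-term workaround while cleaning up the fields, the per-index field limit can be raised; keep in mind this only postpones the problem. A sketch, assuming the index name from your log and Elasticsearch on localhost:9200:

# inspect the current mapping to see which fields have accumulated
curl -XGET 'http://localhost:9200/graylog_74/_mapping?pretty'

# temporarily raise the field limit for that index (the default is 1000)
curl -XPUT 'http://localhost:9200/graylog_74/_settings' -H 'Content-Type: application/json' -d '{"index.mapping.total_fields.limit": 2000}'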

Thanks. Just to help anyone running into the same thing: someone on the Elasticsearch forum pointed me in the right direction. Shard allocation was still disabled, and in Elasticsearch 6.7.2 the settings request needs to be formatted a bit differently. I had to turn allocation back on with:

curl -X PUT :9200/_cluster/settings -H 'Content-Type: application/json' -d '{"persistent": {"cluster.routing.allocation.enable": "all"}}'
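
In case it helps anyone, the change can be verified afterwards (same assumption about the host):

# the persistent settings should now show cluster.routing.allocation.enable set to "all"
curl -XGET 'http://localhost:9200/_cluster/settings?pretty'

# watch the cluster go from red/yellow back to green as shards are allocated
curl -XGET 'http://localhost:9200/_cluster/health?pretty'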

