Active write index rotation not working in Graylog 2.3.2-1

Using the default values for the index set (see attached screenshot) does not auto-rotate the index set. I have to manually click “rotate active write index”. Am I missing any configuration for managing auto-rotation of indices?

Thanks!

Are you seeing any errors in Graylog’s server.log around the time you expect the rotation to happen?

Thanks for your response. It appears that at around 19 million messages (documents), ES starts to fail. I reduced the message count to 15 million and that worked; at least one rotation executed successfully. I will continue to monitor this over the next few days and report back.

Make sure to check the logs of your Elasticsearch and Graylog nodes for errors and warnings.
:arrow_right: http://docs.graylog.org/en/2.3/pages/configuration/file_location.html
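
If it helps, on a standard DEB/RPM install the server log lives at the path listed in those docs, and the rotation/retention jobs log what they decide. A rough sketch (the path below is the package-install default and may differ on your setup):

    # Follow the Graylog server log (default DEB/RPM location)
    tail -f /var/log/graylog-server/server.log

    # Look for rotation/retention decisions and errors around the expected rotation time
    grep -iE "rotat|retention|error" /var/log/graylog-server/server.log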

Thanks! I am using Docker 17.09-ce on CentOS 7.3+.

I use the Docker setup, and after trying to docker exec into the Graylog container, I get the following error:

Error response from daemon: Cannot link to a non running container: /elasticsearch AS /graylog/elasticsearch

I was able to docker exec into the Elasticsearch container without issues, but I could not find any logs under /var/log.

Also, I set the message count to 17 million and it was unable to rotate the index. So for now, I have reset it to 15 million to see if that works again.

Any help with the Graylog container logs would be much appreciated. I reviewed the file locations link you sent, and none of those log files exist in the Graylog container.

Thanks!
Atul

The official Elasticsearch and Graylog Docker images write their logs to stdout, so they can be viewed via the normal means of accessing Docker container logs.

https://docs.docker.com/engine/admin/logging/view_container_logs/
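
For example, something along these lines (the container names graylog and elasticsearch are assumptions; use whatever docker ps shows for your setup):

    # Follow the Graylog container's output
    docker logs -f --tail 100 graylog

    # Scan the Elasticsearch container's output for errors and warnings
    docker logs elasticsearch 2>&1 | grep -iE "error|warn"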

Ah - thanks - I see “out of space” errors for the disk. So I assume 17 million documents per index is too many; I have set it back to 15 million. Let’s see if old indices get deleted once there are 20 of them over the next week.

Thanks again!

You could also use the size-based rotation strategy to fit the available disk space.
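
To pick a sensible size limit, you could first check how much disk your current indices use via the Elasticsearch cat API, roughly like this (assuming Elasticsearch is reachable on the default port 9200):

    # List indices with their document counts and on-disk size
    curl -s 'http://localhost:9200/_cat/indices?v&h=index,docs.count,store.size'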

Thanks!

Looks like the “device out of space” issue applies to the sum of all indices (active and archived), and that total is limited to 20 million records. We have 400+ GB left on the host. We use Docker, so what do I need to do to allow 20 archived indices, each with 15 million records?

This is displayed when I try to view a stream.

{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}

Here are the ES logs

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
[2017-12-07T02:31:37,228][WARN ][o.e.g.MetaStateService   ] [p_BflMm] [[.triggered_watches/v7_Ise_lT_u7gb-dKJ2ljw]]: failed to write index state
java.nio.file.FileSystemException: /usr/share/elasticsearch/data/nodes/0/indices/v7_Ise_lT_u7gb-dKJ2ljw/_state/state-12.st.tmp: No space left on device
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[?:?]
	at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434) ~[?:1.8.0_141]
	at java.nio.file.Files.newOutputStream(Files.java:216) ~[?:1.8.0_141]
	at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:124) ~[elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.gateway.MetaStateService.writeIndex(MetaStateService.java:132) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.gateway.GatewayMetaState.applyClusterState(GatewayMetaState.java:179) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.gateway.Gateway.applyClusterState(Gateway.java:183) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:814) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:768) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:587) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.5.1.jar:5.5.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
[2017-12-07T02:31:37,229][WARN ][o.e.g.MetaStateService   ] [p_BflMm] [[.watcher-history-3-2017.12.07/Xy_cO2KRRrKLa2qYXrIrHQ]]: failed to write index state
java.nio.file.FileSystemException: /usr/share/elasticsearch/data/nodes/0/indices/Xy_cO2KRRrKLa2qYXrIrHQ/_state/state-5.st.tmp: No space left on device
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[?:?]
	at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434) ~[?:1.8.0_141]
	at java.nio.file.Files.newOutputStream(Files.java:216) ~[?:1.8.0_141]
	at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:124) ~[elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.gateway.MetaStateService.writeIndex(MetaStateService.java:132) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.gateway.GatewayMetaState.applyClusterState(GatewayMetaState.java:179) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.gateway.Gateway.applyClusterState(Gateway.java:183) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:814) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:768) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:587) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.5.1.jar:5.5.1]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.5.1.jar:5.5.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]

But obviously not in the Docker container running Elasticsearch…
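
A quick way to confirm is to check the free space on the Elasticsearch data path inside the container (the path comes straight from the stack traces above; the container name elasticsearch is an assumption):

    # Free space as seen by Elasticsearch, not by the host
    docker exec elasticsearch df -h /usr/share/elasticsearch/data

If that filesystem is full while the host still has 400+ GB free, one option is to bind-mount a larger host directory as the data path when recreating the container, roughly like this (image tag and host path are placeholders; match your existing docker-compose setup):

    docker run -d --name elasticsearch \
      -v /srv/graylog/es-data:/usr/share/elasticsearch/data \
      elasticsearch:5.5.1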
