Hi, I've been running Graylog for about two months with no problems: a 4-core CPU and 8 GB of RAM, with roughly 80% of it in use. Logs are not overwritten; I want to keep everything. Indices rotate at 15 GB each, and I'm currently at about 150 GB of disk usage.
A few days ago the CPU went crazy and jumped to 100%. I can't view the logs any more; the browser hangs. Logs do seem to be coming in (I have 4 nodes), but strangely not nearly as many as before.
Where you have enough space, restart the Elasticsearch service, and say hello to your logs.
After that, find the 'restore Graylog log database' chapter in your disaster recovery plan, and implement it.
It keeps spewing messages like these:
[2019-02-27 00:03:05,750][WARN ][indices.cluster ] [Kurse] [[graylog_299][1]] marking and sending shard failed due to [failed recovery]
[graylog_299][[graylog_299][1]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: OutOfMemoryError[Java heap space];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: [graylog_299][[graylog_299][1]] EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: OutOfMemoryError[Java heap space];
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:174)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1513)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1497)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:970)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:942)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
… 5 more
Caused by: [graylog_299][[graylog_299][1]] EngineException[failed to recover from translog]; nested: OutOfMemoryError[Java heap space];
I mounted a 2 TB drive in it (the whole thing runs on Proxmox) and removed the overwrite option; I wanted to keep everything, in 15 GB indices, until the 2 TB fills up. And it choked at around 500 GB because that's too much for it? Am I understanding this correctly?
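For reference, that kind of size-based rotation is configured roughly like this in a Graylog 2.x server.conf (setting names from that era, values illustrative; newer Graylog versions configure rotation and retention per index set in the web UI):

# rotate the active write index once it reaches ~15 GB (value in bytes)
rotation_strategy = size
elasticsearch_max_size_per_index = 16106127360
# keep old indices instead of deleting or closing them
# ("none" on Graylog 2.x; older versions only offered delete/close here)
retention_strategy = none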
Emphasis added by myself: OutOfMemoryError[Java heap space]. Your disks aren't full, but your Java heap is. Allocate more memory to Elasticsearch.
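On Elasticsearch 2.x, which the stack trace above suggests you're running, the heap is set via the ES_HEAP_SIZE environment variable; a minimal sketch, assuming a Debian-style package install (the file is /etc/sysconfig/elasticsearch on RHEL, and ES 5.x+ uses jvm.options instead):

# /etc/default/elasticsearch
# give Elasticsearch about half the machine's RAM, e.g. 4 GB of your 8 GB;
# leave the rest for Graylog/MongoDB if they share this box
ES_HEAP_SIZE=4g

Then restart and confirm the new heap actually took effect:

sudo systemctl restart elasticsearch
curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep heap_max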
P.S. Please, for the love of whatever deity you prefer, read the freaking error messages. The answer was right there, and you missed it, and it's not like it was easy to miss. Understand your problem first, before asking questions; it helps us help you, because right now all this does is evoke a facepalm moment.
Thank you very much; you're the only one who showed me the way. I'll now look for where to change it. Logs are being collected again, but I still can't see them:
Uncommited messages deleted from journal (triggered 13 minutes ago)
Some messages were deleted from the Graylog journal before they could be written to Elasticsearch. Please verify that your Elasticsearch cluster is healthy and fast enough. You may also want to review your Graylog journal settings and set a higher limit. (Node: abe4768a-f7a4-4b54-86c3-756cd9b8d8b9)
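That notification means Graylog's on-disk journal filled up while Elasticsearch was down or too slow, so the oldest unwritten messages were dropped. Two things worth doing, sketched below (the cluster is assumed to answer on localhost:9200, and the journal values are examples, not recommendations):

# is Elasticsearch healthy again after the heap increase?
curl -s 'localhost:9200/_cluster/health?pretty'

And in /etc/graylog/server/server.conf, give the journal more headroom so a slow or downed Elasticsearch doesn't cost you messages:

message_journal_enabled = true
# default is 5gb
message_journal_max_size = 10gb
# default is 12h
message_journal_max_age = 24h

The journal only buffers; once Elasticsearch is healthy again, the backlog flushes on its own.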