Graylog, log problem

#1

Hey, I've been running Graylog for about two months with no problems: a 4-core CPU and 8 GB of RAM (both around 80% utilized). Logs are not overwritten; I want to keep everything. Indices rotate at 15 GB each, and I'm currently at about 150 GB of disk usage.

A few days ago the CPU started spiking to 100%. I can't view the logs and the browser hangs. Logs do seem to be coming in (I have 4 nodes), but not as many as before, which is strange …
It keeps showing messages like these:


#2

Hi there,

looks like your Elasticsearch cluster has some issues, as its status is RED.
What does the output of the following command look like:

curl elasticsearchhost:9200/_cat/health

#3

Here is the output:

1551304046 22:47:26 graylog red 2 1 165 165 0 4 3 0 - 95%

{
  "cluster_name" : "graylog",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 165,
  "active_shards" : 165,
  "relocating_shards" : 0,
  "initializing_shards" : 4,
  "unassigned_shards" : 3,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 95.93023255813954
}
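As a sanity check, the numbers in this output are internally consistent: 165 active + 4 initializing + 3 unassigned = 172 shards, and 165/172 ≈ 95.93%, which matches active_shards_percent_as_number:

```shell
# 165 active shards out of (165 + 4 + 3) = 172 total
awk 'BEGIN { printf "%.2f\n", 165 / (165 + 4 + 3) * 100 }'
```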


(Jan Doberstein) #4

You should check your Elasticsearch log files.

New shards cannot be allocated, and the Elasticsearch log file should show you the reason for that.


#5

Check the disk space on your ES cluster.
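A quick way to do that (a sketch; /var/lib/elasticsearch is the package default data path, so adjust it if your path.data points elsewhere):

```shell
# Free space on the filesystem holding the Elasticsearch data directory;
# falls back to / if the default path does not exist on this machine.
df -h /var/lib/elasticsearch 2>/dev/null || df -h /
```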


#6

(the post contained only a screenshot, which is not preserved in this transcript)

(Jan Doberstein) #7

And now the question: what is the configured path.data in your Elasticsearch elasticsearch.yml?

By the way, just providing the output of the command df -h would be more readable than what you posted …


#8
 # Path to directory where to store the data:
 #path.data: /path/to/data
 # Path to log files:
 #path.logs: /path/to/logs

(Jan Doberstein) #9

Then it defaults to /var/lib/elasticsearch, which is part of your / partition, and that partition is completely full.


#10

Can you explain how to change it without losing the logs I already have? I'm not very experienced with Linux.


#11

Open your favorite text editor and set path.data to somewhere you have enough space, restart the Elasticsearch service, and say hello to your logs.
After that, find the 'restore Graylog log database' chapter in your disaster recovery plan, and implement it.
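The change itself is a single line in elasticsearch.yml; a minimal sketch, assuming a new mount at /data with enough free space (the path is hypothetical):

```yaml
# /etc/elasticsearch/elasticsearch.yml
path.data: /data/elasticsearch
```

To keep the existing indices, stop the elasticsearch service first, copy the old directory to the new location (e.g. with `rsync -a /var/lib/elasticsearch/ /data/elasticsearch/`), make sure it is owned by the elasticsearch user, and only then start the service again.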


#12

I have one partition with everything on it. What should I change the storage location to, for example?

#path.data: /dev/sda1/media
#path.logs: /dev/sda1/media


#13

I can’t suggest more at this time.


#14

Sorry, here is a screenshot:


#15

:smiley:
Your previous pictures show a different view, e.g. check your / mount…

In this case check your Elasticsearch process and check its log.


#16

/var/log/elasticsearch/graylog.log-2019-02-27

[2019-02-27 00:03:05,750][WARN ][indices.cluster ] [Kurse] [[graylog_299][1]] marking and sending shard failed due to [failed recovery]
[graylog_299][[graylog_299][1]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: OutOfMemoryError[Java heap space];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: [graylog_299][[graylog_299][1]] EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: OutOfMemoryError[Java heap space];
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:174)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1513)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1497)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:970)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:942)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
… 5 more
Caused by: [graylog_299][[graylog_299][1]] EngineException[failed to recover from translog]; nested: OutOfMemoryError[Java heap space];


#17

Hurray,
you found the problem, so now you can solve it.


#18

I mounted a 2 TB drive for it (it basically runs on Proxmox) and removed the rotation limit; I wanted to keep everything in 15 GB indices until the disk reaches 2 TB, but it got stuck at around 500 GB because that is already too much for it? Do I understand that correctly?


(Ben van Staveren) #19

Emphasis added by myself. Your disks aren’t full, but your Java heap is. Allocate more memory to Elasticsearch.

P.S. Please, for the love of whatever deity you prefer, read the freaking error messages. The answer was right there, and you missed it. And it's not like it was hard to spot. Understand your problem first, before asking questions? It helps us help you, because right now all it does is evoke a facepalm moment.
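On the older Elasticsearch 1.x/2.x packages that produce a log format like the one above, the heap is usually raised via ES_HEAP_SIZE in the service's environment file (on 5.x and later this moved to -Xms/-Xmx in jvm.options). A sketch, assuming a Debian-style package install on the 8 GB host from post #1:

```
# /etc/default/elasticsearch (Debian/Ubuntu) or /etc/sysconfig/elasticsearch (RHEL)
# Common rule of thumb: about half the machine's RAM for the ES heap,
# leaving the rest for the OS file cache and other services.
ES_HEAP_SIZE=4g
```

Restart Elasticsearch afterwards. If Graylog and MongoDB share the same 8 GB host, a smaller value (e.g. 2-3 GB) may be safer so they are not starved of memory.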


#20

Thank you very much; you're the only one who showed me the way. Now I'll look for where and how to change it. Logs are being collected now, but I can't see them:

Uncommited messages deleted from journal (triggered 13 minutes ago)
Some messages were deleted from the Graylog journal before they could be written to Elasticsearch. Please verify that your Elasticsearch cluster is healthy and fast enough. You may also want to review your Graylog journal settings and set a higher limit. (Node: abe4768a-f7a4-4b54-86c3-756cd9b8d8b9)
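The journal limits that this notification refers to live in Graylog's server.conf; a sketch of the relevant settings (the values here are illustrative, not recommendations):

```
# /etc/graylog/server/server.conf
# Maximum size of the on-disk message journal before old, uncommitted
# messages are dropped (the default is 5gb).
message_journal_max_size = 10gb
# Maximum age of journalled messages (the default is 12h).
message_journal_max_age = 12h
```

Raising the limit only buys time, though; the underlying fix is still making Elasticsearch healthy enough to drain the journal faster than messages arrive.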
