Graylog journal is full - losing logs?

Hi!
Let's assume that Graylog processes messages from a RabbitMQ queue. What happens if Elasticsearch becomes unavailable? AFAIK the Graylog journal will fill up, and then, once there is no more space, messages will start to be lost.

My goal here is to not lose a single message, no matter what happens. For example, I would like to block consumption of the RabbitMQ queue if Elasticsearch gets disconnected or becomes unresponsive (for example because of read-only indices).

  1. Can I configure Graylog to block inputs if the journal is full?
  2. Any other ideas to solve the “losing logs” problem?
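
For context, this is how I understand the journal is sized in server.conf (illustrative values only, based on my reading of the docs):

```
# server.conf: disk journal settings (example values, not recommendations)
message_journal_enabled = true
message_journal_dir = data/journal
message_journal_max_size = 5gb
# as I understand it, journal segments older than this may be deleted
message_journal_max_age = 12h
```

What I can't find is a setting that makes an input stop reading from its source once that limit is reached.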

hey @krzysiek

Graylog will not consume logs from the queue if it can’t write them to Elasticsearch.

@jan That is good to hear.
Still… can you elaborate a little bit more? Any keywords to look for to read about how this actually works in detail? Any links / articles about the Graylog journal?

Are you 100% positive that if Elasticsearch is down, not a single log will get into the Graylog journal?

@jan
I made a simple test to check how this actually works.

  1. I created a dedicated queue on RabbitMQ.
  2. I created an input on Graylog which consumes this queue and saves the data to a particular Elasticsearch index set.
  3. Everything works fine, messages are displayed in a Graylog stream.

NOW (test time):
4) I configured a small disk journal (message_journal_max_size = 500mb)
5) I turned off the Elasticsearch service

Result: Graylog shows this notification in the web UI:

> Warning! The journal utilization is exceeding the maximum size defined. [Click here] for more information.
> 173,311 unprocessed messages are currently in the journal, in 6 segments.

The disk journal is now at 120% of its maximum size and Graylog is still consuming messages from the RabbitMQ queue.
Not really what I expected :frowning:

Is there a way to actually stop consuming the queue when the journal reaches its maximum size?
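
A possible workaround (untested sketch): poll the journal status over the REST API and pause / resume the AMQP input from the outside. The endpoint paths (`/system/journal`, `/system/inputstates/{inputId}`), the JSON field names and the token-as-username authentication are assumptions and would need to be checked against the API browser of the running Graylog version first:

```python
# Hypothetical journal watchdog. Endpoint paths, response field names and the
# auth scheme are assumptions; verify them in System -> API Browser before use.
import time

import requests

GRAYLOG_API = "http://graylog.example.org:9000/api"  # assumed base URL
AUTH = ("REPLACE_WITH_API_TOKEN", "token")           # API token as username, "token" as password (assumed)
HEADERS = {"X-Requested-By": "journal-watchdog"}     # required by newer Graylog versions for non-GET requests
INPUT_ID = "REPLACE_WITH_AMQP_INPUT_ID"              # ID of the RabbitMQ input to pause
HIGH_WATER = 0.90                                    # pause the input above 90% journal utilization
LOW_WATER = 0.50                                     # resume once the journal drained below 50%


def journal_utilization() -> float:
    """Return the journal fill level as a fraction of its configured maximum."""
    r = requests.get(f"{GRAYLOG_API}/system/journal", auth=AUTH, headers=HEADERS, timeout=10)
    r.raise_for_status()
    info = r.json()
    # Field names are assumptions; inspect the real response of /system/journal.
    return info["journal_size"] / info["journal_size_limit"]


def set_input_running(running: bool) -> None:
    """Start or stop the input via the input-states resource (paths assumed)."""
    url = f"{GRAYLOG_API}/system/inputstates/{INPUT_ID}"
    if running:
        requests.put(url, auth=AUTH, headers=HEADERS, timeout=10).raise_for_status()
    else:
        requests.delete(url, auth=AUTH, headers=HEADERS, timeout=10).raise_for_status()


if __name__ == "__main__":
    paused = False
    while True:
        utilization = journal_utilization()
        if not paused and utilization >= HIGH_WATER:
            set_input_running(False)  # stop consuming; messages stay in RabbitMQ
            paused = True
        elif paused and utilization <= LOW_WATER:
            set_input_running(True)   # journal has drained, resume consumption
            paused = False
        time.sleep(30)
```

While the input is stopped, new messages should simply stay in the RabbitMQ queue instead of being pulled into an already full journal.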
