Problems in Graylog with the disk journal and process buffer

hi everyone,

I'm running Graylog 3.2.6 and I'm seeing the following errors:

  • I only have 1 node
  • Process buffer → 65536 messages in process buffer, 100.00% utilized.
  • Output buffer → 65536 messages in output buffer, 100.00% utilized.
  • Disk Journal → 101.51%

3,704,904 unprocessed messages are currently in the journal, in 53 segments.
0 messages have been appended in the last second, 0 messages have been read in the last second.

  • Memory/Heap usage → The JVM is using 816.8MiB of 972.8MiB heap space and will not attempt to use more than 972.8MiB.

Finally, I have configured a stream and an index set with:
  • Shards: 4
  • Replicas: 0
  • Field type refresh interval: 5 seconds
  • Index rotation strategy: Index Size
  • Max index size: 1073741824 bytes (1.0GiB)
  • Index retention strategy: Delete
  • Max number of indices: 10
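
With this rotation and retention policy the index set keeps at most 10 indices × 1 GiB ≈ 10 GiB of index data on disk. Note that the disk journal is separate from this: it lives on the Graylog node itself and is capped by message_journal_max_size, not by index retention.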

And for Graylog itself:

  • CPU consumption of the Graylog Java process: 86%
  • the machine has 4 CPUs and 8 GB of memory, and the JVM heap is set to 4 GB
  • outputbuffer_processors = 3
  • processbuffer_processors = 5

More data (Elasticsearch cluster health):

"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 84,
"active_shards" : 84,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0

More data (Elasticsearch cluster stats):
"size_in_bytes" : 6708393602,
"memory_size_in_bytes" : 9405192,
"mem" : {
  "total_in_bytes" : 8345530368,
  "free_in_bytes" : 3799244800,
  "used_in_bytes" : 4546285568,
  "free_percent" : 46,
  "used_percent" : 54
},

"process" : {
  "cpu" : {
    "percent" : 0
  },
  "open_file_descriptors" : {
    "min" : 2040,
    "max" : 2040,
    "avg" : 2040
  },
  "mem" : {
    "heap_used_in_bytes" : 336385816,
    "heap_max_in_bytes" : 3186360320
  },
  "threads" : 48
}

Many many thanks,

Your disk journal is full. What does disk utilization look like on that box? If your disk is full, then Graylog's gonna have a bad time.

hi,
sorry, how can I get you the information you're asking for? What I can tell you is that I don't have these parameters configured in Graylog. Should I configure them?

#message_journal_max_age = 12h
#message_journal_max_size = 5gb
#message_journal_flush_age = 1m
#message_journal_flush_interval = 1000000
#message_journal_segment_age = 1h
#message_journal_segment_size = 100mb
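
For reference, those commented lines are the shipped defaults. If the journal keeps filling up faster than Graylog can process it, one option is to raise its size cap. A minimal sketch, assuming the usual package config at /etc/graylog/server/server.conf (the 10gb value is only an example; size it to the disk that holds the journal and restart graylog-server afterwards):

# example only: let the journal buffer more data before dropping messages
message_journal_max_size = 10gb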

thanks,


Hey there. Run df -h on that system. What does it say? We’re not talking about Graylog itself at this point. It’s your system’s disk.
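
For example, assuming a package install where the journal lives under /var/lib/graylog-server/journal (check message_journal_dir in server.conf for the actual path):

df -h /var/lib/graylog-server/journal
du -sh /var/lib/graylog-server/journal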

hi,
There are no space problems; the filesystems are 44% free.

regards.

hi everyone,

I have already managed to bring the disk journal back down by increasing its size. But now the big question is:
How can I clear the buffers? Or how can I tune them so they are not at 100%?

  • Output buffer → 65536 messages in output buffer, 100.00% utilized. → solution ?
  • Process buffer → 65536 messages in process buffer, 100.00% utilized. → solution ?

output_batch_size = 500
processbuffer_processors = 4
outputbuffer_processors = 7

Input → Receive Buffer Size → 1048576 → should I increase it?
Can you please help me?

Many thanks,

Hey @elpedrop

You should check the Elasticsearch log.

I guess the Elasticsearch worker queue is full, as your Graylog is connecting with up to 7 connections at the same time.

Lower outputbuffer_processors to 3 and raise output_batch_size to 1500; this should allow your Graylog to hand the messages over to Elasticsearch in time.
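
A minimal sketch of the corresponding server.conf changes, using the values suggested above (graylog-server needs a restart for them to take effect):

outputbuffer_processors = 3
output_batch_size = 1500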

In addition, you should create a custom mapping for Elasticsearch that sets the index refresh interval to ~30 seconds. That will give you an overall performance boost.

hi Jan,

How can I make that index change in Elasticsearch? Is it done dynamically, or in the elasticsearch.yml file?

Many thanks,

Hey @elpedrop

You have multiple options for that. One is via a custom mapping, as described here:

https://docs.graylog.org/en/4.0/pages/configuration/elasticsearch.html#custom-index-mappings

but your favorite search engine will give you more options.
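
As one illustration (a sketch only, not from this thread): on Elasticsearch 6.x the refresh interval can be set for future Graylog indices through a legacy index template, or changed dynamically on existing indices. The template name graylog-custom-settings and the graylog_* pattern below are assumptions; adjust them to your index prefix, and keep the order higher than that of Graylog's own template:

# applies to indices created after the next rotation
curl -X PUT "http://localhost:9200/_template/graylog-custom-settings" \
  -H 'Content-Type: application/json' \
  -d '{
        "index_patterns": ["graylog_*"],
        "order": 1,
        "settings": { "index": { "refresh_interval": "30s" } }
      }'

# or change it on the existing indices right away
curl -X PUT "http://localhost:9200/graylog_*/_settings" \
  -H 'Content-Type: application/json' \
  -d '{ "index": { "refresh_interval": "30s" } }'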

hi,
sorry, it seems to be working already.

Many thanks,
