Logs do not appear instantly in Graylog search

I am using Graylog 3, Elasticsearch 5.6, and Filebeat 5.6.
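The Filebeat instance presumably ships to a Graylog Beats input; for Filebeat 5.6 a minimal filebeat.yml looks roughly like the sketch below (the log paths and the host/port are placeholders, not the actual production values):

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app/*.log        # placeholder path

output.logstash:
  # the Graylog Beats input; 5044 is only the conventional default port
  hosts: ["graylog.example.com:5044"]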

I installed Filebeat on our production log server, which receives 2+ GB of logs per hour. After starting Filebeat on that remote log server, the Graylog application raised the alerts below:

Journal utilization is too high
Uncommited messages deleted from journal
and the following entries appeared in server.log:

2019-07-09T11:10:41.576+02:00 WARN  [KafkaJournal] Journal utilization (105.0%) has gone over 95%.
2019-07-09T11:10:43.319+02:00 INFO  [KafkaJournal] Read offset 101601551 before start of log at 102050333, starting to read from the beginning of the journal.
2019-07-09T11:11:41.817+02:00 WARN  [KafkaJournal] Journal utilization (103.0%) has gone over 95%.
2019-07-09T11:11:41.822+02:00 INFO  [KafkaJournal] Read offset 102174535 before start of log at 102234700, starting to read from the beginning of the journal.
2019-07-09T11:12:41.613+02:00 WARN  [KafkaJournal] Journal utilization (106.0%) has gone over 95%.
2019-07-09T11:12:41.721+02:00 INFO  [KafkaJournal] Read offset 102375909 before start of log at 102785574, starting to read from the beginning of the journal.
2019-07-09T11:13:42.357+02:00 WARN  [KafkaJournal] Journal utilization (108.0%) has gone over 95%.
2019-07-09T11:13:42.471+02:00 INFO  [KafkaJournal] Read offset 102884614 before start of log at 103524467, starting to read from the beginning of the journal.
2019-07-09T11:14:42.538+02:00 WARN  [KafkaJournal] Journal utilization (107.0%) has gone over 95%.
2019-07-09T11:14:44.976+02:00 INFO  [KafkaJournal] Read offset 103639458 before start of log at 104256029, starting to read from the beginning of the journal.
2019-07-09T11:15:42.855+02:00 WARN  [KafkaJournal] Journal utilization (107.0%) has gone over 95%.
2019-07-09T11:15:42.920+02:00 INFO  [KafkaJournal] Read offset 104377272 before start of log at 104803020, starting to read from the beginning of the journal.
2019-07-09T11:16:41.813+02:00 WARN  [KafkaJournal] Journal utilization (108.0%) has gone over 95%.
2019-07-09T11:16:41.894+02:00 INFO  [KafkaJournal] Read offset 104919008 before start of log at 105533762, starting to read from the beginning of the journal.
2019-07-09T11:17:41.732+02:00 WARN  [KafkaJournal] Journal utilization (107.0%) has gone over 95%.
2019-07-09T11:17:41.829+02:00 INFO  [KafkaJournal] Read offset 105656889 before start of log at 106081359, starting to read from the beginning of the journal.
2019-07-09T11:18:42.539+02:00 WARN  [KafkaJournal] Journal utilization (109.0%) has gone over 95%.
2019-07-09T11:18:42.596+02:00 INFO  [KafkaJournal] Read offset 106177361 before start of log at 106814963, starting to read from the beginning of the journal.
2019-07-09T11:19:41.580+02:00 WARN  [KafkaJournal] Journal utilization (108.0%) has gone over 95%.
2019-07-09T11:19:43.146+02:00 INFO  [KafkaJournal] Read offset 106929802 before start of log at 107551218, starting to read from the beginning of the journal.
2019-07-09T11:20:41.599+02:00 WARN  [KafkaJournal] Journal utilization (107.0%) has gone over 95%.
2019-07-09T11:20:41.649+02:00 INFO  [KafkaJournal] Read offset 107661668 before start of log at 108273302, starting to read from the beginning of the journal.
2019-07-09T11:21:41.660+02:00 WARN  [KafkaJournal] Journal utilization (108.0%) has gone over 95%.
2019-07-09T11:21:41.721+02:00 INFO  [KafkaJournal] Read offset 108366905 before start of log at 109005791, starting to read from the beginning of the journal.
2019-07-09T11:22:41.686+02:00 WARN  [KafkaJournal] Journal utilization (107.0%) has gone over 95%.
2019-07-09T11:22:41.756+02:00 INFO  [KafkaJournal] Read offset 109119321 before start of log at 109552105, starting to read from the beginning of the journal.
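These warnings mean the on-disk journal is filling faster than Graylog can process messages and flush them to Elasticsearch, so the oldest uncommitted messages are dropped. The journal is sized by a few settings in Graylog's server.conf; raising the size limit (the default is 5gb) only buys headroom while the real bottleneck is found. The values below are illustrative, not taken from this setup:

# server.conf journal settings (example values, not recommendations)
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
message_journal_max_age = 12h
message_journal_max_size = 20gb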

After I saw these messages, I added another node to the Elasticsearch cluster, but the errors still occur. Messages do appear in the Graylog search, but with a delay.

Check your buffers within Graylog. It may be that your process buffer is running at 100%.

Check in: System > Nodes > Details.
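The same figures are also exposed through the Graylog REST API as metrics, which is handy for watching them over time. A minimal Python sketch, assuming the API is reachable on port 9000 and that the standard buffer metric names apply (URL and credentials are placeholders):

# Sketch: read buffer usage/size metrics from the Graylog REST API.
# URL, credentials and metric names are assumptions; adjust as needed.
import requests

API = "http://graylog.example.com:9000/api"
AUTH = ("admin", "password")  # placeholder credentials

for buf in ("input", "process", "output"):
    for kind in ("usage", "size"):
        metric = f"org.graylog2.buffers.{buf}.{kind}"
        r = requests.get(f"{API}/system/metrics/{metric}",
                         auth=AUTH,
                         headers={"Accept": "application/json"})
        r.raise_for_status()
        # usage approaching size means that buffer is (nearly) full
        print(metric, r.json())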

Yes, it is at 100%. Any suggestions on how to solve this?

It looks like Graylog can't ingest data into Elasticsearch. The Graylog server.log and the Elasticsearch log file might reveal the reason for that.

@jan would that not be the case if the output buffer was at 100%?

From my testing, it appears that output buffer usage indicates resource exhaustion on the Elasticsearch side, while process buffer usage points to resource constraints on the Graylog side.
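If it is the output buffer that fills up, a quick way to confirm pressure on the Elasticsearch side is to look at cluster health and at rejections on the bulk thread pool (the pool is still called "bulk" in ES 5.6). A rough Python sketch, with the node address as a placeholder:

# Sketch: check Elasticsearch cluster health and bulk thread-pool rejections.
import requests

ES = "http://es-node1.example.com:9200"  # placeholder node address

health = requests.get(f"{ES}/_cluster/health").json()
print("cluster status:", health["status"])  # green / yellow / red

# a growing 'rejected' count usually means ES cannot keep up with bulk indexing
pool = requests.get(f"{ES}/_cat/thread_pool/bulk",
                    params={"v": "true", "h": "node_name,active,queue,rejected"})
print(pool.text)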

@AmrAbdelFattah I would look at the timing metrics of your process buffer and see if anything there could be causing the buffer usage, such as extractors on inputs or pipeline rules. If all the timings look reasonable, consider giving more resources to the graylog-server process.
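If the per-message timings look fine and the process buffer is simply outrunning the hardware, the usual knobs are the processor counts in server.conf and the JVM heap of the graylog-server process. The values below are examples only; the right numbers depend on the CPU cores and RAM actually available:

# server.conf (defaults are 5 and 3; raise only if spare CPU cores exist)
processbuffer_processors = 8
outputbuffer_processors = 4
output_batch_size = 1000

# /etc/default/graylog-server: raise the heap, keep the other shipped JVM flags
GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g"

Note that output_batch_size and outputbuffer_processors mostly help when the output side is the limit; for a full process buffer, the processbuffer_processors count and overall CPU matter most.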

@Ponet, I deleted all unused extractors and the issue still exists. Also, after I stopped Filebeat on the remote server, I don't see messages in the Graylog search unless I restart the Graylog application!
