Messages stuck in Output Buffer

I'm not sure where to look; I'm very new to Graylog and it's overwhelming.
My messages seem to be stuck in the Output Buffer, though.
I've tried updating to the latest version but no joy. I'm running Graylog v4.0 with Elasticsearch 6.8.15.

Thanks for any help

If messages are stuck in your output buffer, that indicates an issue with Elasticsearch. What's showing up in your ES logs? Is it running? How much heap have you assigned to it? If you're running with the default amount of heap (1 GB) but ingesting more than roughly 300-400 events per second, you're probably going to need to bump up your heap.
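If you do end up raising the heap, it's set in Elasticsearch's jvm.options file. A minimal sketch (the path below is typical for package installs but varies by install method; 2g is only an example value — keep -Xms and -Xmx equal and below about half of the machine's RAM):

```
# /etc/elasticsearch/jvm.options (path varies by install method)
-Xms2g
-Xmx2g
```

Restart the elasticsearch service afterwards for the change to take effect.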


A lot of this is set to the defaults because I'm new and hoped that would work!
Elasticsearch is running on the server and shows as green under System > Overview.

It looks as though the heap is 1 GB at the moment.
The logs are filled with lines like these:

2021-03-12T15:29:00.819+0000: 5707014.492: Total time for which application threads were stopped: 0.0073448 seconds, Stopping threads took: 0.0068589 seconds
2021-03-12T15:29:00.825+0000: 5707014.497: Total time for which application threads were stopped: 0.0019696 seconds, Stopping threads took: 0.0013229 seconds
2021-03-12T15:29:00.826+0000: 5707014.499: Total time for which application threads were stopped: 0.0002387 seconds, Stopping threads took: 0.0000393 seconds
2021-03-12T15:29:00.828+0000: 5707014.501: Total time for which application threads were stopped: 0.0002650 seconds, Stopping threads took: 0.0000344 seconds
2021-03-12T15:29:00.829+0000: 5707014.501: Total time for which application threads were stopped: 0.0002454 seconds, Stopping threads took: 0.0000465 seconds
2021-03-12T15:29:00.829+0000: 5707014.502: Total time for which application threads were stopped: 0.0002208 seconds, Stopping threads took: 0.0000357 seconds
2021-03-12T15:29:00.830+0000: 5707014.502: Total time for which application threads were stopped: 0.0002543 seconds, Stopping threads took: 0.0000384 seconds
2021-03-12T15:29:00.831+0000: 5707014.503: Total time for which application threads were stopped: 0.0002458 seconds, Stopping threads took: 0.0000335 seconds
2021-03-12T15:29:00.832+0000: 5707014.504: Total time for which application threads were stopped: 0.0002091 seconds, Stopping threads took: 0.0000323 seconds
2021-03-12T15:29:00.851+0000: 5707014.524: Total time for which application threads were stopped: 0.0078361 seconds, Stopping threads took: 0.0070470 seconds
2021-03-12T15:29:00.862+0000: 5707014.534: Total time for which application threads were stopped: 0.0055514 seconds, Stopping threads took: 0.0050861 seconds
2021-03-12T15:29:01.133+0000: 5707014.806: Total time for which application threads were stopped: 0.0868838 seconds, Stopping threads took: 0.0864663 seconds
Heap
par new generation total 153344K, used 85970K [0x00000000c0000000, 0x00000000ca660000, 0x00000000ca660000)
eden space 136320K, 62% used [0x00000000c0000000, 0x00000000c5354dd0, 0x00000000c8520000)
from space 17024K, 3% used [0x00000000c95c0000, 0x00000000c965fb40, 0x00000000ca660000)
to space 17024K, 0% used [0x00000000c8520000, 0x00000000c8520000, 0x00000000c95c0000)
concurrent mark-sweep generation total 878208K, used 678728K [0x00000000ca660000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 72748K, capacity 78693K, committed 78956K, reserved 1118208K
class space used 8788K, capacity 10376K, committed 10472K, reserved 1048576K
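For what it's worth, that heap summary can be read directly: the CMS old generation is 678728K used out of 878208K total, i.e. roughly 77% of a ~1 GB heap — which fits the "bump up your heap" advice. A quick one-liner (hypothetical, just parsing the log line quoted above) to get that figure:

```shell
# Parse the CMS old-gen summary line from the GC log and print % utilization.
# gsub strips the "K"/"K," suffixes so the size fields become plain numbers.
line='concurrent mark-sweep generation total 878208K, used 678728K'
echo "$line" | awk '{gsub(/K,?/,""); printf "%.0f%%\n", 100*$7/$5}'
```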

Another log file has these in:

2021-05-21T16:21:57.858+0100: 13068.096: Total time for which application threads were stopped: 0.0081460 seconds, Stopping threads took: 0.0001209 seconds
2021-05-21T16:22:46.863+0100: 13117.101: Total time for which application threads were stopped: 0.0002847 seconds, Stopping threads took: 0.0000705 seconds
2021-05-21T16:23:46.876+0100: 13177.114: Total time for which application threads were stopped: 0.0002707 seconds, Stopping threads took: 0.0001028 seconds
2021-05-21T16:24:46.898+0100: 13237.136: Total time for which application threads were stopped: 0.0002432 seconds, Stopping threads took: 0.0000627 seconds
2021-05-21T16:26:33.530+0100: 13343.768: [GC (Allocation Failure) 2021-05-21T16:26:33.530+0100: 13343.768: [ParNew
Desired survivor size 17432576 bytes, new threshold 6 (max 6)
- age 1: 298408 bytes, 298408 total
- age 2: 1960 bytes, 300368 total
- age 3: 960 bytes, 301328 total
- age 4: 32 bytes, 301360 total
- age 5: 440 bytes, 301800 total
- age 6: 88 bytes, 301888 total
: 274304K->335K(306688K), 0.0061904 secs] 663583K->389615K(1014528K), 0.0062835 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2021-05-21T16:26:33.536+0100: 13343.774: Total time for which application threads were stopped: 0.0066506 seconds, Stopping threads took: 0.0000963 seconds
2021-05-21T16:28:23.572+0100: 13453.811: Total time for which application threads were stopped: 0.0002376 seconds, Stopping threads took: 0.0000776 seconds
2021-05-21T16:30:16.590+0100: 13566.828: Total time for which application threads were stopped: 0.0003281 seconds, Stopping threads took: 0.0001308 seconds
2021-05-21T16:31:11.991+0100: 13622.230: [GC (Allocation Failure) 2021-05-21T16:31:11.992+0100: 13622.230: [ParNew
Desired survivor size 17432576 bytes, new threshold 6 (max 6)
- age 1: 293120 bytes, 293120 total
- age 2: 6464 bytes, 299584 total
- age 3: 1416 bytes, 301000 total
- age 6: 440 bytes, 301440 total
: 272686K->326K(306688K), 0.0056541 secs] 661965K->389606K(1014528K), 0.0057866 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]

Sorry - bombarding you with logs. This might be something though:

2021-05-21T16:33:36.166+01:00 WARN [IndexRotationThread] Deflector is pointing to [gl-system-events_16], not the newest one: [gl-system-events_17]. Re-pointing.
2021-05-21T16:33:36.167+01:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
org.graylog2.indexer.ElasticsearchException: Couldn't switch alias gl-system-events_deflector from index gl-system-events_16 to index gl-system-events_17

blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
at org.graylog.storage.elasticsearch6.jest.JestUtils.specificException(JestUtils.java:122) ~[?:?]
at org.graylog.storage.elasticsearch6.jest.JestUtils.execute(JestUtils.java:65) ~[?:?]
at org.graylog.storage.elasticsearch6.jest.JestUtils.execute(JestUtils.java:70) ~[?:?]
at org.graylog.storage.elasticsearch6.IndicesAdapterES6.cycleAlias(IndicesAdapterES6.java:591) ~[?:?]
at org.graylog2.indexer.indices.Indices.cycleAlias(Indices.java:318) ~[graylog.jar:?]
at org.graylog2.indexer.MongoIndexSet.pointTo(MongoIndexSet.java:354) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:166) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_292]
at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_292]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_292]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_292]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_292]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_292]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_292]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]

Problem solved - I ran the command below, which fixed one issue (note the straight quotes; the JSON body needs exactly one pair of braces):

curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete" : false }'
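For anyone hitting the same FORBIDDEN/12/index read-only / allow delete error: Elasticsearch 6.x applies the index.blocks.read_only_allow_delete block automatically when the data disk crosses the flood-stage watermark (95% full by default), which is why Graylog couldn't rotate the index. The relevant defaults, shown as an elasticsearch.yml fragment for reference (these are the stock values, not settings you need to add):

```
# elasticsearch.yml – disk-based shard allocation watermarks (ES 6.x defaults)
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 95%
```

Freeing disk space (or raising the watermarks) is needed as well — if the disk stays above the flood stage, ES will simply re-apply the block after you clear it.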

Then I had to increase the heap size, and now we're working again :slight_smile:


Ah! Glad you were able to find that :slight_smile:
