Graylog won't start

Hi,

I can't start Graylog, and I see these warnings and errors:

2018-07-30T10:08:16.081+02:00 WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=frghcslnetv04, type=org.graylog.plugins.beats.BeatsInput, nodeId=82702595-8116-4676-a17d-ea606560cb75} should be 1048576 but is 212992.
2018-07-30T10:08:16.088+02:00 INFO  [InputStateListener] Input [Beats/5b27800b526a9b0c14ef077d] is now RUNNING
2018-07-30T10:08:16.101+02:00 INFO  [KafkaJournal] Read offset 284769557 before start of log at 290327808, starting to read from the beginning of the journal.
2018-07-30T10:08:23.229+02:00 INFO  [KafkaJournal] Read offset 284769557 before start of log at 290327808, starting to read from the beginning of the journal.
2018-07-30T10:10:00.628+02:00 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2018-07-30T10:10:00.628+02:00 INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2018-07-30T10:11:30.584+02:00 WARN  [KafkaJournal] Journal utilization (108.0%) has gone over 95%.
2018-07-30T10:11:30.586+02:00 INFO  [KafkaJournal] Journal usage is 108.00% (threshold 100%), changing load balancer status from ALIVE to THROTTLED
2018-07-30T10:12:30.591+02:00 WARN  [KafkaJournal] Journal utilization (136.0%) has gone over 95%.
2018-07-30T10:13:30.578+02:00 WARN  [KafkaJournal] Journal utilization (137.0%) has gone over 95%.
2018-07-30T10:14:30.583+02:00 WARN  [KafkaJournal] Journal utilization (136.0%) has gone over 95%.
2018-07-30T10:15:30.584+02:00 WARN  [KafkaJournal] Journal utilization (135.0%) has gone over 95%.

How can I solve this?
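For the receiveBufferSize (SO_RCVBUF) warning, a common remedy (a general suggestion, not something confirmed in this thread) is to raise the kernel's maximum receive buffer so the Beats input can actually get the 1048576 bytes it requests. A minimal sketch:

# Raise the kernel receive buffer limit to the size the input asks for (value taken from the warning above)
sysctl -w net.core.rmem_max=1048576
# Make the change persistent across reboots (the file name is an arbitrary choice)
echo 'net.core.rmem_max = 1048576' >> /etc/sysctl.d/99-graylog.conf
sysctl -p /etc/sysctl.d/99-graylog.conf

This only silences the buffer warning; the journal utilization messages further down point to Elasticsearch not accepting messages, which is the more pressing issue.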

Hi Salma,
Can you show us the Graylog service status (e.g. systemctl status graylog-server.service) so we can see what's going on?

[root@frghcslnetv11 ~]# systemctl status graylog-server.service
● graylog-server.service - Graylog server
   Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-07-30 11:53:20 CEST; 2h 10min ago
     Docs: http://docs.graylog.org/
 Main PID: 23215 (graylog-server)
    Tasks: 122
   CGroup: /system.slice/graylog-server.service
           ├─23215 /bin/sh /usr/share/graylog-server/bin/graylog-server
           └─23216 /usr/bin/java -Xms4g -Xmx4g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+U...

Jul 30 11:53:20 frghcslnetv11 systemd[1]: Started Graylog server.
Jul 30 11:53:20 frghcslnetv11 systemd[1]: Starting Graylog server...
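
The Graylog service itself is running, but the first log excerpt reports "Could not connect to http://127.0.0.1:9200" and a journal above 100%, which usually means Elasticsearch is down or unreachable and incoming messages are piling up in the journal. A quick way to verify Elasticsearch on the same host (standard commands; the unit name elasticsearch.service is an assumption):

# service name assumed; adjust if Elasticsearch runs under a different unit
systemctl status elasticsearch.service
# check that something is listening on the port Graylog tries to reach
ss -tlnp | grep 9200
curl -s http://127.0.0.1:9200

If the curl call does not return the usual JSON banner with the cluster name and version, Graylog cannot index anything and will keep throttling.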

In the Elasticsearch logs, I have this warning:

[2018-07-30T10:29:18,258][WARN ][o.e.i.e.Engine           ] [9hSWSDU] [graylog_6][3] failed to rollback writer on close
java.nio.file.NoSuchFileException: /data/elasticsearch/nodes/0/indices/S7Pj25k1Rj6bark5lKr-OQ/3/index
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
        at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427) ~[?:?]
        at java.nio.file.Files.newDirectoryStream(Files.java:457) ~[?:1.8.0_171]
        at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:215) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:234) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:429) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.IndexWriter.rollbackInternalNoCommit(IndexWriter.java:2249) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2193) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2186) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.elasticsearch.index.engine.InternalEngine.closeNoLock(InternalEngine.java:1342) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.index.engine.Engine.close(Engine.java:1252) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.index.engine.Engine.flushAndClose(Engine.java:1239) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.index.shard.IndexShard.close(IndexShard.java:920) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.index.IndexService.closeShard(IndexService.java:407) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.index.IndexService.removeShard(IndexService.java:390) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.index.IndexService.close(IndexService.java:246) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.indices.IndicesService.removeIndex(IndicesService.java:539) ~[elasticsearch-5.6.10.jar:5.6.10]
        at org.elasticsearch.indices.IndicesService.lambda$doStop$2(IndicesService.java:240) ~[elasticsearch-5.6.10.jar:5.6.10]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
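
The NoSuchFileException above suggests the shard directory Elasticsearch was trying to roll back no longer exists on disk. It can be checked directly (the path is copied from the stack trace, not independently verified):

# path copied from the stack trace above
ls -ld /data/elasticsearch/nodes/0/indices/S7Pj25k1Rj6bark5lKr-OQ/3/index

If the directory is missing, the shard data was lost or the data path changed, and the affected index may need to be recovered or recreated.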

Did you check the capacity of your disk?

I added more CPU and RAM, and now I have these errors:

[root@frghcslnetv11 graylog-server]# tail -30 server.log
2018-07-31T13:47:49.502+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x2e830dd8, /172.16.250.20:35448 => /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:43:40.321+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x3c3c31d6, /172.16.250.20:35148 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:47:49.502+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x23cfa1c7, /172.16.250.20:35597 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:48:08.175+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x2e830dd8, /172.16.250.20:35448 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:47:49.502+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0xebd88267, /172.16.250.20:39069 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:44:31.316+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0xf6ab20c3, /172.16.250.20:36621 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:50:42.483+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x3ef16c26, /172.16.250.20:38603 => /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:51:09.470+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0xae75fdd2, /172.16.250.20:41738 => /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:52:49.141+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x9bae2490, /172.16.250.20:34734 => /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:43:58.206+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x73c99663, /172.16.250.20:37480 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:49:51.012+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x45b634d5, /172.16.250.20:38234 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:52:08.194+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0xcfb1f15a, /172.16.250.20:39186 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:56:27.694+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x4367245d, /172.16.250.20:34741 => /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:54:10.592+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x12020977, /172.16.250.20:39104 => /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
2018-07-31T13:53:34.624+02:00 ERROR [NettyTransport] Error in Input [Beats/5b27800b526a9b0c14ef077d] (channel [id: 0x9bae2490, /172.16.250.20:34734 :> /172.16.250.19:5044])
java.lang.OutOfMemoryError: Java heap space
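
For the java.lang.OutOfMemoryError: Java heap space errors, the usual approach is to give the Graylog JVM more heap than the -Xms4g -Xmx4g visible in the systemctl output, provided the machine has the RAM to spare. A sketch for an RPM-based install (the file path and the 6g value are assumptions; Debian-based systems use /etc/default/graylog-server instead):

# /etc/sysconfig/graylog-server
# raise -Xms/-Xmx; 6g is an example value, keep it well below the machine's total RAM
GRAYLOG_SERVER_JAVA_OPTS="-Xms6g -Xmx6g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled"

Then restart the service:

systemctl restart graylog-server.service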

I think your virtual machine doesn't have enough disk space to run Graylog. Check your disk usage with df (e.g. df -h). If the disk is nearly full, look at the /var/log directory and delete old logs; I think it will work after that.

[root@frghcslnetv11 server]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgroot-root     22G  9.9G   12G  47% /
devtmpfs                   3.8G     0  3.8G   0% /dev
tmpfs                      3.9G     0  3.9G   0% /dev/shm
tmpfs                      3.9G   21M  3.8G   1% /run
tmpfs                      3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                 1014M  261M  754M  26% /boot
/dev/mapper/vgdata-lvdata  196G  4.7G  183G   3% /data
tmpfs                      781M     0  781M   0% /run/user/6012
tmpfs                      781M     0  781M   0% /run/user/0

And I don't know why there is data in the buffers but nothing goes out to Elasticsearch?

Did you check the health of your Elasticsearch cluster, i.e. whether it is green or not?
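
For reference, cluster health and per-index status can be checked with the standard Elasticsearch APIs:

# host/port taken from the connection error earlier in the thread
curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty'
curl -XGET 'http://127.0.0.1:9200/_cat/indices?v'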

Yes, the cluster is green.

You can check this link.

hada lah

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.