Graylog v. 3.3.11, Single Node, GL and ES on same server.
I am facing an issue where my Sidecars can “no longer connect to the Graylog server.” I put that in quotes as all of my sidecars show they are running in the WebUI.
This is what I see in the Sidecar logs:
time="2022-02-18T22:39:25-06:00" level=info msg="Stopping signal distributor"
time="2022-02-18T22:40:00-06:00" level=info msg="Starting signal distributor"
time="2022-02-18T22:40:10-06:00" level=info msg="No configurations assigned to this instance. Skipping configuration request."
time="2022-02-25T08:39:05-06:00" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://graylog.domain.com:9000/api/sidecars/0f2c744a-ab24-40b4-a5cc-a077b6d9db92: dial tcp 172.19.122.1:9000: connectex: No connection could be made because the target machine actively refused it."
time="2022-02-25T09:04:22-06:00" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put http://graylog.domain.com:9000/api/sidecars/0f2c744a-ab24-40b4-a5cc-a077b6d9db92: read tcp 172.19.0.157:62418->172.19.122.1:9000: wsarecv: An existing connection was forcibly closed by the remote host."
However, I can reach the Graylog WebUI on port 9000, and graylog-server shows as running:
ubuntu@graylog:~$ sudo systemctl status graylog-server.service
● graylog-server.service - Graylog server
Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/graylog-server.service.d
└─10-after_services.conf
Active: active (running) since Fri 2022-02-25 08:38:10 CST; 42min ago
Docs: http://docs.graylog.org/
Main PID: 519 (graylog-server)
Tasks: 190 (limit: 4915)
CGroup: /system.slice/graylog-server.service
├─519 /bin/sh /usr/share/graylog-server/bin/graylog-server
└─700 /usr/bin/java -Xms4g -Xmx4g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSwe
Feb 25 08:38:10 graylog systemd[1]: Started Graylog server.
Feb 25 08:49:37 graylog graylog-server[519]: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBi
Feb 25 08:49:37 graylog graylog-server[519]: SLF4J: Defaulting to no-operation (NOP) logger implementat
Feb 25 08:49:37 graylog graylog-server[519]: SLF4J: See SLF4J Error Codes
Just wanted to comment on the error above, which could be a direct result of Elasticsearch either crashing or running low on resources. Either way, it's preventing a connection.
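One quick way to check that (a sketch, assuming Elasticsearch is listening on the default localhost:9200 on the Graylog box) is to hit the cluster health endpoint:
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
# "status" should be green or yellow; "red" or no response at all points at Elasticsearch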
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-02-25 11:13:39 CST; 3 days ago
Docs: http://www.elastic.co
Main PID: 600 (java)
Tasks: 70 (limit: 4915)
CGroup: /system.slice/elasticsearch.service
└─600 /usr/bin/java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+U
Feb 25 11:13:39 graylog systemd[1]: Started Elasticsearch.
Feb 25 11:13:40 graylog elasticsearch[600]: warning: Falling back to java on path. This behavior is deprecated.
All, I apologize for being a newb at all this; I am trying to follow y'all and learn as much as I can.
This is what I see in my latest /var/log/graylog-server/server.log:
2022-03-01T04:12:01.835-06:00 WARN [IndexRotationThread] Deflector is pointing to [dhcp__1], not the newest one: [dhcp__2]. $
2022-03-01T04:12:01.836-06:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
org.graylog2.indexer.ElasticsearchException: Couldn't switch alias dhcp__deflector from index dhcp__1 to index dhcp__2
blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:110) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:60) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:65) ~[graylog.jar:?]
at org.graylog2.indexer.indices.Indices.cycleAlias(Indices.java:655) ~[graylog.jar:?]
at org.graylog2.indexer.MongoIndexSet.pointTo(MongoIndexSet.java:357) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:166) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_282]
at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_282]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_282]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:1$
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:$
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_282]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_282]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
2022-03-01T04:12:01.837-06:00 WARN [IndexRotationThread] Deflector is pointing to [wev_dc__8], not the newest one: [wev_dc__$
2022-03-01T04:12:01.838-06:00 ERROR [IndexRotationThread] Couldn't point deflector to a new index
org.graylog2.indexer.ElasticsearchException: Couldn't switch alias wev_dc__deflector from index wev_dc__8 to index wev_dc__9
blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
at org.graylog2.indexer.cluster.jest.JestUtils.specificException(JestUtils.java:110) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:60) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:65) ~[graylog.jar:?]
at org.graylog2.indexer.indices.Indices.cycleAlias(Indices.java:655) ~[graylog.jar:?]
at org.graylog2.indexer.MongoIndexSet.pointTo(MongoIndexSet.java:357) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.checkAndRepair(IndexRotationThread.java:166) ~[graylog.jar:?]
at org.graylog2.periodical.IndexRotationThread.lambda$doRun$0(IndexRotationThread.java:76) ~[graylog.jar:?]
at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_282]
at org.graylog2.periodical.IndexRotationThread.doRun(IndexRotationThread.java:73) [graylog.jar:?]
at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
Before we resolve your sidecar issue, your Elasticsearch issue needs to be cleared up. The indices have been set to read-only, so Graylog isn't writing to Elasticsearch anymore. This is usually caused by running low on disk space on the partition where the Elasticsearch data is located; Elasticsearch calls these thresholds disk watermarks.
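To confirm, check how full the partition holding the Elasticsearch data is (a sketch, assuming the default package path of /var/lib/elasticsearch and Elasticsearch on the default localhost:9200):
df -h /var/lib/elasticsearch
# or ask Elasticsearch directly for per-node disk usage
curl -XGET 'http://localhost:9200/_cat/allocation?v'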
Check what you have for disk space - the read-only block kicks in at Elasticsearch's flood-stage watermark, 95% by default (the high watermark at 90% already stops allocating new shards) - and add or clear disk space as needed. If you are adjusting index storage, it is preferable to use the Graylog GUI rather than Elasticsearch commands to delete indices, as Graylog will get confused otherwise. Once you clear up disk space, you can clear the read-only flag on Elasticsearch. There are a lot of posts in the forum about this happening to people, so it should be easy to search on. (I don't mind answering questions, I just don't want to re-type questions already asked/answered… )
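Once disk space is freed up, clearing the block looks roughly like this (a sketch, assuming Elasticsearch on the default localhost:9200 and that you want to clear it on all indices):
curl -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'
# setting it to null removes the block Elasticsearch applied when the flood-stage watermark was hit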
Once you get all that cleared up, then we can focus on sidecar and why it isn’t connecting.
That is one way/place to do it, or you can add disk space - I don't know how much data that represents in relation to the rest of your disk, and I also don't know what your retention requirements are. Here is a quick search of the Graylog community for Elasticsearch and high watermark. With a little hunting you can find the commands to look at Elasticsearch and clear the read-only flag, as well as get a better understanding of what is happening in there. This post has a whole series of commands that will help in general…
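For reference, the kind of command you'll find in those posts for checking the current watermark thresholds (again assuming the default localhost:9200) is:
curl -XGET 'http://localhost:9200/_cluster/settings?pretty'
# look for cluster.routing.allocation.disk.watermark settings; if nothing is set, the defaults apply
# (low 85%, high 90%, flood_stage 95% on recent Elasticsearch versions)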