I realize this isn’t very helpful, but it looks like your Graylog API is unreachable
Either:
- the services are not up and running,
- the traffic is being blocked, or
- there is some other issue with Graylog.
Now, the traffic from the Graylog GUI to the API should mostly be from the Graylog host to the Graylog host, so blocked traffic probably isn’t it.
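If you want to test reachability yourself, a quick check from the Graylog host looks something like this (the URL uses the address and default API port 9000 from your post; the `/api/system` endpoint may require credentials depending on your Graylog version, so the username/password here are placeholders):

```shell
# Does anything answer on the API port at all?
curl -i http://172.16.98.10:9000/api/

# Ask the node status endpoint; replace the credentials with your own
curl -s -u admin:yourpassword http://172.16.98.10:9000/api/system
```

If the first command times out or is refused, the problem is connectivity or a stopped service, not Graylog itself.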
Unfortunately I have zero Docker experience (I only know how to write “Docker”), so I can’t help you find your logs, sorry.
when i visit this url: http://172.16.98.10:9000/api/sources?range=3600
I get this error: {"type":"ApiError","message":"ElasticsearchException{message=Unable to perform terms query, errorDetails=}"}
How stupid of me. The Elasticsearch Docker container wasn’t running!
I stopped my Graylog and MongoDB containers and re-ran the docker-compose file, which brought it all up again.
But now another problem came up: it seems not all my data was present when it came back up.
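For anyone else tripping over this: a quick way to confirm all the containers are actually up (run from the directory containing your docker-compose.yml; the service name “elasticsearch” below is an assumption, use whatever your compose file calls it):

```shell
# Show each compose service and whether it is Up or Exited
docker-compose ps

# If one is down, look at its last log lines to find out why
docker-compose logs --tail=50 elasticsearch

# Start any stopped services again in the background
docker-compose up -d
```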
As I understand it, the Graylog docker-compose file creates these named volumes for persistent data:
```yaml
# service-level mounts (one per service)
    volumes:
      - mongo_data:/data/db
    volumes:
      - es_data:/usr/share/elasticsearch/data
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal

# top-level named volume definitions
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
```
Now the weird part: none of those folders exist on the Docker host?
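(Side note on the missing folders: named volumes never appear next to your compose file; Docker keeps them under its own data root, typically /var/lib/docker/volumes on Linux. Also, docker-compose usually prefixes volume names with the project/directory name, so yours may be called something like graylog_es_data rather than plain es_data. You can locate them like this:)

```shell
# List all named volumes Docker knows about
docker volume ls

# Show where a specific volume lives on the host
# (replace with your actual, possibly prefixed, volume name)
docker volume inspect es_data --format '{{ .Mountpoint }}'
```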
But something is working, because my dashboard is still there, the input streams are set up, and all the devices that report syslog messages to the Graylog server are sending data, which is being logged.
All the Elasticsearch data is gone, though. I can’t see any data from before I re-ran the docker-compose file.
Am I missing something in my config? I don’t see it, but I am fairly new to both Graylog and Docker.
The dashboards and Graylog configuration are stored in MongoDB, while all the actual log data goes into Elasticsearch. If your ES install got hosed and the data files were lost, that would explain your current predicament.
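One way to confirm whether the old Elasticsearch indices survived is to ask ES directly. This assumes ES is reachable on the default port 9200 from the host; adjust the host and port to your setup:

```shell
# List all indices with document counts and sizes.
# Old graylog_* indices with non-zero doc counts mean the data is still there;
# a single fresh, nearly empty index suggests ES started with a blank data dir.
curl -s 'http://localhost:9200/_cat/indices?v'
```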
Before continuing, the most important question is: is this a testbed, or your actual production environment?
A follow-up would be: if it’s your production environment, did this system contain vital data that should not be lost? Because if so, it’s time to tread very carefully! And perhaps call in some expert help on-site.
Whatever situation you’re in, it’s very important for you to start understanding how things are hooked into each other. Which data goes where, what runs on which host, how is it all built, etc. You’ll need to go beyond “I followed this tutorial and ran docker-compose”, to “I’m running my Graylog system inside Docker, which builds environments X, Y and Z by going through these steps. My data etc. live here, and if it goes tits-up I know how to rescue it.”
Well, I guess I’m working my way from “I followed this tutorial and ran docker-compose” to “I’m running my Graylog system inside Docker, which builds environments X, Y and Z by going through these steps. My data etc. live here, and if it goes tits-up I know how to rescue it.”
But in order to get there, I was hoping to get some help from the forum?
From what I see, the docker-compose file looks right when I correlate it with what I can find from my Google searches.
I do feel a little stupid for not seeing that the Elasticsearch container wasn’t running, though. I admit that.
I know what you are trying to say, but it’s hard to say without offending people. Knowing that, I tried not to be offended… I just need someone to point me in the right direction, like @jan did.
And we’re glad to help! Especially since you are clearly putting in lots of effort yourself! That always makes me happy. You’re definitely not one of the hit-and-run folks who throw an error message on here expecting ready-made answers.