Can't search anymore in Graylog: error "cannot GET http:"

Hi all,

First of all I owe a big thank you to @jan for helping me with the issue in this post:

I got it working, but because of holidays and such I didn’t get back in time before the post was locked.
So thank you Jan…

We have been running Graylog in Docker for 13 days and it looks very promising.

But then suddenly today we couldn’t search anymore.
When we go to the Sources page or do a search on the Search page, we get this error:

This being a Docker image complicates things a bit for me. I have been trying to find some log files, but I'm not sure where to look, really.

Can anyone help?

This is the docker compose file we are using:

version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.1
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - xpack.watcher.enabled=false
      - xpack.monitoring.enabled=false
      - xpack.security.audit.enabled=false
      - xpack.ml.enabled=false
      - xpack.graph.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:2.5
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
    environment:
      # CHANGE ME!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_WEB_ENDPOINT_URI=http://172.16.98.10:9000/api
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
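(For reference, and assuming the file above is saved as docker-compose.yml in the project directory, this is roughly how we bring the stack up and check it:)

# validate the file and show the resolved configuration
docker-compose config

# start all three services in the background
docker-compose up -d

# mongodb, elasticsearch and graylog should all be listed as "Up"
docker-compose ps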

I realize this isn’t very helpful, but it looks like your Graylog API is unreachable :slight_smile:

Either:

  • The services are not up and running,
  • The traffic is being blocked,
  • There are other issues with Graylog

Now, the traffic from the Graylog GUI to the API should mostly stay on the Graylog host itself, so blocked traffic is probably not it :smiley:

Unfortunately I have zero Docker experience (I only know how to write “Docker”) so I can’t help you in finding your logs, sorry :frowning:

Thanks @Totally_Not_A_Robot, I hope someone with Docker experience drops by and helps out :slight_smile:

In the meantime, there are the obvious questions:

  • What happens when YOU try to access the URL that Graylog has trouble accessing?
  • Can YOU reach that URL from your workstation?
  • Can you reach that URL with something like curl or wget from the Graylog server’s command line? (Example below.)
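For example, a rough sketch using the IP and port from your compose file (adjust as needed):

# from your workstation or the Docker host: does the API answer at all?
curl -v http://172.16.98.10:9000/api/

# and the call the UI makes for the Sources page
curl -v "http://172.16.98.10:9000/api/sources?range=3600"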

Please refer to the docker-compose documentation on how to view the logs: https://docs.docker.com/compose/reference/logs/
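Roughly like this, run from the directory that contains your docker-compose.yml (the service names are the ones from your file):

# follow the logs of all services
docker-compose logs -f

# or only the graylog and elasticsearch services
docker-compose logs -f graylog elasticsearch

# dump everything that has been logged so far
docker-compose logs --tail="all"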

Hi @Totally_Not_A_Robot

When I visit this URL: http://172.16.98.10:9000/api/sources?range=3600
I get this error: {"type":"ApiError","message":"ElasticsearchException{message=Unable to perform terms query, errorDetails=}"}

When I visit this URL: http://172.16.98.10:9000/api/system/fields
I get this error: {"message":"Couldn't read cluster state for indices graylog_*","details":}

@jan

Thanks

When doing docker-compose logs --tail="all" I get these errors over and over again, and I'm not sure how to fix them…

can you help?

graylog_1        | 2019-01-26 04:17:02,115 INFO : org.graylog2.periodical.IndexRetentionThread - Elasticsearch cluster not available, skipping index retention checks.
graylog_1        | 2019-01-26 04:17:02,116 ERROR: org.graylog2.indexer.cluster.Cluster - Couldn't read cluster health for indices [graylog_*] (elasticsearch)
graylog_1        | 2019-01-26 04:17:02,116 INFO : org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
graylog_1        | 2019-01-26 04:17:02,740 WARN : org.graylog2.migrations.V20161130141500_DefaultStreamRecalcIndexRanges - Interrupted or timed out waiting for Elasticsearch cluster, checking again.
graylog_1        | 2019-01-26 04:17:05,319 ERROR: org.graylog2.indexer.messages.Messages - Caught exception during bulk indexing: java.net.UnknownHostException: elasticsearch, retrying (attempt #2731).

Edit:

These are my running docker containers:
18c7cadb4704        graylog/graylog:2.5   "/docker-entrypoint.…"   2 weeks ago         Up 23 hours (healthy)   0.0.0.0:514->514/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:514->514/udp, 0.0.0.0:12201->12201/tcp, 0.0.0.0:12201->12201/udp   graylog_graylog_1
eeb922f3f0da        mongo:3               "docker-entrypoint.s…"   2 weeks ago         Up 23 hours             27017/tcp                                                                                                                graylog_mongodb_1

Well, as both the errors and the logs clearly show: ElasticSearch is down or unreachable. You will have to fix that. Where are you running Elastic?
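A quick way to check and to bring just that service back, as a sketch (run next to your docker-compose.yml; the last command assumes curl is available inside the Graylog image, since port 9200 is not published to the host):

# elasticsearch should be listed here and show as "Up"
docker-compose ps

# start only the missing service, leaving the others alone
docker-compose up -d elasticsearch

# optional: ask Elasticsearch for its cluster health from inside the graylog container
docker-compose exec graylog curl -s "http://elasticsearch:9200/_cluster/health?pretty"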

How stupid of me. The Elasticsearch container wasn’t running!

I stopped my Graylog and MongoDB containers and re-ran the docker-compose file, which brought everything up again.

But now another problem came up, because it seems not all my data was present when it came back up.
As I understand it, the Graylog docker-compose file creates these named volumes for persistent data:

volumes:
      - mongo_data:/data/db

volumes:
      - es_data:/usr/share/elasticsearch/data

volumes:
      - graylog_journal:/usr/share/graylog/data/journal

volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local

Now the weird part is that none of those folders exist on the Docker host?
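(Digging a bit: named volumes apparently don't show up as ordinary folders next to the compose file; Docker keeps them in its own data directory, typically /var/lib/docker/volumes on Linux. They can be listed and inspected like this, assuming the "graylog_" project prefix visible in the container names above:)

# list the named volumes that docker-compose created
docker volume ls

# show where the Elasticsearch data volume actually lives on the host
docker volume inspect graylog_es_data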

But something is working, because my dashboard is still there. The inputs are set up, and all the devices that report syslog messages to the Graylog server are sending data, which is being logged.

All Elasticsearch data is gone, though. I can’t see any data from before I re-ran the docker-compose file.

Am I missing something in my config? I don’t see it, but I am fairly new to both Graylog and Docker.


The dashboards and Graylog configuration are stored in MongoDB, while all the actual logging data goes into ElasticSearch. If your ES install got hosed and the datafiles were lost, that would explain your current predicament.

  • Before continuing the most important question is: is this a testbed, or your actual production environment?
  • A follow-up would be: if it’s your production environment, did this system contain vital data that should not be lost? Because if so, it’s time to tread very carefully! And perhaps call in some expert help on-site.
  • Whatever situation you’re in, it’s very important for you to start understanding how things are hooked into each other. Which data goes where, what runs on which host, how is it all built, etc. You’ll need to go beyond “I followed this tutorial and ran docker compose” to “I’m running my Graylog system inside Docker, which builds environments X, Y and Z by going through these steps. My data etc. live here, and if it goes tits-up I know how to rescue it.”
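To make that last point concrete, one rough way to rescue the data sitting in those named volumes is to tar them up from a throwaway container (a sketch, assuming the "graylog_" volume prefix seen in your container names; ideally run with the stack stopped so the files are consistent):

# archive the Elasticsearch data volume into the current directory
docker run --rm -v graylog_es_data:/source:ro -v "$(pwd)":/backup alpine \
  tar czf /backup/es_data_backup.tar.gz -C /source .

# same idea for the MongoDB volume (dashboards, streams, configuration)
docker run --rm -v graylog_mongo_data:/source:ro -v "$(pwd)":/backup alpine \
  tar czf /backup/mongo_data_backup.tar.gz -C /source .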

Hi,

Thanks for your response :slight_smile:

It’s a test setup, NOT production.

Well, I guess I'm working my way from “I followed this tutorial and ran docker compose” to “I’m running my Graylog system inside Docker, which builds environments X, Y and Z by going through these steps. My data etc. live here and if it goes tits-up I know how to rescue it.”

But in order to get there I was hoping to get some help from the forum.

From what I see in the docker-compose file, it looks right when I correlate it with what I can find from my Google searches.

I do feel a little stupid, though, for not seeing that the Elasticsearch container wasn’t running. I admit that.


I’m sorry if I offended you, I did not mean to talk down to you. You’re doing great :slight_smile:

And this is always good :slight_smile: Gives you a time and place to safely muck around with things :slight_smile:

I know what you are trying to say, but it’s hard to say without offending people. Knowing that, I tried not to be offended… I just need someone to point me in the right direction like @jan did.

And we’re glad to help! Especially since you are clearly putting in lots of effort yourself! That always makes me happy. You’re definitely not one of the hit-and-run folks who throw an error message on here, expecting to get ready-made answers for them. :slight_smile: :+1:

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.