No messages in Graylog UI that should be received from Flask app

Hi guys,
I’ve been struggling with this problem for a while and couldn’t find a solution on the community forum. I have a Python Flask application and I want to send its logs to Graylog. The problem is that no messages show up in the Graylog UI. This is my app code:

from flask import Flask, request, render_template, url_for, redirect, send_from_directory
import logging
import graypy

app = Flask(__name__)

# ... application logic that emits logs, e.g. logger.info("Successful upload") ...

if __name__ == '__main__':
  logger = logging.getLogger(__name__)
  logger.setLevel(logging.DEBUG)
  handler = graypy.GELFUDPHandler('127.0.0.1', 12201)
  logger.addHandler(handler)

  app.run(host='0.0.0.0', port=8080)
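
For reference, here is a minimal self-contained sketch (not from the original post) of the same graypy handler attached at module level, so the logger is configured before any view function uses it, regardless of how the app is started. The /ping route and the address are placeholders mirroring the setup above:

from flask import Flask
import logging
import graypy

app = Flask(__name__)

# Configure the logger at import time so it is ready no matter how the app
# is launched (python app.py, flask run, gunicorn, ...).
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(graypy.GELFUDPHandler('127.0.0.1', 12201))  # same address/port as above

@app.route('/ping')
def ping():
    logger.info("ping received")  # example log call, like the upload log above
    return "pong"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)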

My docker-compose.yml:

version: '2'
services:
    web:
        build: .
        environment:
          - FLASK_APP=app.py
        ports:
            - "8080:8080"
        volumes:
            - .:/code
            - ./templates:/tmp/templates
    mongodb:
        image: mongo:3
        volumes:
          - mongo_data:/data/db
    # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/docker.html
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.2
        volumes:
          - es_data:/usr/share/elasticsearch/data
        environment:
          - http.host=0.0.0.0
          - transport.host=localhost
          - network.host=0.0.0.0
          - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
        ulimits:
          memlock:
            soft: -1
            hard: -1
        mem_limit: 1g
    # Graylog: https://hub.docker.com/r/graylog/graylog/
    graylog:
        image: graylog/graylog:3.1
        volumes:
          - graylog_journal:/usr/share/graylog/data/journal
        environment:
          # CHANGE ME (must be at least 16 characters)!
          - GRAYLOG_PASSWORD_SECRET=mysecretpassword
          # Password: admin
          - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
          - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
        links:
          - mongodb:mongo
          - elasticsearch
        depends_on:
          - mongodb
          - elasticsearch
        ports:
          # Graylog web interface and REST API
          - 9000:9000
          # Syslog TCP
          - 1514:1514
          # Syslog UDP
          - 1514:1514/udp
          # GELF TCP
          - 12201:12201
          # GELF UDP
          - 12201:12201/udp
    # Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
    mongo_data:
        driver: local
    es_data:
        driver: local
    graylog_journal:
        driver: local

In the Graylog UI I launched a GELF UDP input listening on port 12201 (screenshot not included).

I set everything up based on the tutorials, and I believe the problem is something really small that I just can’t spot. I would appreciate any help.


Here is what shows up when I start the stack with Docker Compose:

Attaching to flaskapp_elasticsearch_1, flaskapp_mongodb_1, flaskapp_web_1, flaskapp_graylog_1
mongodb_1 | 2019-12-17T10:36:48.379+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=7ea8629859a3
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] db version v3.6.16
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] git version: 204ab367a130a4fd2db1c54b02cd6a86e4e07f56
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] allocator: tcmalloc
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] modules: none
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] build environment:
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] distmod: ubuntu1604
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] distarch: x86_64
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] target_arch: x86_64
mongodb_1 | 2019-12-17T10:36:48.391+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }
mongodb_1 | 2019-12-17T10:36:48.392+0000 I - [initandlisten] Detected data files in /data/db created by the ‘wiredTiger’ storage engine, so setting the active storage engine to ‘wiredTiger’.
mongodb_1 | 2019-12-17T10:36:48.392+0000 I STORAGE [initandlisten]
mongodb_1 | 2019-12-17T10:36:48.392+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1 | 2019-12-17T10:36:48.392+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1 | 2019-12-17T10:36:48.392+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=487M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release=“3.0”,require_max=“3.0”),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
web_1 | * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
mongodb_1 | 2019-12-17T10:36:49.897+0000 I STORAGE [initandlisten] WiredTiger message [1576579009:897782][1:0x7f294c60fa40], txn-recover: Main recovery loop: starting at 34/1909504
mongodb_1 | 2019-12-17T10:36:50.033+0000 I STORAGE [initandlisten] WiredTiger message [1576579010:33926][1:0x7f294c60fa40], txn-recover: Recovering log 34 through 35
mongodb_1 | 2019-12-17T10:36:50.151+0000 I STORAGE [initandlisten] WiredTiger message [1576579010:151247][1:0x7f294c60fa40], txn-recover: Recovering log 35 through 35
mongodb_1 | 2019-12-17T10:36:50.232+0000 I STORAGE [initandlisten] WiredTiger message [1576579010:232953][1:0x7f294c60fa40], txn-recover: Set global recovery timestamp: 0
elasticsearch_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
mongodb_1 | 2019-12-17T10:36:50.291+0000 I CONTROL [initandlisten]
mongodb_1 | 2019-12-17T10:36:50.291+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
mongodb_1 | 2019-12-17T10:36:50.291+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
mongodb_1 | 2019-12-17T10:36:50.291+0000 I CONTROL [initandlisten]
elasticsearch_1 | OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=1
mongodb_1 | 2019-12-17T10:36:50.318+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory ‘/data/db/diagnostic.data’
mongodb_1 | 2019-12-17T10:36:50.320+0000 I NETWORK [initandlisten] listening via socket bound to 0.0.0.0
mongodb_1 | 2019-12-17T10:36:50.320+0000 I NETWORK [initandlisten] listening via socket bound to /tmp/mongodb-27017.sock
mongodb_1 | 2019-12-17T10:36:50.320+0000 I NETWORK [initandlisten] waiting for connections on port 27017
graylog_1 | 2019-12-17 10:36:57,964 INFO : org.graylog2.bootstrap.CmdLineTool - Loaded plugin: AWS plugins 3.1.3 [org.graylog.aws.AWSPlugin]
graylog_1 | 2019-12-17 10:36:57,979 INFO : org.graylog2.bootstrap.CmdLineTool - Loaded plugin: Collector 3.1.3 [org.graylog.plugins.collector.CollectorPlugin]
graylog_1 | 2019-12-17 10:36:57,984 INFO : org.graylog2.bootstrap.CmdLineTool - Loaded plugin: Threat Intelligence Plugin 3.1.3 [org.graylog.plugins.threatintel.ThreatIntelPlugin]
graylog_1 | 2019-12-17 10:36:58,951 INFO : org.graylog2.bootstrap.CmdLineTool - Running with JVM arguments: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:NewRatio=1 -XX:MaxMetaspaceSize=256m -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configurationFile=/usr/share/graylog/data/config/log4j2.xml -Djava.library.path=/usr/share/graylog/lib/sigar/ -Dgraylog2.installation_source=docker
graylog_1 | 2019-12-17 10:36:59,651 INFO : org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.1.3.Final
elasticsearch_1 | [2019-12-17T10:37:00,881][INFO ][o.e.e.NodeEnvironment ] [SKm2xIK] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [47.8gb], net total_space [58.4gb], types [ext4]
elasticsearch_1 | [2019-12-17T10:37:00,896][INFO ][o.e.e.NodeEnvironment ] [SKm2xIK] heap size [1007.3mb], compressed ordinary object pointers [true]
elasticsearch_1 | [2019-12-17T10:37:01,241][INFO ][o.e.n.Node ] [SKm2xIK] node name derived from node ID [SKm2xIKcT4Wnisjj1mt9ZQ]; set [node.name] to override
elasticsearch_1 | [2019-12-17T10:37:01,243][INFO ][o.e.n.Node ] [SKm2xIK] version[6.8.2], pid[1], build[oss/docker/b506955/2019-07-24T15:24:41.545295Z], OS[Linux/4.9.184-linuxkit/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/12.0.1/12.0.1+12]
elasticsearch_1 | [2019-12-17T10:37:01,249][INFO ][o.e.n.Node ] [SKm2xIK] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-2745913817504859475, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xms1g, -Xmx1g, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=docker]
graylog_1 | 2019-12-17 10:37:06,548 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.
graylog_1 | 2019-12-17 10:37:06,595 INFO : org.graylog2.plugin.system.NodeId - Node ID: 885bd4c7-3f1a-49f8-8e9f-a2ae1675f5c1
graylog_1 | 2019-12-17 10:37:07,134 INFO : kafka.log.LogManager - Loading logs.
graylog_1 | 2019-12-17 10:37:07,192 WARN : kafka.log.Log - Found a corrupted index file, /usr/share/graylog/data/journal/messagejournal-0/00000000000000000000.index, deleting and rebuilding index…
elasticsearch_1 | [2019-12-17T10:37:07,249][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [aggs-matrix-stats]
elasticsearch_1 | [2019-12-17T10:37:07,249][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [analysis-common]
elasticsearch_1 | [2019-12-17T10:37:07,251][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [ingest-common]
elasticsearch_1 | [2019-12-17T10:37:07,252][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [ingest-geoip]
elasticsearch_1 | [2019-12-17T10:37:07,253][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [ingest-user-agent]
elasticsearch_1 | [2019-12-17T10:37:07,253][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [lang-expression]
elasticsearch_1 | [2019-12-17T10:37:07,270][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [lang-mustache]
elasticsearch_1 | [2019-12-17T10:37:07,270][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [lang-painless]
elasticsearch_1 | [2019-12-17T10:37:07,271][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [mapper-extras]
elasticsearch_1 | [2019-12-17T10:37:07,272][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [parent-join]
elasticsearch_1 | [2019-12-17T10:37:07,273][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [percolator]
elasticsearch_1 | [2019-12-17T10:37:07,274][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [rank-eval]
elasticsearch_1 | [2019-12-17T10:37:07,274][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [reindex]
elasticsearch_1 | [2019-12-17T10:37:07,275][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [repository-url]
elasticsearch_1 | [2019-12-17T10:37:07,276][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [transport-netty4]
elasticsearch_1 | [2019-12-17T10:37:07,277][INFO ][o.e.p.PluginsService ] [SKm2xIK] loaded module [tribe]
elasticsearch_1 | [2019-12-17T10:37:07,279][INFO ][o.e.p.PluginsService ] [SKm2xIK] no plugins loaded
graylog_1 | 2019-12-17 10:37:07,305 INFO : kafka.log.LogManager - Logs loading complete.
graylog_1 | 2019-12-17 10:37:07,310 INFO : org.graylog2.shared.journal.KafkaJournal - Initialized Kafka based journal at /usr/share/graylog/data/journal
graylog_1 | 2019-12-17 10:37:07,384 INFO : org.mongodb.driver.cluster - Cluster created with settings {hosts=[mongo:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout=‘30000 ms’, maxWaitQueueSize=500}
mongodb_1 | 2019-12-17T10:37:07.542+0000 I NETWORK [listener] connection accepted from 172.18.0.5:60076 #1 (1 connection now open)
graylog_1 | 2019-12-17 10:37:07,562 INFO : org.mongodb.driver.cluster - Cluster description not yet available. Waiting for 30000 ms before timing out
mongodb_1 | 2019-12-17T10:37:07.568+0000 I NETWORK [conn1] received client metadata from 172.18.0.5:60076 conn1: { driver: { name: “mongo-java-driver”, version: “unknown” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “4.9.184-linuxkit” }, platform: “Java/Oracle Corporation/1.8.0_232-b09” }
graylog_1 | 2019-12-17 10:37:07,644 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:1}] to mongo:27017
graylog_1 | 2019-12-17 10:37:07,659 INFO : org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 6, 16]}, minWireVersion=0, maxWireVersion=6, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=11708292}
mongodb_1 | 2019-12-17T10:37:07.718+0000 I NETWORK [listener] connection accepted from 172.18.0.5:60078 #2 (2 connections now open)
mongodb_1 | 2019-12-17T10:37:07.722+0000 I NETWORK [conn2] received client metadata from 172.18.0.5:60078 conn2: { driver: { name: “mongo-java-driver”, version: “unknown” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “4.9.184-linuxkit” }, platform: “Java/Oracle Corporation/1.8.0_232-b09” }
graylog_1 | 2019-12-17 10:37:07,743 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:2}] to mongo:27017
graylog_1 | 2019-12-17 10:37:08,524 INFO : org.graylog2.shared.buffers.InputBufferImpl - Initialized InputBufferImpl with ring size <65536> and wait strategy , running 2 parallel message handlers.
graylog_1 | 2019-12-17 10:37:09,263 INFO : io.searchbox.client.AbstractJestClient - Setting server pool to a list of 1 servers: [http://elasticsearch:9200]
graylog_1 | 2019-12-17 10:37:09,265 INFO : io.searchbox.client.JestClientFactory - Using multi thread/connection supporting pooling connection manager
graylog_1 | 2019-12-17 10:37:09,499 INFO : io.searchbox.client.JestClientFactory - Using custom ObjectMapper instance
graylog_1 | 2019-12-17 10:37:09,500 INFO : io.searchbox.client.JestClientFactory - Node Discovery disabled…
graylog_1 | 2019-12-17 10:37:09,500 INFO : io.searchbox.client.JestClientFactory - Idle connection reaping disabled…
graylog_1 | 2019-12-17 10:37:09,936 INFO : org.graylog2.shared.buffers.ProcessBuffer - Initialized ProcessBuffer with ring size <65536> and wait strategy .
graylog_1 | 2019-12-17 10:37:10,868 WARN : org.graylog.plugins.map.geoip.GeoIpResolverEngine - GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
graylog_1 | 2019-12-17 10:37:10,947 INFO : org.graylog2.buffers.OutputBuffer - Initialized OutputBuffer with ring size <65536> and wait strategy .
mongodb_1 | 2019-12-17T10:37:11.132+0000 I NETWORK [listener] connection accepted from 172.18.0.5:60090 #3 (3 connections now open)
mongodb_1 | 2019-12-17T10:37:11.133+0000 I NETWORK [conn3] received client metadata from 172.18.0.5:60090 conn3: { driver: { name: “mongo-java-driver”, version: “unknown” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “4.9.184-linuxkit” }, platform: “Java/Oracle Corporation/1.8.0_232-b09” }
graylog_1 | 2019-12-17 10:37:11,135 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:3, serverValue:3}] to mongo:27017
graylog_1 | 2019-12-17 10:37:11,188 WARN : org.graylog.plugins.map.geoip.GeoIpResolverEngine - GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
graylog_1 | 2019-12-17 10:37:11,317 WARN : org.graylog.plugins.map.geoip.GeoIpResolverEngine - GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
graylog_1 | 2019-12-17 10:37:11,422 WARN : org.graylog.plugins.map.geoip.GeoIpResolverEngine - GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
graylog_1 | 2019-12-17 10:37:11,568 WARN : org.graylog.plugins.map.geoip.GeoIpResolverEngine - GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
graylog_1 | 2019-12-17 10:37:13,869 INFO : org.graylog2.bootstrap.ServerBootstrap - Graylog server 3.1.3+cda805f starting up
graylog_1 | 2019-12-17 10:37:13,870 INFO : org.graylog2.bootstrap.ServerBootstrap - JRE: Oracle Corporation 1.8.0_232 on Linux 4.9.184-linuxkit
graylog_1 | 2019-12-17 10:37:13,871 INFO : org.graylog2.bootstrap.ServerBootstrap - Deployment: docker
graylog_1 | 2019-12-17 10:37:13,872 INFO : org.graylog2.bootstrap.ServerBootstrap - OS: Debian GNU/Linux 10 (buster) (debian)
graylog_1 | 2019-12-17 10:37:13,872 INFO : org.graylog2.bootstrap.ServerBootstrap - Arch: amd64
graylog_1 | 2019-12-17 10:37:13,938 INFO : org.graylog2.shared.initializers.PeriodicalsService - Starting 29 periodicals …
graylog_1 | 2019-12-17 10:37:13,939 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ThroughputCalculator] periodical in [0s], polling every [1s].
graylog_1 | 2019-12-17 10:37:13,997 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
graylog_1 | 2019-12-17 10:37:14,035 INFO : org.graylog2.shared.initializers.PeriodicalsService - Not starting [org.graylog2.periodical.AlertScannerThread] periodical. Not configured to run on this node.
graylog_1 | 2019-12-17 10:37:14,036 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling every [1s].
graylog_1 | 2019-12-17 10:37:14,045 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [120s], polling every [20s].
graylog_1 | 2019-12-17 10:37:14,060 INFO : org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration - Legacy default stream has no connections, no migration needed.
graylog_1 | 2019-12-17 10:37:14,070 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
graylog_1 | 2019-12-17 10:37:14,126 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
graylog_1 | 2019-12-17 10:37:14,128 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexRetentionThread] periodical in [0s], polling every [300s].
mongodb_1 | 2019-12-17T10:37:14.147+0000 I NETWORK [listener] connection accepted from 172.18.0.5:60094 #4 (4 connections now open)
mongodb_1 | 2019-12-17T10:37:14.148+0000 I NETWORK [listener] connection accepted from 172.18.0.5:60096 #5 (5 connections now open)
mongodb_1 | 2019-12-17T10:37:14.148+0000 I NETWORK [conn5] received client metadata from 172.18.0.5:60096 conn5: { driver: { name: “mongo-java-driver”, version: “unknown” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “4.9.184-linuxkit” }, platform: “Java/Oracle Corporation/1.8.0_232-b09” }
mongodb_1 | 2019-12-17T10:37:14.149+0000 I NETWORK [conn4] received client metadata from 172.18.0.5:60094 conn4: { driver: { name: “mongo-java-driver”, version: “unknown” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “4.9.184-linuxkit” }, platform: “Java/Oracle Corporation/1.8.0_232-b09” }
graylog_1 | 2019-12-17 10:37:14,154 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexRotationThread] periodical in [0s], polling every [10s].
graylog_1 | 2019-12-17 10:37:14,159 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:4, serverValue:4}] to mongo:27017
graylog_1 | 2019-12-17 10:37:14,162 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:5}] to mongo:27017
graylog_1 | 2019-12-17 10:37:14,169 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.NodePingThread] periodical in [0s], polling every [1s].
graylog_1 | 2019-12-17 10:37:14,180 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.VersionCheckThread] periodical in [300s], polling every [1800s].
graylog_1 | 2019-12-17 10:37:14,249 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ThrottleStateUpdaterThread] periodical in [1s], polling every [1s].
graylog_1 | 2019-12-17 10:37:14,250 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
graylog_1 | 2019-12-17 10:37:14,257 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.events.ClusterEventCleanupPeriodical] periodical in [0s], polling every [86400s].
graylog_1 | 2019-12-17 10:37:14,269 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ClusterIdGeneratorPeriodical] periodical, running forever.
graylog_1 | 2019-12-17 10:37:14,272 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexRangesMigrationPeriodical] periodical, running forever.
graylog_1 | 2019-12-17 10:37:14,278 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], polling every [3600s].
graylog_1 | 2019-12-17 10:37:14,355 INFO : org.graylog2.shared.initializers.PeriodicalsService - Not starting [org.graylog2.periodical.UserPermissionMigrationPeriodical] periodical. Not configured to run on this node.
graylog_1 | 2019-12-17 10:37:14,356 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ConfigurationManagementPeriodical] periodical, running forever.
graylog_1 | 2019-12-17 10:37:14,378 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.LdapGroupMappingMigration] periodical, running forever.
graylog_1 | 2019-12-17 10:37:14,395 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexFailuresPeriodical] periodical, running forever.
graylog_1 | 2019-12-17 10:37:14,435 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.TrafficCounterCalculator] periodical in [0s], polling every [1s].
graylog_1 | 2019-12-17 10:37:14,437 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.indexer.fieldtypes.IndexFieldTypePollerPeriodical] periodical in [0s], polling every [3600s].
graylog_1 | 2019-12-17 10:37:14,476 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.scheduler.periodicals.ScheduleTriggerCleanUp] periodical in [120s], polling every [86400s].
graylog_1 | 2019-12-17 10:37:14,492 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.sidecar.periodical.PurgeExpiredSidecarsThread] periodical in [0s], polling every [600s].
graylog_1 | 2019-12-17 10:37:14,525 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.sidecar.periodical.PurgeExpiredConfigurationUploads] periodical in [0s], polling every [600s].
graylog_1 | 2019-12-17 10:37:14,541 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.views.search.db.SearchesCleanUpJob] periodical in [0s], polling every [28800s].
graylog_1 | 2019-12-17 10:37:14,546 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.events.periodicals.EventNotificationStatusCleanUp] periodical in [120s], polling every [86400s].
graylog_1 | 2019-12-17 10:37:14,560 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] periodical in [0s], polling every [3600s].
graylog_1 | 2019-12-17 10:37:14,886 INFO : org.graylog2.indexer.fieldtypes.IndexFieldTypePollerPeriodical - Cluster not connected yet, delaying index field type initialization until it is reachable.
graylog_1 | 2019-12-17 10:37:14,888 ERROR: org.graylog2.indexer.cluster.Cluster - Couldn't read cluster health for indices [graylog_*, gl-events_*, gl-system-events_*] (Could not connect to http://elasticsearch:9200)
graylog_1 | 2019-12-17 10:37:14,889 INFO : org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
graylog_1 | 2019-12-17 10:37:14,920 INFO : org.graylog2.periodical.IndexRetentionThread - Elasticsearch cluster not available, skipping index retention checks.
graylog_1 | 2019-12-17 10:37:15,118 INFO : org.graylog2.migrations.V20161130141500_DefaultStreamRecalcIndexRanges - Cluster not connected yet, delaying migration until it is reachable.
graylog_1 | 2019-12-17 10:37:15,706 INFO : org.graylog2.shared.initializers.JerseyService - Enabling CORS for HTTP endpoint
elasticsearch_1 | [2019-12-17T10:37:21,721][INFO ][o.e.d.DiscoveryModule ] [SKm2xIK] using discovery type [zen] and host providers [settings]
elasticsearch_1 | [2019-12-17T10:37:24,430][INFO ][o.e.n.Node ] [SKm2xIK] initialized
elasticsearch_1 | [2019-12-17T10:37:24,431][INFO ][o.e.n.Node ] [SKm2xIK] starting …
elasticsearch_1 | [2019-12-17T10:37:25,456][INFO ][o.e.t.TransportService ] [SKm2xIK] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
elasticsearch_1 | [2019-12-17T10:37:28,813][INFO ][o.e.c.s.MasterService ] [SKm2xIK] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {SKm2xIK}{SKm2xIKcT4Wnisjj1mt9ZQ}{Xxg_WzeISpWInTz3jemnvw}{localhost}{127.0.0.1:9300}
elasticsearch_1 | [2019-12-17T10:37:28,824][INFO ][o.e.c.s.ClusterApplierService] [SKm2xIK] new_master {SKm2xIK}{SKm2xIKcT4Wnisjj1mt9ZQ}{Xxg_WzeISpWInTz3jemnvw}{localhost}{127.0.0.1:9300}, reason: apply cluster state (from master [master {SKm2xIK}{SKm2xIKcT4Wnisjj1mt9ZQ}{Xxg_WzeISpWInTz3jemnvw}{localhost}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
elasticsearch_1 | [2019-12-17T10:37:28,930][INFO ][o.e.h.n.Netty4HttpServerTransport] [SKm2xIK] publish_address {172.18.0.3:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_1 | [2019-12-17T10:37:28,931][INFO ][o.e.n.Node ] [SKm2xIK] started
elasticsearch_1 | [2019-12-17T10:37:29,284][WARN ][o.e.d.c.j.Joda ] [SKm2xIK] ‘y’ year should be replaced with ‘u’. Use ‘y’ for year-of-era. Prefix your date format with ‘8’ to use the new specifier.
graylog_1 | 2019-12-17 10:37:29,932 INFO : org.graylog2.periodical.IndexRangesCleanupPeriodical - Skipping index range cleanup because the Elasticsearch cluster is unreachable or unhealthy
elasticsearch_1 | [2019-12-17T10:37:30,969][INFO ][o.e.m.j.JvmGcMonitorService] [SKm2xIK] [gc][young][6][6] duration [879ms], collections [1]/[1.4s], total [879ms]/[3.1s], memory [178.5mb]->[85.1mb]/[1007.3mb], all_pools {[young] [126.3mb]->[295.4kb]/[133.1mb]}{[survivor] [14.7mb]->[7mb]/[16.6mb]}{[old] [37.3mb]->[78.1mb]/[857.6mb]}
elasticsearch_1 | [2019-12-17T10:37:30,979][WARN ][o.e.m.j.JvmGcMonitorService] [SKm2xIK] [gc][6] overhead, spent [879ms] collecting in the last [1.4s]
elasticsearch_1 | [2019-12-17T10:37:31,453][INFO ][o.e.g.GatewayService ] [SKm2xIK] recovered [3] indices into cluster_state
elasticsearch_1 | [2019-12-17T10:37:34,508][INFO ][o.e.c.r.a.AllocationService] [SKm2xIK] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[graylog_0][1]] …]).
elasticsearch_1 | [2019-12-17T10:37:35,034][WARN ][o.e.d.r.a.a.i.RestGetMappingAction] [SKm2xIK] [types removal] The parameter include_type_name should be explicitly specified in get mapping requests to prepare for 7.0. In 7.0 include_type_name will default to ‘false’, which means responses will omit the type name in mapping definitions.
graylog_1 | 2019-12-17 10:37:52,524 INFO : org.glassfish.grizzly.http.server.NetworkListener - Started listener bound to [0.0.0.0:9000]
graylog_1 | 2019-12-17 10:37:52,526 INFO : org.glassfish.grizzly.http.server.HttpServer - [HttpServer] Started.
graylog_1 | 2019-12-17 10:37:52,526 INFO : org.graylog2.shared.initializers.JerseyService - Started REST API at <0.0.0.0:9000>
graylog_1 | 2019-12-17 10:37:52,528 INFO : org.graylog2.shared.initializers.ServiceManagerListener - Services are healthy
graylog_1 | 2019-12-17 10:37:52,529 INFO : org.graylog2.shared.initializers.InputSetupService - Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
graylog_1 | 2019-12-17 10:37:52,532 INFO : org.graylog2.bootstrap.ServerBootstrap - Services started, startup times in ms: {GracefulShutdownService [RUNNING]=56, OutputSetupService [RUNNING]=102, BufferSynchronizerService [RUNNING]=103, KafkaJournal [RUNNING]=106, JobSchedulerService [RUNNING]=135, EtagService [RUNNING]=184, ConfigurationEtagService [RUNNING]=187, JournalReader [RUNNING]=196, InputSetupService [RUNNING]=210, LookupTableService [RUNNING]=404, MongoDBProcessingStatusRecorderService [RUNNING]=414, StreamCacheService [RUNNING]=562, PeriodicalsService [RUNNING]=648, JerseyService [RUNNING]=38620}
graylog_1 | 2019-12-17 10:37:52,557 INFO : org.graylog2.bootstrap.ServerBootstrap - Graylog server up and running.
graylog_1 | 2019-12-17 10:37:52,570 INFO : org.graylog2.inputs.InputStateListener - Input [GELF UDP/5df6b0d4adbe1d0012467818] is now STARTING
graylog_1 | 2019-12-17 10:37:52,692 INFO : org.graylog2.inputs.InputStateListener - Input [GELF UDP/5df6b0d4adbe1d0012467818] is now RUNNING
graylog_1 | 2019-12-17 10:37:52,708 WARN : org.graylog2.inputs.transports.UdpTransport - receiveBufferSize (SO_RCVBUF) for input GELFUDPInput{title=FlaskApp, type=org.graylog2.inputs.gelf.udp.GELFUDPInput, nodeId=885bd4c7-3f1a-49f8-8e9f-a2ae1675f5c1} (channel [id: 0x6cc5289e, L:/127.0.0.1:12201]) should be 1048576 but is 425984.
graylog_1 | 2019-12-17 10:37:52,709 WARN : org.graylog2.inputs.transports.UdpTransport - receiveBufferSize (SO_RCVBUF) for input GELFUDPInput{title=FlaskApp, type=org.graylog2.inputs.gelf.udp.GELFUDPInput, nodeId=885bd4c7-3f1a-49f8-8e9f-a2ae1675f5c1} (channel [id: 0xbb30fe22, L:/127.0.0.1:12201]) should be 1048576 but is 425984.
mongodb_1 | 2019-12-17T10:37:53.723+0000 I NETWORK [listener] connection accepted from 172.18.0.5:60428 #6 (6 connections now open)
mongodb_1 | 2019-12-17T10:37:53.726+0000 I NETWORK [conn6] received client metadata from 172.18.0.5:60428 conn6: { driver: { name: “mongo-java-driver”, version: “unknown” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “4.9.184-linuxkit” }, platform: “Java/Oracle Corporation/1.8.0_232-b09” }
graylog_1 | 2019-12-17 10:37:53,730 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:6, serverValue:6}] to mongo:27017
mongodb_1 | 2019-12-17T10:37:53.844+0000 I NETWORK [listener] connection accepted from 172.18.0.5:60430 #7 (7 connections now open)
mongodb_1 | 2019-12-17T10:37:53.847+0000 I NETWORK [conn7] received client metadata from 172.18.0.5:60430 conn7: { driver: { name: “mongo-java-driver”, version: “unknown” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “4.9.184-linuxkit” }, platform: “Java/Oracle Corporation/1.8.0_232-b09” }
graylog_1 | 2019-12-17 10:37:53,864 INFO : org.mongodb.driver.connection - Opened connection [connectionId{localValue:7, serverValue:7}] to mongo:27017
elasticsearch_1 | [2019-12-17T10:37:55,001][INFO ][o.e.m.j.JvmGcMonitorService] [SKm2xIK] [gc][30] overhead, spent [437ms] collecting in the last [1s]
web_1 | 172.18.0.1 - - [17/Dec/2019 10:38:31] “GET / HTTP/1.1” 200 -


You should first start your Graylog stack and check that every component is up and healthy.

What I have seen:

2019-12-17 10:37:14,888 ERROR: org.graylog2.indexer.cluster.Cluster - Couldn't read cluster health for indices [graylog_*, gl-events_*, gl-system-events_*] (Could not connect to http://elasticsearch:9200)

It might be that Elasticsearch is not up yet when Graylog starts. But if Graylog is up and running, send a message from the command line and check whether it is received. After that, check whether your application can send messages. You need to check every component of your stack carefully and verify each one in turn; a test sketch follows below.
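
As a concrete example, here is a minimal Python sketch (not from the original thread) that sends a single uncompressed GELF message over UDP. It assumes the GELF UDP input is reachable at 127.0.0.1:12201 from wherever you run it; adjust the address to your setup. If this test message shows up in the Graylog UI, the input works and the problem is on the application side; if it does not, the input or the network path between sender and input is the problem.

import json
import socket

GELF_HOST = '127.0.0.1'  # assumption: address where the GELF UDP input is reachable from this machine
GELF_PORT = 12201

# GELF 1.1 requires at least version, host and short_message.
payload = {
    'version': '1.1',
    'host': 'gelf-test',
    'short_message': 'Manual GELF test message',
    'level': 6,  # informational
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(payload).encode('utf-8'), (GELF_HOST, GELF_PORT))
sock.close()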


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.