FATA[0000] Failed service action: Failed to start Graylog Sidecar: "service" failed: exit status 1, Starting graylog-sidecar Unable to start, see /var/log/graylog-sidecar.log and /var/log/graylog-sidecar.err

Description of your problem

Hello, please help: what am I doing wrong?
I run the command sudo graylog-sidecar -service start but receive the following error:

 
FATA[0000] Failed service action: Failed to start Graylog Sidecar: "service" failed: exit status 1, Starting graylog-sidecar Unable to start, see /var/log/graylog-sidecar.log and /var/log/graylog-sidecar.err

The file /var/log/graylog-sidecar.err contains the following:

time="2021-09-08T02:13:16Z" level=info msg="node-id file doesn't exist, generating a new one"
time="2021-09-08T02:13:16Z" level=info msg="Using node-id: 983a9099-91ee-4367-8c3c-0b57bd051f41"
time="2021-09-08T02:13:16Z" level=info msg="No node name was configured, falling back to hostname"
time="2021-09-08T02:13:16Z" level=info msg="Starting signal distributor"
time="2021-09-08T02:13:26Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put \"http://localhost:9000/api/sidecars/983a9099-91ee-4367-8c3c-0b57bd051f41\": dial tcp 127.0.0.1:9000: connect: connection refused"
time="2021-09-08T02:13:36Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put \"http://localhost:9000/api/sidecars/983a9099-91ee-4367-8c3c-0b57bd051f41\": dial tcp 127.0.0.1:9000: connect: connection refused"

The Graylog web interface is running in Docker at http://localhost:9000/.

Operating system information

  • Linux 5.10.47-linuxkit x86_64 (Docker)

Package versions

  • Graylog Collector Sidecar version 1.1.0 (89c7225) [go1.14.3/amd64]
  • Graylog 4.1
  • MongoDB 4.2
  • Elasticsearch 7.10.2
My docker-compose.yml (abridged) contains the following:

version: '3'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongo:
    image: mongo:4.2
    networks:
      - graylog
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
    networks:
      - graylog
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:4.1
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 --  /docker-entrypoint.sh
    networks:
      - graylog
    restart: always
    depends_on:
      - mongo
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
  nginx:
    container_name: nginx
    build:
      context: ./manager/docker/development
      dockerfile: nginx.docker
    volumes:
      - ./manager:/app
    depends_on:
      - php-fpm
    ports:
      - '8080:80'
networks:
  graylog:
    driver: bridge
  ...
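For reference, the GRAYLOG_ROOT_PASSWORD_SHA2 value above is the SHA-256 hash of the default password admin. You can generate a hash for your own password with standard shell tools:

```shell
# Hash a root password for GRAYLOG_ROOT_PASSWORD_SHA2
# (printf avoids the trailing newline that echo would add to the hash input)
printf '%s' admin | sha256sum | cut -d' ' -f1
# 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
```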

Hello && Welcome

Looks like a misconfiguration on your sidecar perhaps.

level=error msg="[UpdateRegistration] Failed to report collector status to server
  • How did you configure your GL sidecar?
  • How did you configure your collector?
  • What documentation did you use for installing your GL sidecar?
  • Is GL Sidecar on the same server as your Graylog Server?

I run Docker containers on my local machine (macOS Big Sur). My docker-compose.yml contains the following:

version: '3'
services:
    mongo:
        image: mongo:4.2
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
        environment:
            - http.host=0.0.0.0
            - transport.host=localhost
            - network.host=0.0.0.0
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
        ulimits:
            memlock:
                soft: -1
                hard: -1
        deploy:
            resources:
                limits:
                    memory: 1g
    graylog:
        image: graylog/graylog:4.1
        environment:
            # CHANGE ME (must be at least 16 characters)!
            - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
            # Password: admin
            - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
            - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
        entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 --  /docker-entrypoint.sh
        restart: always
        depends_on:
            - mongo
            - elasticsearch
        ports:
            # Graylog web interface and REST API
            - "9000:9000"
            # Syslog TCP
            - "1514:1514"
            # Syslog UDP
            - "1514:1514/udp"
            # GELF TCP
            - "12201:12201"
            # GELF UDP
            - "12201:12201/udp"

    nginx:
        container_name: nginx
        build:
            context: ./manager/docker/development
            dockerfile: nginx.docker
        volumes:
            - ./manager:/app
        depends_on:
            - php-fpm
        ports:
            - '8080:80'
    php-fpm:
        container_name: php-fpm
        environment:
            - PHP_IDE_CONFIG=serverName=Docker
        build:
            context: ./manager/docker/development
            dockerfile: php-fpm.docker
        volumes:
            - ./manager:/app
        depends_on:
            - zookeeper
            - kafka
            - clickhouse
            - postgres
            - redis
            - queue-redis
            - storage
            - mailer
            - centrifugo
    php-cli:
        container_name: php-cli
        build:
            context: ./manager/docker/development
            dockerfile: php-cli.docker
        volumes:
            - ./manager:/app
            - composer:/root/.composer/cache
        depends_on:
            - zookeeper
            - kafka
            - clickhouse
            - postgres
            - redis
            - queue-redis
            - storage
            - mailer
            - centrifugo
    queue-worker:
        container_name: queue-worker
        build:
            context: ./manager/docker/development
            dockerfile: php-cli.docker
        volumes:
            - ./manager:/app
            - composer:/root/.composer/cache
        depends_on:
            - clickhouse
            - postgres
            - redis
            - queue-redis
            - storage
            - mailer
            - centrifugo
        command: sh -c "until [ -f .ready ] ; do sleep 1 ; done && php bin/console messenger:consume async -vv"
    node-watch:
        container_name: node-watch
        image: node:14.17.5-alpine
        volumes:
            - ./manager:/app
        working_dir: /app
        command: sh -c "until [ -f .ready ] ; do sleep 1 ; done && npm run watch"
    node:
        container_name: node
        image: node:14.17.5-alpine
        volumes:
            - ./manager:/app
        working_dir: /app
    redis:
        container_name: redis
        image: redis:6.2.5-alpine
        volumes:
            - redis:/data
        command:
            - 'redis-server'
            - '--databases 2'
            - '--save 900 1'
            - '--save 300 10'
            - '--save 60 10000'
            - '--requirepass secret'
    queue-redis:
        container_name: queue-redis
        image: redis:6.2.5-alpine
        volumes:
            - queue-redis:/data
    storage:
        container_name: storage
        build:
            context: ./storage/docker/development
            dockerfile: nginx.docker
        volumes:
            - ./storage:/app
        ports:
            - '8081:80'
    storage-ftp:
        container_name: storage-ftp
        image: stilliard/pure-ftpd
        environment:
            FTP_USER_NAME: app
            FTP_USER_PASS: secret
            FTP_USER_HOME: /app
        volumes:
            - ./storage/public:/app

    mailer:
        container_name: mailer
        image: mailhog/mailhog
        ports:
            - '8082:8025'
    centrifugo:
        container_name: centrifugo
        image: centrifugo/centrifugo:v2.2
        ulimits:
            nofile:
                soft: 65536
                hard: 65536
        environment:
            CENTRIFUGO_SECRET: secret
            CENTRIFUGO_API_KEY: secret
        volumes:
            - ./centrifugo/docker/development/centrifugo:/centrifugo
        ports:
            - '8083:8000'
        command: centrifugo --admin --admin_insecure

    postgres:
        container_name: postgres
        image: postgres:13.4-alpine
        volumes:
            - postgres:/var/lib/postgresql/data
        environment:
            POSTGRES_USER: app
            POSTGRES_PASSWORD: secret
            POSTGRES_DB: app
        ports:
            - '54321:5432'

    clickhouse:
        container_name: clickhouse
        image: yandex/clickhouse-server
        ports:
            - '8123:8123'
        volumes:
            - ./clickhouse-db:/var/lib/clickhouse

    zookeeper:
        container_name: zookeeper
        image: 'bitnami/zookeeper:latest'
        ports:
            - '2181:2181'
        environment:
            - ALLOW_ANONYMOUS_LOGIN=yes

    kafka:
        image: wurstmeister/kafka
        ports:
            - '9092:9092'
        environment:
            KAFKA_ADVERTISED_HOST_NAME: kafka
            #KAFKA_CREATE_TOPICS: "test:1:1"
            KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        depends_on:
            - zookeeper

    kafka-ui:
        image: provectuslabs/kafka-ui
        container_name: kafka-ui
        ports:
            - '8085:8080'
        restart: always
        environment:
            - KAFKA_CLUSTERS_0_NAME=local
            - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
            - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
            - KAFKA_CLUSTERS_0_READONLY=false

volumes:
    postgres:
    redis:
    queue-redis:
    composer:

I took the Graylog settings for docker-compose.yml version 3 from the official documentation.
I only removed the networks attribute from docker-compose.yml and ran it.
Graylog (localhost:9000) and my site (localhost:8080) are running successfully.
The command docker-compose ps shows the following:

Name                              Command                                 State          Ports
centrifugo                        centrifugo --admin --admin ...   Up             0.0.0.0:8083->8000/tcp,:::8083->8000/tcp
clickhouse                        /entrypoint.sh                   Up             0.0.0.0:8123->8123/tcp,:::8123->8123/tcp, 9000/tcp, 9009/tcp
kafka-ui                          /bin/sh -c java $JAVA_OPTS ...   Up             0.0.0.0:8085->8080/tcp,:::8085->8080/tcp
mailer                            MailHog                          Up             1025/tcp, 0.0.0.0:8082->8025/tcp,:::8082->8025/tcp
nginx                             /docker-entrypoint.sh ngin ...   Up             0.0.0.0:8080->80/tcp,:::8080->80/tcp
node                              docker-entrypoint.sh node        Exit 0
node-watch                        docker-entrypoint.sh sh -c ...   Up
php-cli                           docker-php-entrypoint php -a     Exit 0
php-fpm                           docker-php-entrypoint php-fpm    Up             9000/tcp
postgres                          docker-entrypoint.sh postgres    Up             0.0.0.0:54321->5432/tcp,:::54321->5432/tcp
project-manager_elasticsearch_1   /tini -- /usr/local/bin/do ...   Up             9200/tcp, 9300/tcp
project-manager_graylog_1         /usr/bin/tini -- wait-for- ...   Up (healthy)   0.0.0.0:12201->12201/tcp,:::12201->12201/tcp, 0.0.0.0:12201->12201/udp,:::12201->12201/udp,
                                                                                  0.0.0.0:1514->1514/tcp,:::1514->1514/tcp, 0.0.0.0:1514->1514/udp,:::1514->1514/udp,
                                                                                  0.0.0.0:9000->9000/tcp,:::9000->9000/tcp
project-manager_kafka_1           start-kafka.sh                   Up             0.0.0.0:9092->9092/tcp,:::9092->9092/tcp
project-manager_mongo_1           docker-entrypoint.sh mongod      Up             27017/tcp
queue-redis                       docker-entrypoint.sh redis ...   Up             6379/tcp
queue-worker                      docker-php-entrypoint sh - ...   Up
redis                             docker-entrypoint.sh redis ...   Up             6379/tcp
storage                           /docker-entrypoint.sh ngin ...   Up             0.0.0.0:8081->80/tcp,:::8081->80/tcp
storage-ftp                       /bin/sh -c /run.sh -l pure ...   Up             21/tcp, 30000/tcp, 30001/tcp, 30002/tcp, 30003/tcp, 30004/tcp, 30005/tcp, 30006/tcp, 30007/tcp, 30008/tcp, 30009/tcp
zookeeper                         /opt/bitnami/scripts/zooke ...   Up             0.0.0.0:2181->2181/tcp,:::2181->2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp

Then I go to the page localhost:9000/system/sidecars and create a token for graylog-sidecar:
8v89da5n7cmogkpkvpb4a7muaj4clsn1g9vqr9543ik6kl0seef
Then I go to the page localhost:9000/system/inputs and create a Beats input. I only select the global checkbox; the other settings stay at their defaults.
This input is running and has the following settings:

* bind_address: 0.0.0.0
* no_beats_prefix: false
* number_worker_threads: 4
* override_source: <empty>
* port: 5044
* recv_buffer_size: 1048576
* tcp_keepalive: false
* tls_cert_file: <empty>
* tls_client_auth: disabled
* tls_client_auth_cert_file: <empty>
* tls_enable: false
* tls_key_file: <empty>
* tls_key_password: ********

Then I go to the page localhost:9000/system/sidecars/configuration and create a new Collector Configuration with the collector "filebeat on Linux" and the following configuration:

# Needed for Graylog
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

filebeat.inputs:
- input_type: log
  paths:
      # path to my log nginx
    - /var/log/nginx/*.log
  type: log
output.logstash:
   hosts: ["0.0.0.0:5044"]
path:
  data: /var/lib/graylog-sidecar/collectors/filebeat/data
  logs: /var/lib/graylog-sidecar/collectors/filebeat/log
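Note that hosts under output.logstash is the address filebeat sends to, not a bind address, so 0.0.0.0:5044 will not reach the Beats input. A minimal sketch of the output section, assuming the collector's container can resolve the Graylog container by the compose service name graylog:

```yaml
# Send beats to the Graylog Beats input (service name assumes a shared Docker network)
output.logstash:
  hosts: ["graylog:5044"]
```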

Then I log in to the container named nginx (Debian GNU/Linux 10, Linux 5.10.47-linuxkit x86_64; the list of active containers is shown above) with docker exec -it nginx sh and run the following commands for Ubuntu from the official documentation.

  1. Install the Graylog Sidecar repository configuration and Graylog Sidecar itself with the following commands:
    $ wget https://packages.graylog2.org/repo/packages/graylog-sidecar-repository_1-2_all.deb
    $ sudo dpkg -i graylog-sidecar-repository_1-2_all.deb
    $ sudo apt-get update && sudo apt-get install graylog-sidecar
  2. Edit the configuration file with vi /etc/graylog/sidecar/sidecar.yml and set
    server_url: "http://localhost:9000/api/" and server_api_token: "8v89da5n7cmogkpkvpb4a7muaj4clsn1g9vqr9543ik6kl0seef"
  3. $ sudo graylog-sidecar -service install
  4. Check version of sidecar: graylog-sidecar -version
    Output: Graylog Collector Sidecar version 1.1.0 (89c7225) [go1.14.3/amd64]
  5. $ sudo graylog-sidecar -configtest
    Output: INFO[0000] No node name was configured, falling back to hostname
    Config OK
  6. $ sudo graylog-sidecar -service start
Output: FATA[0000] Failed service action: Failed to start Graylog Sidecar: "service" failed: exit status 1, Starting graylog-sidecar
Unable to start, see /var/log/graylog-sidecar.log and /var/log/graylog-sidecar.err
The file /var/log/graylog-sidecar.log is empty.
The file /var/log/graylog-sidecar.err contains the following:
time="2021-09-08T16:32:12Z" level=info msg="Using node-id: 49cdd2e4-0afb-4c67-89d3-a29639a406ac"
time="2021-09-08T16:32:12Z" level=info msg="No node name was configured, falling back to hostname"
time="2021-09-08T16:32:12Z" level=info msg="Starting signal distributor"
time="2021-09-08T16:32:22Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put \"http://localhost:9000/api/sidecars/49cdd2e4-0afb-4c67-89d3-a29639a406ac\": dial tcp 127.0.0.1:9000: connect: connection refused"
time="2021-09-08T16:32:28Z" level=info msg="Using node-id: 49cdd2e4-0afb-4c67-89d3-a29639a406ac"
time="2021-09-08T16:32:28Z" level=info msg="No node name was configured, falling back to hostname"
time="2021-09-08T16:32:28Z" level=info msg="Starting signal distributor"
time="2021-09-08T16:32:32Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put \"http://localhost:9000/api/sidecars/49cdd2e4-0afb-4c67-89d3-a29639a406ac\": dial tcp 127.0.0.1:9000: connect: connection refused"
time="2021-09-08T16:32:38Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put \"http://localhost:9000/api/sidecars/49cdd2e4-0afb-4c67-89d3-a29639a406ac\": dial tcp 127.0.0.1:9000: connect: connection refused"
time="2021-09-08T16:32:42Z" level=error msg="[UpdateRegistration] Failed to report collector status to server: Put \"http://localhost:9000/api/sidecars/49cdd2e4-0afb-4c67-89d3-a29639a406ac\": dial tcp 127.0.0.1:9000: connect: connection refused"


Hello,

Thank you for the added information, much appreciated.

Below is my GL Sidecar config file, with my log path set as shown.

grep -v "^#\|^$" /etc/graylog/sidecar/sidecar.yml

server_url: "https://8.8.8.8:9000/api/"
server_api_token: "sdhhdsdhdhehbccbeb337rrr0wfdjdhadha9sdasdhajdakjd"
node_id: "file:/etc/graylog/sidecar/node-id"
node_name: "ansible"
update_interval: 10
tls_skip_verify: true
send_status: true
log_path: "/var/log/graylog-sidecar"
log_rotate_max_file_size: "10MiB"
log_rotate_keep_files: 10

That may just be a typo, but I'm not sure. I believe you need to go to
localhost:9000/system/users/tokens/ to create the token for the built-in Sidecar System User.

This may have something to do with your Beat using TCP port 5044.

Do you have a firewall enabled? If so, do you have port 5044 opened?

Have you tried to use UDP Beat INPUT instead to see if that works?

I'm not really noticing anything else that could be the problem. Since I don't use Docker on Apple devices, I can't help much there. I would assume that installing and starting Graylog Sidecar on any device would be fairly simple. I have three Ubuntu servers with Graylog Sidecar installed; the only problem I had was connection issues, which were "user error" on my part.

EDIT: I just noticed something. Did you install Graylog Sidecar on Debian? If so, I'm curious why you would start graylog-sidecar with this

And not this

[Ubuntu 14.04 with Upstart]
$ sudo start graylog-sidecar

[Ubuntu 16.04 and later with Systemd]
$ sudo systemctl start graylog-sidecar

Hope that helps

Hello again! Thanks a lot for the answer! My problem was the IP address of the Graylog container. I pinged it from the nginx container, got the correct IP (in my case 172.25.0.16), and replaced it in all the configuration files.
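As a side note: hard-coding the container IP (172.25.0.16) is fragile, because Docker can assign a different address when containers are recreated. On a user-defined compose network, containers resolve each other by service name, so (assuming the Graylog service is named graylog in your docker-compose.yml, as in the compose snippet above) the configs could point at the name instead of the IP. A hypothetical sketch:

```yaml
# Hypothetical sketch -- assumes the compose service is named "graylog"
# and that the sidecar container is attached to the same network.

# /etc/graylog/sidecar/sidecar.yml
server_url: "http://graylog:9000/api/"

# filebeat output section
output.logstash:
  hosts: ["graylog:5044"]
```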
For my nginx container the correct command is service graylog-sidecar start instead of sudo systemctl start graylog-sidecar, but I received the following error:

FATA[0000] Failed service action: Failed to start Graylog Sidecar: "service" failed: exit status 1, Starting graylog-sidecar
Unable to start, see /var/log/graylog-sidecar.log and /var/log/graylog-sidecar.err

/var/log/graylog-sidecar.err contains the following:

time="2021-09-09T12:08:06Z" level=info msg="Using node-id: 90be45fa-50ba-426a-9d5b-683f882f10b6"
time="2021-09-09T12:08:06Z" level=info msg="No node name was configured, falling back to hostname"
time="2021-09-09T12:08:06Z" level=info msg="Starting signal distributor"

root@14bb87d99946:/app# tail -f /var/log/graylog-sidecar.err

time="2021-09-09T12:08:16Z" level=info msg="[filebeat] Configuration change detected, rewriting configuration file."
time="2021-09-09T12:08:16Z" level=info msg="[filebeat] Starting (exec driver)"
time="2021-09-09T12:08:17Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 1/3."
time="2021-09-09T12:08:17Z" level=info msg="[filebeat] Starting (exec driver)"
time="2021-09-09T12:08:18Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 2/3."
time="2021-09-09T12:08:18Z" level=info msg="[filebeat] Starting (exec driver)"
time="2021-09-09T12:08:19Z" level=error msg="[filebeat] Backend finished unexpectedly, trying to restart 3/3."
time="2021-09-09T12:08:19Z" level=info msg="[filebeat] Starting (exec driver)"
time="2021-09-09T12:08:20Z" level=error msg="[filebeat] Unable to start collector after 3 tries, giving up!"
time="2021-09-09T12:08:20Z" level=error msg="[filebeat] Collector output: Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).\n" 

But despite the error, nginx logs are displayed in Graylog. I don't understand the following points:

  1. I could not start the graylog-sidecar service, but nginx logs are displayed in Graylog.
  2. I received the error Collector output: Exiting: data path already locked by another beat, although I created only one configuration file and applied it:
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

filebeat.inputs:
- input_type: log
  paths:
    - /var/log/nginx/nginx-access.log
  type: log
output.logstash:
  hosts: ["172.25.0.16:5044"]
path:
  data: /var/lib/graylog-sidecar/collectors/filebeat/data
  logs: /var/lib/graylog-sidecar/collectors/filebeat/log

The command service --status-all displays the following:

 [ - ]  dbus
 [ + ]  filebeat
 [ - ]  graylog-sidecar
 [ ? ]  hwclock.sh
 [ + ]  nginx
 [ + ]  nginx-debug
 [ - ]  sudo

How to fix this error?


Hello,

It seems you're trying to start two instances of Beats. This is probably the reason you're having trouble starting.

It means that your data path (/var/lib/filebeat) is locked by another Filebeat instance. So execute

 sudo systemctl stop filebeat

or however you stop a service on your system.
Now make sure no filebeat instance is running, then start it with sudo filebeat -e, which prints its logs to the console.

Otherwise, check for a lock file in the data path; depending on the configuration this could be /var/lib/filebeat/filebeat.lock. Delete the file and run sudo filebeat -e again.
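The two steps above (making sure no stray Filebeat is running, then clearing a stale lock) can be sketched as a small shell snippet. The data path here is only an assumption: set DATA_DIR to whatever path.data your configuration actually uses (for the sidecar-managed Filebeat above that would be /var/lib/graylog-sidecar/collectors/filebeat/data).

```shell
#!/bin/sh
# DATA_DIR is an assumed example path -- replace it with the path.data
# value from your own Filebeat / sidecar configuration.
DATA_DIR="${DATA_DIR:-/var/lib/filebeat}"

# Only touch the lock when no filebeat process is running; deleting it
# under a live instance can corrupt the Beat's registry.
if pgrep -x filebeat >/dev/null 2>&1; then
    echo "filebeat is still running; stop it first (e.g. systemctl stop filebeat)"
else
    rm -f "$DATA_DIR/filebeat.lock"
    echo "removed stale lock (if any) from $DATA_DIR"
fi
```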

My Docker knowledge is not very good.

Hope that helps

Thanks a lot! Have a nice day!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.