Docker: Can't open streams and Elasticsearch crashes

Hey there,
I’m in the middle of deploying Graylog 3.0 as a Docker container.
Everything works fine: the server is reachable, MongoDB and Elasticsearch are online and resolvable, and logs are flowing in.

docker logs -f graylog_elasticsearch_1
[2019-03-13T14:47:20,401][INFO ][o.e.n.Node               ] [ZZLyHP9] started
[2019-03-13T14:47:20,421][INFO ][o.e.g.GatewayService     ] [ZZLyHP9] recovered [0] indices into cluster_state
[2019-03-13T14:47:32,394][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] [ZZLyHP9] Deprecated field [template] used, replaced by [index_patterns]
[2019-03-13T14:47:32,523][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZZLyHP9] adding template [graylog-internal] for index patterns [graylog_*]
[2019-03-13T14:47:32,581][INFO ][o.e.c.m.MetaDataCreateIndexService] [ZZLyHP9] [graylog_0] creating index, cause [api], templates [graylog-internal], shards [4]/[0], mappings [message]
[2019-03-13T14:47:32,854][INFO ][o.e.c.r.a.AllocationService] [ZZLyHP9] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[graylog_0][1], [graylog_0][2], [graylog_0][3], [graylog_0][0]] ...]).

Problems come up when I click on “Streams” => “All messages”, where the page stays stuck on ‘Loading…’.
While it does, every refresh of the page produces the following log entries:

docker logs -f graylog_elasticsearch_1
[2019-03-13T15:18:30,220][WARN ][o.e.d.c.ParseField       ] [ZZLyHP9] Deprecated field [split_on_whitespace] used, replaced by [This setting is ignored, the parser always splits on operator]
[2019-03-13T15:18:30,221][WARN ][o.e.d.c.ParseField       ] [ZZLyHP9] Deprecated field [disable_coord] used, replaced by [disable_coord has been removed]
[2019-03-13T15:18:30,221][WARN ][o.e.d.c.ParseField       ] [ZZLyHP9] Deprecated field [disable_coord] used, replaced by [disable_coord has been removed]
[2019-03-13T15:18:30,248][WARN ][o.e.d.c.ParseField       ] [ZZLyHP9] Deprecated field [use_dis_max] used, replaced by [Set [tie_breaker] to 1 instead]
[2019-03-13T15:18:30,248][WARN ][o.e.d.c.ParseField       ] [ZZLyHP9] Deprecated field [auto_generate_phrase_queries] used, replaced by [This setting is ignored, use [type=phrase] instead to make phrase queries out of all text that is within query operators, or use explicitly quoted strings if you need finer-grained control]

If there are too many requests, e.g. with the update interval set to every second, the Elasticsearch container crashes in less than a minute without writing any further log output.

Graylog is then no longer able to connect to Elasticsearch.

Logs from my servers are coming in as they should.

Host: Ubuntu 18.04.2 LTS
Docker: version 18.03.1-ce, build 9ee9f40
docker-compose: version 1.22.0, build f46880fe

version: '2'

services:
    mongo:
        image: mongo:3
        volumes:
            - mongo_data:/data/db
        networks:
            backend:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1
        volumes:
            - es_data:/usr/share/elasticsearch/data
        networks:
            backend:
        environment:
            - http.host=0.0.0.0
            - transport.host=localhost
            - network.host=0.0.0.0
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
        ulimits:
            memlock:
                soft: -1
                hard: -1
        mem_limit: 1g
    graylog:
        image: graylog/graylog:3.0
        volumes:
            - graylog_journal:/usr/share/graylog/data/journal
        environment:
            - GRAYLOG_PASSWORD_SECRET=<some-valid-password>
            - GRAYLOG_ROOT_PASSWORD_SHA2=<some-sha2-hash>
            - GRAYLOG_HTTP_EXTERNAL_URI=http://<static-ip-of-server>:9000/
            - GRAYLOG_ROOT_TIMEZONE=<my-timezone>
        depends_on:
            - mongo
            - elasticsearch
        networks:
            backend:
            custom-br:
                ipv4_address: ${IPADDRESS}    # via export file
        ports:
            - "${PORT_HOST}:9000"             # via export file, map always to same 9000:9000 and 514:514
            - "${PORT_SYSLOG}:514"
            - "${PORT_SYSLOG}:514/udp"
            - "${PORT_RAW}:5555"
            - "${PORT_RAW}:5555/udp"
            - "${PORT_GELF}:12201"
            - "${PORT_GELF}:12201/udp"
volumes:
    mongo_data:
        driver: local
    es_data:
        driver: local
    graylog_journal:
        driver: local
networks:
    backend:
        internal: true
    custom-br:
        external:
            name: custom-br

The same docker-compose file works flawlessly on my laptop but not on my server.
I feed it the logs of an rsyslog instance via a Syslog UDP input:

tail -n 1 /etc/rsyslog.conf
*.* @<ip-of-graylog-container>:514

but it fails with raw input from nc on port 5555 as well.
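For reference, the raw test is just a one-liner like this (the address is a placeholder):

```
# Send one test line to the Raw/Plaintext TCP input on port 5555.
echo "raw input test" | nc -w 1 <ip-of-graylog-container> 5555
```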

I’ve never used graylog before.
Thank you for your help in advance :slight_smile:
- bleak

            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

You really want to give Elasticsearch more resources … as you already noticed, running ES with only 512m of heap isn’t meant for production use. Raise the heap to up to 32GB if you have that amount of RAM available.

And all will flow.
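A sketch of the relevant compose change — 8g is an arbitrary example value, not a sizing recommendation for your workload, and the container limit should stay well above the heap to leave room for off-heap memory:

```yaml
# Sketch: raise the Elasticsearch heap in docker-compose (8g is an example).
    elasticsearch:
        environment:
            - "ES_JAVA_OPTS=-Xms8g -Xmx8g"
        mem_limit: 16g    # headroom above the heap for off-heap usage
```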

Thank you very much for the fast reply.
I tried changing the values, but the maximum that works is 512m.
With >=1g or >=1024m, the container always stops instantly, and the only log output is this one-liner:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.

How much memory do you have available on the Docker host?

Also, the message you provided has no connection to the problem you’re having. I guess you misspelled something when you changed the settings.

Ah well, I was just advised that I cannot set Xms == Xmx.
Now ES can start again.
There is 64GB of RAM on the host.

OK, I can now open the “All messages” stream, but it still crashes with all those deprecated-field warnings.
On my notebook I could process >800 msg/s with 512m; now it crashes with a 20g heap at 3 msg/s.

Error message on the website:

Error Message:
Unable to perform search query
Details:
Search status code:
500
Search response:
cannot GET http://10.16.7.33:9000/api/search/universal/relative?query=%2A&range=172800&filter=streams%3A000000000000000000000001&limit=150&sort=timestamp%3Adesc (500)

Without any log message, nobody is able to help you further.
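One way to surface it: replay the failing search from the error message directly against the Graylog REST API — the body of the 500 response usually names the underlying Elasticsearch exception. Host and credentials below are placeholders:

```
# Replay the failing relative search from the shell; the 500 response body
# should contain the Elasticsearch error.
curl -u admin:<password> \
  "http://<static-ip-of-server>:9000/api/search/universal/relative?query=%2A&range=172800&limit=150&sort=timestamp%3Adesc"
```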

I’ve set it up on a test server and the same error occurs:
I can define an input and it receives messages, but they never become visible in the “All messages” stream.
It’s now running without the heap settings; if I set them to anything except 512m, it again crashes ES with the

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.

warning. When that happens, no log output beyond this line is available.
The ES logs from trying to open the “All messages” stream are in this pastebin:
https://pastebin.com/YKTyFLpU
If I can provide you with a specific log, just name it and I will look for it; I haven’t found anything except

$ docker exec graylog_elasticsearch_1 cat /var/log/grubby_prune_debug 
[1543973801] Start       : Begin search for extraneous debug arguments
[1543973801] Error       : Could not find a bootloader configuration to back up
[1543973801] Exit        : Exiting script

The memory/heap usage while opening the stream:

The JVM is using 279.3MB of 339.6MB heap space and will not attempt to use more than 1.8GB
The updated docker-compose file (heap raised to 1g, container limit to 2g):

version: '2'

services:
    mongo:
        image: mongo:3
        volumes:
            - mongo_data:/data/db
        networks:
            backend:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1
        volumes:
            - es_data:/usr/share/elasticsearch/data
        networks:
            backend:
        environment:
            - http.host=0.0.0.0
            - transport.host=localhost
            - network.host=0.0.0.0
            - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
        ulimits:
            memlock:
                soft: -1
                hard: -1
        mem_limit: 2g
    graylog:
        image: graylog/graylog:3.0
        volumes:
            - graylog_journal:/usr/share/graylog/data/journal
        environment:
            - GRAYLOG_PASSWORD_SECRET=<some-valid-password>
            - GRAYLOG_ROOT_PASSWORD_SHA2=<some-sha2-hash>
            - GRAYLOG_HTTP_EXTERNAL_URI=http://<static-ip-of-server>:9000/
            - GRAYLOG_ROOT_TIMEZONE=<my-timezone>
        depends_on:
            - mongo
            - elasticsearch
        networks:
            backend:
            custom-br:
                ipv4_address: ${IPADDRESS}    # via export file
        ports:
            - "${PORT_HOST}:9000"             # via export file, map always to same 9000:9000 and 514:514
            - "${PORT_SYSLOG}:514"
            - "${PORT_SYSLOG}:514/udp"
            - "${PORT_RAW}:5555"
            - "${PORT_RAW}:5555/udp"
            - "${PORT_GELF}:12201"
            - "${PORT_GELF}:12201/udp"
volumes:
    mongo_data:
        driver: local
    es_data:
        driver: local
    graylog_journal:
        driver: local
networks:
    backend:
        internal: true
    custom-br:
        external:
            name: custom-br

So what if you change the docker-compose file: raise the heap memory assigned to Elasticsearch and raise the memory available to the container …

I’ll try, thanks. What I’ve noticed is:

Defining a stdout output for the “All messages” stream results in all the incoming logs being written to the graylog container’s logs.
The stream still says “Nothing found in stream ‘All messages’”. There is no filter set, and if I define the filter to show everything from an input, it won’t show anything.

Solved.

GRAYLOG_ROOT_TIMEZONE does not set the overall timezone, which resulted in messages being timestamped in the future.

Thank you for your help :slight_smile:

GRAYLOG_ROOT_TIMEZONE is only for the root user… as written in the configuration documentation: http://docs.graylog.org/en/3.0/pages/configuration/server.conf.html#general
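As a footnote for anyone hitting the same symptom: messages stamped “in the future” fall outside relative searches, so streams look empty. A quick clock-skew check between host and container might look like this (the container name assumes the default compose naming; `skew` is just an illustrative helper):

```shell
# Messages timestamped ahead of "now" never show up in relative searches,
# so a clock/timezone skew makes streams look empty. Quick check:
skew() { echo $(( $1 - $2 )); }

host_now=$(date -u +%s)
# On the Docker host, compare against the container's idea of UTC time:
# container_now=$(docker exec graylog_graylog_1 date -u +%s)
# skew "$container_now" "$host_now"    # seconds of drift
skew "$host_now" "$host_now"           # 0 when both read the same clock
```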

