Graylog Docker Container on QNAP Container Station

Has anyone managed to get the Graylog Docker container running in the Container Station application on the QNAP NAS products?

When I run through the suggested install, I get a working mongodb docker, but the Elasticsearch docker always fails to load, which prevents the Graylog container from being created at all.

I am currently running Graylog in a VirtualBox VM on my desktop machine. For many reasons, this is not the optimal solution. If I can get it running on the NAS (quad-core AMD CPU with 16GB of RAM), it would certainly make life easier, but I am relatively green when it comes to Docker work. I have a couple of containers working and have a workable understanding of the environment, but I'm certainly not an expert. Installing per the documentation, though, just doesn't work here.

Please elaborate on what you did exactly and what the result of each command was.

Also, please provide details about what “the Elasticsearch docker always fails to load” means exactly.

Sure. Here are the commands from the documentation:

$ docker run --name mongo -d mongo:3
$ docker run --name elasticsearch \
    -e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
    -d docker.elastic.co/elasticsearch/elasticsearch:5.6.2
$ docker run --link mongo --link elasticsearch \
    -p 9000:9000 -p 12201:12201 -p 514:514 \
    -e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
    -d graylog/graylog:2.4.0-1

Here are the results…
$ docker run --name mongo -d mongo:3

  • Works fine. After completion, there is a container named mongo, using the image mongo:3. Running with no issues.

$ docker run --name elasticsearch \
    -e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
    -d docker.elastic.co/elasticsearch/elasticsearch:5.6.2

[2018-03-23T15:12:28,674][INFO ][o.e.n.Node ] initializing …
[2018-03-23T15:12:29,057][INFO ][o.e.e.NodeEnvironment ] [0C6eTE5] using [1] data paths, mounts [[/ (overlay)]], net usable_space [1.2tb], net total_space [5.2tb], spins? [possibly], types [overlay]
[2018-03-23T15:12:29,061][INFO ][o.e.e.NodeEnvironment ] [0C6eTE5] heap size [1.9gb], compressed ordinary object pointers [true]
[2018-03-23T15:12:29,068][INFO ][o.e.n.Node ] node name [0C6eTE5] derived from node ID [0C6eTE5BS2ClqZdxMPKOhg]; set [node.name] to override
[2018-03-23T15:12:29,069][INFO ][o.e.n.Node ] version[5.6.2], pid[1], build[57e20f3/2017-09-23T13:16:45.703Z], OS[Linux/4.2.8/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_141/25.141-b16]
[2018-03-23T15:12:29,073][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch]
[2018-03-23T15:12:35,714][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [aggs-matrix-stats]
[2018-03-23T15:12:35,715][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [ingest-common]
[2018-03-23T15:12:35,715][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [lang-expression]
[2018-03-23T15:12:35,715][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [lang-groovy]
[2018-03-23T15:12:35,715][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [lang-mustache]
[2018-03-23T15:12:35,715][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [lang-painless]
[2018-03-23T15:12:35,715][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [parent-join]
[2018-03-23T15:12:35,716][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [percolator]
[2018-03-23T15:12:35,716][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [reindex]
[2018-03-23T15:12:35,716][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [transport-netty3]
[2018-03-23T15:12:35,716][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded module [transport-netty4]
[2018-03-23T15:12:35,718][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded plugin [ingest-geoip]
[2018-03-23T15:12:35,718][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded plugin [ingest-user-agent]
[2018-03-23T15:12:35,719][INFO ][o.e.p.PluginsService ] [0C6eTE5] loaded plugin [x-pack]
[2018-03-23T15:12:45,405][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/56] [Main.cc@128] controller (64 bit): Version 5.6.2 (Build 228329870d1c63) Copyright (c) 2017 Elasticsearch BV
[2018-03-23T15:12:45,593][INFO ][o.e.d.DiscoveryModule ] [0C6eTE5] using discovery type [zen]
[2018-03-23T15:12:49,993][INFO ][o.e.n.Node ] initialized
[2018-03-23T15:12:49,994][INFO ][o.e.n.Node ] [0C6eTE5] starting …
[2018-03-23T15:12:50,913][INFO ][o.e.t.TransportService ] [0C6eTE5] publish_address {10.0.3.3:9300}, bound_addresses {0.0.0.0:9300}
[2018-03-23T15:12:50,959][INFO ][o.e.b.BootstrapChecks ] [0C6eTE5] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-03-23T15:12:50,988][INFO ][o.e.n.Node ] [0C6eTE5] stopping …
[2018-03-23T15:12:51,163][INFO ][o.e.n.Node ] [0C6eTE5] stopped
[2018-03-23T15:12:51,164][INFO ][o.e.n.Node ] [0C6eTE5] closing …
[2018-03-23T15:12:51,203][INFO ][o.e.n.Node ] [0C6eTE5] closed

I can't even complete the Graylog install, because the Graylog container needs to link to the Elasticsearch container, which isn't running.

I’ve also tried creating some local storage and giving the container its own IP rather than using NAT, but no luck.

You should address these two failed bootstrap checks; Elasticsearch won't start until both limits are raised.
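
For example (rough sketch; on a QNAP the sysctl change may not survive a reboot, so you might need to add it to a startup script):

$ # on the NAS itself (as admin/root), raise the kernel limit from the second check
$ sysctl -w vm.max_map_count=262144

$ # re-create the Elasticsearch container with a higher open-file limit for the first check
$ docker run --name elasticsearch \
    --ulimit nofile=65536:65536 \
    -e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
    -d docker.elastic.co/elasticsearch/elasticsearch:5.6.2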

That got me pretty far. I ended up working it out with the docker-compose script.
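
For anyone following along, the gist of what I ended up with looks something like this (trimmed down; the nofile ulimit here, plus setting vm.max_map_count on the NAS itself, is what got Elasticsearch past the bootstrap checks):

version: "2"
services:
  mongo:
    image: mongo:3
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
    environment:
      - http.host=0.0.0.0
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
  graylog:
    image: graylog/graylog:2.4.0-1
    environment:
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
    links:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201"
      - "514:514"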

Now I have to figure out how to get my configuration over from the existing server. I don’t need any of the data, but would love to have the rules and configuration if possible.

The pipeline rules and parts of the configuration are stored in MongoDB. You can dump the MongoDB database on your old system and restore it on your new machine.
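
Something along these lines should work, assuming you kept the default database name graylog (adjust host names and paths to your setup):

$ # on the old Graylog server
$ mongodump --host 127.0.0.1 --db graylog --out /tmp/graylog-dump

$ # copy the dump over to the NAS, then restore it into the mongo container
$ docker cp /tmp/graylog-dump mongo:/tmp/graylog-dump
$ docker exec mongo mongorestore --db graylog /tmp/graylog-dump/graylog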

The other part of the configuration is in the Graylog configuration file.
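
With the graylog/graylog Docker image you normally don't edit that file by hand; any option from graylog.conf can be set through an environment variable named GRAYLOG_ plus the option name in upper case, for example in the compose file:

  graylog:
    image: graylog/graylog:2.4.0-1
    environment:
      - GRAYLOG_ROOT_TIMEZONE=UTC                            # root_timezone in graylog.conf
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api   # web_endpoint_uri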

I’m going to have to pick this up later. I am running into a wall trying to get persistent data volumes working with the docker-compose script. I have the process down when doing it manually, but the compose syntax is getting the better of me.

On the NAS, I want the data to all reside under:
/share/CACHEDEV1_DATA/Container/graylog

But if I enter that as the host path, it seems to be ignored. I’m sure it is something I’m doing wrong, but I’ll get eyes back on it later tonight, when they are no longer crossed.
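
For what it's worth, this is roughly what I'm trying (the container-side paths are my best guesses at where each image keeps its data):

version: "2"
services:
  mongo:
    image: mongo:3
    volumes:
      - /share/CACHEDEV1_DATA/Container/graylog/mongo:/data/db
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
    volumes:
      - /share/CACHEDEV1_DATA/Container/graylog/elasticsearch:/usr/share/elasticsearch/data
  graylog:
    image: graylog/graylog:2.4.0-1
    volumes:
      - /share/CACHEDEV1_DATA/Container/graylog/graylog/journal:/usr/share/graylog/data/journal

Maybe the host directories need to exist first, or the users inside the containers can't write to them; I'll check that too.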

Thanks for all of the help so far!

Well, I think I am almost there. Just one more question, regarding the IP configuration…

I would like to give, at the very least, the graylog container its own IP address (192.168.0.14). I'm not that familiar with docker-compose, and I'm having a hard time getting that configured in there.

Also, once I do, do the elasticsearch and mongo containers need their own addresses as well, or will they still be accessible to the linked container?

you could sneak into my lab ( https://github.com/jalogisch/d-gray-lab/blob/master/docker-compose.yml ) or one of jochen's presentation labs ( https://github.com/joschi/osmc-2017-dig-in-the-dirt/blob/master/docker/docker-compose.yml ) to get an idea.

last but not least, read the docs https://docs.docker.com/compose/
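
for the dedicated IP, a macvlan network in the compose file is one way to do it - rough idea only, the parent interface and subnet below are guesses for your network, and keep in mind that with macvlan the NAS itself usually can't reach the container's IP directly:

version: "2"

networks:
  backend:
    driver: bridge           # internal network - mongo and elasticsearch only need this one
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0           # whichever NAS interface carries 192.168.0.0/24
    ipam:
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.0.1

services:
  mongo:
    image: mongo:3
    networks: [backend]
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
    networks: [backend]
  graylog:
    image: graylog/graylog:2.4.0-1
    networks:
      backend: {}
      lan:
        ipv4_address: 192.168.0.14

mongo and elasticsearch don't need their own addresses - as long as graylog is attached to the same internal network, they stay reachable by service name.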

Those look very helpful.

Before I saw it, I worked through most of my issues. Spent the last week going over various things.

The only immediate problem I am still having is with ports. The standard ports are all forwarded with no issue (9000, 9200, 514, 12201), but the additional ports I have defined for inputs do not seem to be working. I have defined inputs on 12211, 12301, and 12302. The inputs aren't receiving anything, and netstat doesn't show those ports open on the host. I will read through the docs you sent over, look at the lab and presentation you provided, and get back if I am still having issues.

Have you updated the Docker container or Docker compose configuration accordingly?
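
Every additional input port has to be published in the ports section of the Graylog service as well, for example (whether a mapping needs /udp or plain TCP depends on the type of each input):

  graylog:
    image: graylog/graylog:2.4.0-1
    ports:
      - "9000:9000"          # web interface and REST API
      - "514:514/udp"        # syslog UDP input
      - "12201:12201/udp"    # GELF UDP input
      - "12211:12211/udp"    # additional inputs - match /udp or TCP to the input type
      - "12301:12301"
      - "12302:12302"

After changing the file, the container has to be re-created (for example with docker-compose up -d) for the new port mappings to take effect.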

Several times, but clearly not correctly. I ended up doing a docker-compose down and then up again, and after that all of the ports were showing up fine.

Right now, it looks like the container is running pretty much exactly the way I want, with the exception of the bridged network. That's not so much a problem as it is me needing to read the docs.

I appreciate all of your help!
