Unable to get log messages

Hi,
I have set up Graylog 2.1.1 using the docker-compose file below; I can reach the Graylog web interface without any issues. I created a Syslog UDP input, and I also created a file in /etc/rsyslog.d/ with the following contents:

*.* @52.88.xxx.xx:514;RSYSLOG_SyslogProtocol23Format

This is the docker-compose file I used:

version: '2'
services:
  some-mongo:
    image: "mongo:3"
  some-elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
  graylog:
    image: graylog2/server:2.1.1-1
    environment:

      GRAYLOG_PASSWORD_SECRET: 123456789123456789
      GRAYLOG_ROOT_PASSWORD_SHA2: b5867a2a76366b304f8334d38e94a77dde29b4a935098d7ad2448a4fefc84174
      GRAYLOG_WEB_ENDPOINT_URI: http://52.88.xxx.xx:9000/api
    links:
      - some-mongo:mongo
      - some-elasticsearch:elasticsearch
    ports:
      - "9000:9000"
      - "12900:12900"
      - "1514:1514"
      - "12200:12200"

I also tried a Syslog TCP input, but I have the same issue.

The attached screenshot shows the Syslog UDP input I created.
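For reference, in rsyslog a single @ forwards messages over UDP while a double @@ forwards them over TCP, so a TCP-forwarding variant of the line above would look roughly like this (same host and port assumed):

*.* @@52.88.xxx.xx:514;RSYSLOG_SyslogProtocol23Format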

Thanks

You configured the Syslog UDP input to listen on port 514/udp but declared port 1514/tcp in your docker-compose.yml.

Hi Jochen,

I updated the docker-compose file from port 1514 to port 514, but the issue remains. The input still shows no traffic:

f84df159 / 012950f53dc3
Network IO: 0B 0B (total: 0B 0B )
Empty messages discarded: 0

What’s the complete docker-compose.yml and the configuration of your Syslog UDP input now?
Has the Syslog UDP input been started in the Graylog Docker container?

This is my docker-compose file after the update:

version: '2'
services:
  some-mongo:
    image: "mongo:3"
  some-elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
  graylog:
    image: graylog2/server:2.1.1-1
    environment:

      GRAYLOG_PASSWORD_SECRET: 123456789123456789
      GRAYLOG_ROOT_PASSWORD_SHA2: b5867a2a76366b304f8334d38e94a77dde29b4a935098d7ad2448a4fefc84174
      GRAYLOG_WEB_ENDPOINT_URI: http://34.215.xxx.xxx:9000/api
    links:
      - some-mongo:mongo
      - some-elasticsearch:elasticsearch
    ports:
      - "9000:9000"
      - "12900:12900"
      - "514:514"
      - "1514:1514"

Even when I check in the terminal, the Syslog UDP input is running. Please see the log I took from the terminal:

graylog_1             | 2017-11-30 05:47:52,828 INFO : org.graylog2.bootstrap.ServerBootstrap - Graylog server up and running.
graylog_1             | 2017-11-30 05:47:52,828 INFO : org.graylog2.shared.initializers.ServiceManagerListener - Services are healthy
graylog_1             | 2017-11-30 05:47:52,837 INFO : org.graylog2.shared.initializers.InputSetupService - Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
graylog_1             | 2017-11-30 05:47:52,908 INFO : org.graylog2.inputs.InputStateListener - Input [Syslog UDP/5a1ead542ab79c00012c7173] is now STARTING

Please read the Docker documentation on the ports section in Docker Compose:

Hint: Syntax for TCP and UDP ports is different.
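For example, in Docker Compose a port mapping defaults to TCP, so a UDP port has to be declared with an explicit /udp suffix. A minimal sketch of the ports section of the graylog service, matching the input above:

    ports:
      - "9000:9000"
      - "12900:12900"
      - "514:514/udp"   # Syslog UDP input; UDP needs the explicit /udp suffix
      - "514:514"       # plain mappings are TCP only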

Thanks.

I will do the same setup without Docker and check whether it works with a normal installation.

Hi Jochen,

I set up Graylog without Docker and I am able to access the web UI; I also uncommented the TCP lines in rsyslog.conf. But when I created the Syslog TCP input, it failed to start. When I checked the notifications, I observed the following:

An input has failed to start (triggered 11 minutes ago)
Input 5a2111d756d84034c726236b has failed to start on node c087192c-7830-4509-b783-b75bf0b7155d for this reason: »Permission denied.«. This means that you are unable to receive any messages from this input. This is mostly an indication for a misconfiguration or an error. You can click here to solve this.

Thanks.

http://docs.graylog.org/en/2.3/pages/faq.html#how-can-i-start-an-input-on-a-port-below-1024
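One common approach for ports below 1024 is to run the input on an unprivileged port (for example 1514) and redirect the privileged syslog port to it on the host; a sketch, with the port numbers only as an example:

iptables -t nat -A PREROUTING -p tcp --dport 514 -j REDIRECT --to-ports 1514
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 1514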

Thanks, now messages are appearing in Graylog.

But when I start it again, I am facing the error below.
Error message:
Unable to perform search query.
I then checked Elasticsearch; it was inactive, so I restarted Elasticsearch and ran this command:
curl -i 'http://127.0.0.1:9200/?pretty'
The connection is still refused, even though Elasticsearch is active now.
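A quick way to narrow down a "connection refused" like this is to check whether anything is actually listening on port 9200 and what the service logged recently, for example:

ss -tlnp | grep 9200
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
journalctl -u elasticsearch.service -n 50 --no-pager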

Here is the Elasticsearch status I checked. Initially the service is running when I type systemctl status elasticsearch.service, but after about 5 minutes it stops automatically:
[root@logm ~]# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2017-12-04 11:43:15 IST; 42s ago
Docs: http://www.elastic.co
Process: 4532 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 4534 (java)
CGroup: /system.slice/elasticsearch.service
└─4534 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInit…

Dec 04 11:43:15 logm systemd[1]: Starting Elasticsearch…
Dec 04 11:43:15 logm systemd[1]: Started Elasticsearch.
Dec 04 11:43:16 logm elasticsearch[4534]: OpenJDK 64-Bit Server VM warning: …N
Hint: Some lines were ellipsized, use -l to show in full.

What’s in the logs of your Graylog and Elasticsearch nodes?
:arrow_right: http://docs.graylog.org/en/2.3/pages/configuration/file_location.html
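On a package-based (DEB/RPM) installation the defaults are typically the locations below; the exact paths depend on how Graylog and Elasticsearch were installed:

tail -f /var/log/graylog-server/server.log
tail -f /var/log/elasticsearch/*.log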

While checking the errors, I observed the following:

2017-12-04T13:08:54.111+05:30 ERROR [Cluster] Couldn't read cluster health for indices [graylog_*] (Could not connect to http://127.0.0.1:9200)
2017-12-04T13:08:54.111+05:30 INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2017-12-04T13:08:56.173+05:30 ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #224).
[2017-12-04T11:28:20,754][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-12-04T11:28:32,140][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [aggs-matrix-stats]
[2017-12-04T11:28:32,141][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [ingest-common]
[2017-12-04T11:28:32,142][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [lang-expression]
[2017-12-04T11:28:32,142][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [lang-groovy]
[2017-12-04T11:28:32,142][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [lang-mustache]
[2017-12-04T11:28:32,142][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [lang-painless]
[2017-12-04T11:28:32,142][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [parent-join]
[2017-12-04T11:28:32,143][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [percolator]
[2017-12-04T11:28:32,143][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [reindex]
[2017-12-04T11:28:32,143][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [transport-netty3]
[2017-12-04T11:28:32,143][INFO ][o.e.p.PluginsService     ] [VJWndjv] loaded module [transport-netty4]
[2017-12-04T11:28:32,144][INFO ][o.e.p.PluginsService     ] [VJWndjv] no plugins loaded
[2017-12-04T11:29:45,893][INFO ][o.e.d.DiscoveryModule    ] [VJWndjv] using discovery type [zen]
[2017-12-04T11:31:42,027][INFO ][o.e.n.Node               ] initialized
[2017-12-04T11:31:42,074][INFO ][o.e.n.Node               ] [VJWndjv] starting ...
[2017-12-04T11:37:53,914][INFO ][o.e.n.Node               ] [] initializing ...
[2017-12-04T11:37:55,534][INFO ][o.e.e.NodeEnvironment    ] [VJWndjv] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [13.9gb], net total_space [22.4gb], spins? [unknown], types [rootfs]
[2017-12-04T11:37:55,535][INFO ][o.e.e.NodeEnvironment    ] [VJWndjv] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-12-04T11:37:55,946][INFO ][o.e.n.Node               ] node name [VJWndjv] derived from node ID [VJWndjv3RE-x-Byxw173Jg]; set [node.name] to override
[2017-12-04T11:37:55,951][INFO ][o.e.n.Node               ] version[5.6.4], pid[4411], build[8bbedf5/2017-10-31T18:55:38.105Z], OS[Linux/3.10.0-693.5.2.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
[2017-12-04T11:37:55,951][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-12-04T11:44:31,812][INFO ][o.e.n.Node               ] [] initializing ...
[2017-12-04T11:44:39,776][INFO ][o.e.e.NodeEnvironment    ] [VJWndjv] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [13.9gb], net total_space [22.4gb], spins? [unknown], types [rootfs]
[2017-12-04T11:44:39,857][INFO ][o.e.e.NodeEnvironment    ] [VJWndjv] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-12-04T11:44:40,942][INFO ][o.e.n.Node               ] node name [VJWndjv] derived from node ID [VJWndjv3RE-x-Byxw173Jg]; set [node.name] to override
[2017-12-04T11:44:40,953][INFO ][o.e.n.Node               ] version[5.6.4], pid[4534], build[8bbedf5/2017-10-31T18:55:38.105Z], OS[Linux/3.10.0-693.5.2.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
[2017-12-04T11:44:40,953][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]

Your Graylog node is unable to connect to the Elasticsearch node at 127.0.0.1.

What is the process to resolve this issue?

Making sure that the Graylog nodes can communicate with the configured Elasticsearch nodes.

If you’re still running Graylog in a Docker container, using the loopback interface (127.0.0.1) to communicate with Elasticsearch is wrong.

My advice would be to upgrade to the latest version of the Graylog Docker image and follow its documentation:


https://hub.docker.com/r/graylog/graylog/
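For a non-Docker installation, the setting that tells Graylog where to reach Elasticsearch is elasticsearch_hosts in the Graylog server configuration; a minimal sketch, assuming Elasticsearch runs locally on the default port:

# /etc/graylog/server/server.conf
elasticsearch_hosts = http://127.0.0.1:9200

After changing it, restart the graylog-server service.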

Hi Jochen,
I am not using Docker in this case.

I found the issue: when I run /usr/share/elasticsearch/bin/elasticsearch, the JVM reports a problem and Elasticsearch gets killed due to low RAM and CPU. I increased the RAM and CPU, started Elasticsearch again, and ran curl -i 'http://127.0.0.1:9200/?pretty'.
Now it is connected.
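For anyone hitting the same symptom, a quick way to confirm that the kernel killed Elasticsearch for lack of memory, and (as an alternative to adding RAM) to shrink its heap, is roughly the following; the jvm.options path assumes a standard package install:

dmesg | grep -iE 'killed process|out of memory'

# /etc/elasticsearch/jvm.options -- lower the heap on small machines, e.g.
-Xms1g
-Xmx1g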

Thanks, Jochen, for your great support.

Now I need to replicate this using Docker.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.