Loading Forever in search

Hello, I’m having problems with our servers. We currently have 4 servers with the following configuration:

HARDWARE

VPS with a high-performance 6-core Intel CPU, 32 GB RAM, 2 TB HD (SSD-boosted), 1 Gbit/s port
Unlimited traffic

SOFTWARE
CentOS 7.4
Graylog 2.3.2 + 3df951e
Oracle Corporation 1.8.0_151 on Linux 3.10.0-693.11.1.el7.x86_64

We have around 300 MikroTik devices sending logs via Syslog UDP, averaging 1,500 to 2,000 messages per second at peak and usually holding around 1,000 messages per second. Once we reach that volume the web interface becomes very slow and queries stop working. I regularly need to extract logs with filters for a given time range, but every search just loads without end. If anyone can help, thanks.

2017-12-19T12:14:02.333-02:00 INFO [InputStateListener] Input [Syslog UDP/5a15c1621baace2adad41d4e] is now STARTING
2017-12-19T12:14:02.335-02:00 INFO [ServerBootstrap] Graylog server up and running.
2017-12-19T12:14:02.338-02:00 INFO [InputStateListener] Input [Syslog UDP/5a15c1621baace2adad41d4e] is now RUNNING
2017-12-19T12:14:42.599-02:00 ERROR [Cluster] Couldn’t read cluster health for indices [graylog_deflector] (Request aborted)
2017-12-19T12:14:42.599-02:00 WARN [V20161130141500_DefaultStreamRecalcIndexRanges] Interrupted or timed out waiting for Elasticsearch cluster, checking again.
2017-12-19T12:15:05.857-02:00 ERROR [Messages] Caught exception during bulk indexing: java.net.SocketTimeoutException: Read timed out, retrying (attempt #1).
2017-12-19T12:15:08.147-02:00 ERROR [Messages] Caught exception during bulk indexing: java.net.SocketTimeoutException: Read timed out, retrying (attempt #1).
2017-12-19T12:15:42.600-02:00 WARN [V20161130141500_DefaultStreamRecalcIndexRanges] Interrupted or timed out waiting for Elasticsearch cluster, checking again.
2017-12-19T12:16:05.985-02:00 ERROR [Messages] Caught exception during bulk indexing: java.net.SocketTimeoutException: Read timed out, retrying (attempt #1).
2017-12-19T12:16:08.201-02:00 ERROR [Messages] Caught exception during bulk indexing: java.net.SocketTimeoutException: Read timed out, retrying (attempt #1).
2017-12-19T12:16:42.601-02:00 WARN [V20161130141500_DefaultStreamRecalcIndexRanges] Interrupted or timed out waiting for Elasticsearch cluster, checking again.
2017-12-19T12:16:58.879-02:00 INFO [Messages] Bulk indexing finally successful (attempt #2).
2017-12-19T12:17:00.046-02:00 INFO [Messages] Bulk indexing finally successful (attempt #2).
2017-12-19T12:17:01.713-02:00 INFO [Messages] Bulk indexing finally successful (attempt #2).
2017-12-19T12:17:02.532-02:00 INFO [Messages] Bulk indexing finally successful (attempt #2).
2017-12-19T12:28:09.833-02:00 ERROR [Messages] Caught exception during bulk indexing: java.net.SocketTimeoutException: Read timed out, retrying (attempt #1).
2017-12-19T12:28:16.157-02:00 ERROR [Messages] Caught exception during bulk indexing: java.net.SocketTimeoutException: Read timed out, retrying (attempt #1).
2017-12-19T12:29:25.404-02:00 INFO [Messages] Bulk indexing finally successful (attempt #2).
2017-12-19T12:29:26.811-02:00 INFO [Messages] Bulk indexing finally successful (attempt #2).
2017-12-19T12:32:47.254-02:00 INFO [connection] Opened connection [connectionId{localValue:8, serverValue:8}] to localhost:27017
2017-12-19T13:00:01.588-02:00 INFO [connection] Opened connection [connectionId{localValue:9, serverValue:9}] to localhost:27017
2017-12-19T13:07:47.450-02:00 INFO [connection] Opened connection [connectionId{localValue:10, serverValue:10}] to localhost:27017
2017-12-19T13:07:53.273-02:00 INFO [connection] Opened connection [connectionId{localValue:11, serverValue:11}] to localhost:27017

How did you install Graylog?
Is Elasticsearch running on the same system?

All the information provided indicates that your Elasticsearch is having a hard time.
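
A quick sanity check, assuming Elasticsearch runs locally on the default port 9200 as in the standard single-server install guides (a minimal sketch, not verified against this installation):

# Is Elasticsearch reachable at all? This prints the node name and version.
curl -XGET 'http://localhost:9200/?pretty'

# Cluster health; "status" should be green, red means some primary shards are unavailable.
curl -XGET 'http://localhost:9200/_cluster/health?pretty'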

Hi, thanks for the reply.
I followed this manual; everything is on the same server, as the manual describes.

You should check the Elasticsearch log file; it might give you some additional information.
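
On a packaged CentOS install the log usually lives under /var/log/elasticsearch/ and is named after the cluster. A minimal sketch, assuming the default location and a cluster named graylog (adjust the file name to your cluster_name):

# Follow the Elasticsearch log while the problem happens
tail -f /var/log/elasticsearch/graylog.log

# Or scan recent history for warnings and errors
grep -E 'WARN|ERROR' /var/log/elasticsearch/graylog.log | tail -n 50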

[2017-12-19T13:53:25,067][WARN ][o.e.t.TransportService ] [-QunLnS] Transport response handler not found of id [4762]
[2017-12-19T13:53:25,214][WARN ][o.e.t.TransportService ] [-QunLnS] Transport response handler not found of id [4761]
[2017-12-19T13:53:26,537][WARN ][o.e.t.TransportService ] [-QunLnS] Transport response handler not found of id [4763]
[2017-12-19T13:53:31,266][INFO ][o.e.n.Node ] [-QunLnS] stopped
[2017-12-19T13:53:31,268][INFO ][o.e.n.Node ] [-QunLnS] closing …
[2017-12-19T13:53:31,316][INFO ][o.e.n.Node ] [-QunLnS] closed
[2017-12-19T13:54:37,818][INFO ][o.e.n.Node ] [] initializing …
[2017-12-19T13:54:38,258][INFO ][o.e.e.NodeEnvironment ] [-QunLnS] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.6tb], net total_space [1.9tb], spins? [unknown], types [rootfs]
[2017-12-19T13:54:38,259][INFO ][o.e.e.NodeEnvironment ] [-QunLnS] heap size [19.9gb], compressed ordinary object pointers [true]
[2017-12-19T13:54:38,305][INFO ][o.e.n.Node ] node name [-QunLnS] derived from node ID [-QunLnS0QWuf7MWifYoJzA]; set [node.name] to override
[2017-12-19T13:54:38,306][INFO ][o.e.n.Node ] version[5.6.5], pid[916], build[6a37571/2017-12-04T07:50:10.466Z], OS[Linux/3.10.0-693.11.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot™ 64-Bit Server VM/1.8.0_151/25.151-b12]
[2017-12-19T13:54:38,306][INFO ][o.e.n.Node ] JVM arguments [-Xms10g, -Xmx20g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-12-19T13:54:41,337][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [aggs-matrix-stats]
[2017-12-19T13:54:41,337][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [ingest-common]
[2017-12-19T13:54:41,338][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-expression]
[2017-12-19T13:54:41,338][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-groovy]
[2017-12-19T13:54:41,338][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-mustache]
[2017-12-19T13:54:41,338][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-painless]
[2017-12-19T13:54:41,338][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [parent-join]
[2017-12-19T13:54:41,339][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [percolator]
[2017-12-19T13:54:41,339][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [reindex]
[2017-12-19T13:54:41,339][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [transport-netty3]
[2017-12-19T13:54:41,339][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [transport-netty4]
[2017-12-19T13:54:41,339][INFO ][o.e.p.PluginsService ] [-QunLnS] no plugins loaded
[2017-12-19T13:54:41,583][ERROR][o.e.b.Bootstrap ] Exception
java.lang.IllegalArgumentException: script.disable_dynamic is not a supported setting, replace with fine-grained script settings.
Dynamic scripts can be enabled for all languages and all operations by replacing script.disable_dynamic: false with script.inline: true and script.stored: true in elasticsearch.yml
at org.elasticsearch.script.ScriptService.(ScriptService.java:124) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.script.ScriptModule.(ScriptModule.java:72) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.script.ScriptModule.create(ScriptModule.java:59) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.node.Node.(Node.java:338) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.node.Node.(Node.java:245) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:233) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.6.5.jar:5.6.5]
[2017-12-19T13:54:41,591][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: script.disable_dynamic is not a supported setting, replace with fine-grained script settings.
Dynamic scripts can be enabled for all languages and all operations by replacing script.disable_dynamic: false with script.inline: true and script.stored: true in elasticsearch.yml
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.6.5.jar:5.6.5]
Caused by: java.lang.IllegalArgumentException: script.disable_dynamic is not a supported setting, replace with fine-grained script settings.
Dynamic scripts can be enabled for all languages and all operations by replacing script.disable_dynamic: false with script.inline: true and script.stored: true in elasticsearch.yml
at org.elasticsearch.script.ScriptService.(ScriptService.java:124) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.script.ScriptModule.(ScriptModule.java:72) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.script.ScriptModule.create(ScriptModule.java:59) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.node.Node.(Node.java:338) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.node.Node.(Node.java:245) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:233) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) ~[elasticsearch-5.6.5.jar:5.6.5]
… 6 more
[2017-12-19T14:00:26,430][INFO ][o.e.n.Node ] [] initializing …
[2017-12-19T14:00:26,721][INFO ][o.e.e.NodeEnvironment ] [-QunLnS] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.6tb], net total_space [1.9tb], spins? [unknown], types [rootfs]
[2017-12-19T14:00:26,721][INFO ][o.e.e.NodeEnvironment ] [-QunLnS] heap size [19.9gb], compressed ordinary object pointers [true]
[2017-12-19T14:00:26,876][INFO ][o.e.n.Node ] node name [-QunLnS] derived from node ID [-QunLnS0QWuf7MWifYoJzA]; set [node.name] to override
[2017-12-19T14:00:26,876][INFO ][o.e.n.Node ] version[5.6.5], pid[799], build[6a37571/2017-12-04T07:50:10.466Z], OS[Linux/3.10.0-693.11.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot™ 64-Bit Server VM/1.8.0_151/25.151-b12]
[2017-12-19T14:00:26,876][INFO ][o.e.n.Node ] JVM arguments [-Xms10g, -Xmx20g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-12-19T14:00:30,880][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [aggs-matrix-stats]
[2017-12-19T14:00:30,880][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [ingest-common]
[2017-12-19T14:00:30,880][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-expression]
[2017-12-19T14:00:30,880][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-groovy]
[2017-12-19T14:00:30,880][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-mustache]
[2017-12-19T14:00:30,880][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-painless]
[2017-12-19T14:00:30,880][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [parent-join]
[2017-12-19T14:00:30,881][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [percolator]
[2017-12-19T14:00:30,881][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [reindex]
[2017-12-19T14:00:30,881][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [transport-netty3]
[2017-12-19T14:00:30,881][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [transport-netty4]
[2017-12-19T14:00:30,883][INFO ][o.e.p.PluginsService ] [-QunLnS] no plugins loaded
[2017-12-19T14:00:31,840][ERROR][o.e.b.Bootstrap ] Exception
java.lang.IllegalArgumentException: script.disable_dynamic is not a supported setting, replace with fine-grained script settings.
Dynamic scripts can be enabled for all languages and all operations by replacing script.disable_dynamic: false with script.inline: true and script.stored: true in elasticsearch.yml
at org.elasticsearch.script.ScriptService.(ScriptService.java:124) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.script.ScriptModule.(ScriptModule.java:72) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.script.ScriptModule.create(ScriptModule.java:59) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.node.Node.(Node.java:338) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.node.Node.(Node.java:245) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:233) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.6.5.jar:5.6.5]
[2017-12-19T14:00:31,848][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: script.disable_dynamic is not a supported setting, replace with fine-grained script settings.
Dynamic scripts can be enabled for all languages and all operations by replacing script.disable_dynamic: false with script.inline: true and script.stored: true in elasticsearch.yml
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.6.5.jar:5.6.5]
Caused by: java.lang.IllegalArgumentException: script.disable_dynamic is not a supported setting, replace with fine-grained script settings.
Dynamic scripts can be enabled for all languages and all operations by replacing script.disable_dynamic: false with script.inline: true and script.stored: true in elasticsearch.yml
at org.elasticsearch.script.ScriptService.(ScriptService.java:124) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.script.ScriptModule.(ScriptModule.java:72) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.script.ScriptModule.create(ScriptModule.java:59) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.node.Node.(Node.java:338) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.node.Node.(Node.java:245) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:233) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) ~[elasticsearch-5.6.5.jar:5.6.5]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) ~[elasticsearch-5.6.5.jar:5.6.5]
… 6 more
[2017-12-19T14:07:31,649][INFO ][o.e.n.Node ] [] initializing …
[2017-12-19T14:07:31,912][INFO ][o.e.e.NodeEnvironment ] [-QunLnS] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.6tb], net total_space [1.9tb], spins? [unknown], types [rootfs]
[2017-12-19T14:07:31,912][INFO ][o.e.e.NodeEnvironment ] [-QunLnS] heap size [19.9gb], compressed ordinary object pointers [true]
[2017-12-19T14:07:31,946][INFO ][o.e.n.Node ] node name [-QunLnS] derived from node ID [-QunLnS0QWuf7MWifYoJzA]; set [node.name] to override
[2017-12-19T14:07:31,947][INFO ][o.e.n.Node ] version[5.6.5], pid[787], build[6a37571/2017-12-04T07:50:10.466Z], OS[Linux/3.10.0-693.11.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot™ 64-Bit Server VM/1.8.0_151/25.151-b12]
[2017-12-19T14:07:31,947][INFO ][o.e.n.Node ] JVM arguments [-Xms10g, -Xmx20g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-12-19T14:07:34,814][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [aggs-matrix-stats]
[2017-12-19T14:07:34,814][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [ingest-common]
[2017-12-19T14:07:34,814][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-expression]
[2017-12-19T14:07:34,814][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-groovy]
[2017-12-19T14:07:34,814][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-mustache]
[2017-12-19T14:07:34,814][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [lang-painless]
[2017-12-19T14:07:34,814][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [parent-join]
[2017-12-19T14:07:34,814][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [percolator]
[2017-12-19T14:07:34,815][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [reindex]
[2017-12-19T14:07:34,815][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [transport-netty3]
[2017-12-19T14:07:34,815][INFO ][o.e.p.PluginsService ] [-QunLnS] loaded module [transport-netty4]
[2017-12-19T14:07:34,815][INFO ][o.e.p.PluginsService ] [-QunLnS] no plugins loaded
[2017-12-19T14:07:38,092][INFO ][o.e.d.DiscoveryModule ] [-QunLnS] using discovery type [zen]
[2017-12-19T14:07:39,103][INFO ][o.e.n.Node ] initialized
[2017-12-19T14:07:39,104][INFO ][o.e.n.Node ] [-QunLnS] starting …
[2017-12-19T14:07:39,421][INFO ][o.e.t.TransportService ] [-QunLnS] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-12-19T14:07:39,436][WARN ][o.e.b.BootstrapChecks ] [-QunLnS] initial heap size [10737418240] not equal to maximum heap size [21474836480]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2017-12-19T14:07:42,551][INFO ][o.e.c.s.ClusterService ] [-QunLnS] new_master {-QunLnS}{-QunLnS0QWuf7MWifYoJzA}{MXNB2o6WTACaWNayQjNVFQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-12-19T14:07:42,589][INFO ][o.e.h.n.Netty4HttpServerTransport] [-QunLnS] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2017-12-19T14:07:42,590][INFO ][o.e.n.Node ] [-QunLnS] started
[2017-12-19T14:07:43,293][INFO ][o.e.g.GatewayService ] [-QunLnS] recovered [1] indices into cluster_state
[2017-12-19T14:08:13,468][WARN ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][young][30][2] duration [1.3s], collections [1]/[1.7s], total [1.3s]/[1.5s], memory [425.8mb]->[97.6mb]/[19.9gb], all_pools {[young] [399.4mb]->[7.1mb]/[399.4mb]}{[survivor] [26.4mb]->[49.8mb]/[49.8mb]}{[old] [0b]->[41.9mb]/[19.5gb]}
[2017-12-19T14:08:13,470][WARN ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][30] overhead, spent [1.3s] collecting in the last [1.7s]
[2017-12-19T14:10:01,109][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][young][120][3] duration [710ms], collections [1]/[1.3s], total [710ms]/[2.2s], memory [479mb]->[416.2mb]/[19.9gb], all_pools {[young] [387.2mb]->[11.2mb]/[399.4mb]}{[survivor] [49.8mb]->[49.8mb]/[49.8mb]}{[old] [41.9mb]->[355.9mb]/[19.5gb]}
[2017-12-19T14:10:01,110][WARN ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][120] overhead, spent [710ms] collecting in the last [1.3s]
[2017-12-19T14:10:35,295][WARN ][o.e.c.s.ClusterService ] [-QunLnS] cluster state update task [shard-started shard id [[graylog_0][0]], allocation id [QG5vu5D2Simu2dHDH4aRNA], primary term [0], message [after existing recovery][shard id [[graylog_0][0]], allocation id [QG5vu5D2Simu2dHDH4aRNA], primary term [0], message [after existing recovery]]] took [32.4s] above the warn threshold of 30s
[2017-12-19T14:10:35,297][INFO ][o.e.c.r.a.AllocationService] [-QunLnS] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[graylog_0][2], [graylog_0][1], [graylog_0][3]] …]).
[2017-12-19T14:11:19,373][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][181] overhead, spent [654ms] collecting in the last [1.4s]
[2017-12-19T14:11:51,257][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][206] overhead, spent [402ms] collecting in the last [1.1s]
[2017-12-19T14:22:18,944][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][756] overhead, spent [255ms] collecting in the last [1s]
[2017-12-19T14:38:57,418][WARN ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][young][1648][187] duration [1s], collections [1]/[3.2s], total [1s]/[19.1s], memory [1gb]->[682mb]/[19.9gb], all_pools {[young] [380.5mb]->[3.9mb]/[399.4mb]}{[survivor] [24.2mb]->[21.1mb]/[49.8mb]}{[old] [656.7mb]->[656.8mb]/[19.5gb]}
[2017-12-19T14:38:57,551][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][1648] overhead, spent [1s] collecting in the last [3.2s]
[2017-12-19T14:39:26,399][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][1671] overhead, spent [302ms] collecting in the last [1s]
[2017-12-19T14:45:32,218][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][1997] overhead, spent [402ms] collecting in the last [1s]
[2017-12-19T16:01:04,082][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][6203] overhead, spent [466ms] collecting in the last [1.4s]
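
Two things stand out in the log above: Elasticsearch repeatedly failed to start because of the obsolete script.disable_dynamic setting, and once it did start it warned that the initial heap (-Xms10g) does not match the maximum heap (-Xmx20g). A minimal sketch of the corresponding changes, assuming the standard RPM file locations; the exact heap size is a sizing decision, not a recommendation:

# /etc/elasticsearch/elasticsearch.yml
# Delete the obsolete line entirely ...
#   script.disable_dynamic: false
# ... or, as the error message suggests, replace it with the fine-grained settings:
script.inline: true
script.stored: true

# /etc/elasticsearch/jvm.options
# Set initial and maximum heap to the same value to avoid resize pauses
-Xms10g
-Xmx10g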

Is Graylog more robust than rsyslog + LogAnalyzer?

I want to use Graylog, and in the future I intend to look into the Enterprise edition, but I’m having the same problem on all 4 servers: searches do not display results and the web interface is very slow.

As you wrote before, your server has 32 GB RAM and you have already allocated 20 GB to the Elasticsearch heap, so I would assume that you are over-provisioning your RAM and the server is swapping.

You should check the memory allocation of all the applications you run and compare it with the recommendations for each of them.

Personally I would assign ~12 GB heap to Elasticsearch and 1 GB heap to Graylog; then you should have enough left for the system and all remaining services.
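
A minimal sketch of where those values go on a packaged install; the paths match what is posted later in this thread, and the numbers are just the suggestion above, not a verified recommendation:

# /etc/elasticsearch/jvm.options -- Xms and Xmx should be set to the same value
-Xms12g
-Xmx12g

# /etc/sysconfig/graylog-server -- only the heap flags are shown here,
# keep the rest of your existing GRAYLOG_SERVER_JAVA_OPTS unchanged
GRAYLOG_SERVER_JAVA_OPTS="-Xms1g -Xmx1g"

After editing, restart both services (for example systemctl restart elasticsearch graylog-server on a CentOS 7 package install) so the new heap sizes take effect.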

Hi, these are my config lines now:

/etc/sysconfig/graylog-server
GRAYLOG_SERVER_JAVA_OPTS="-Xms1g -Xmx1g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEna$
/etc/elasticsearch/jvm.options
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms10g
-Xmx10g

My log:

tail -f /var/log/elasticsearch/graylog.log

[2017-12-19T22:12:22,847][INFO ][o.e.c.s.ClusterService   ] [-QunLnS] new_master {-QunLnS}{-QunLnS0QWuf7MWifYoJzA}{eGw1d8_TQOSr_iFyBDQVvA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-12-19T22:12:22,915][INFO ][o.e.h.n.Netty4HttpServerTransport] [-QunLnS] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2017-12-19T22:12:22,916][INFO ][o.e.n.Node               ] [-QunLnS] started
[2017-12-19T22:12:23,199][INFO ][o.e.g.GatewayService     ] [-QunLnS] recovered [1] indices into cluster_state
[2017-12-19T22:12:52,552][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][30] overhead, spent [367ms] collecting in the last [1s]
[2017-12-19T22:14:46,566][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][young][126][3] duration [758ms], collections [1]/[1.4s], total [758ms]/[1.4s], memory [480.9mb]->[421.3mb]/[9.9gb], all_pools {[young] [397.8mb]->[13.4mb]/[399.4mb]}{[survivor] [49.8mb]->[49.8mb]/[49.8mb]}{[old] [33.2mb]->[358mb]/[9.5gb]}
[2017-12-19T22:14:46,567][WARN ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][126] overhead, spent [758ms] collecting in the last [1.4s]
[2017-12-19T22:15:35,747][INFO ][o.e.c.r.a.AllocationService] [-QunLnS] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[graylog_0][2], [graylog_0][0], [graylog_0][1]] ...]).
[2017-12-19T22:15:55,203][INFO ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][young][186][4] duration [716ms], collections [1]/[1.1s], total [716ms]/[2.1s], memory [785.2mb]->[579.3mb]/[9.9gb], all_pools {[young] [377.2mb]->[1.4mb]/[399.4mb]}{[survivor] [49.8mb]->[49.8mb]/[49.8mb]}{[old] [358mb]->[528.5mb]/[9.5gb]}
[2017-12-19T22:15:55,203][WARN ][o.e.m.j.JvmGcMonitorService] [-QunLnS] [gc][186] overhead, spent [716ms] collecting in the last [1.1s]
[root@log01 ~]# tail -f /var/log/graylog-server/server.log
        at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:951) ~[graylog.jar:?]
        at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:625) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:106) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:86) ~[graylog.jar:?]
        ... 20 more
2017-12-19T22:20:53.791-02:00 INFO  [Messages] Bulk indexing finally successful (attempt #2).


tail -f /var/log/graylog-server/server.log
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:106) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:86) ~[graylog.jar:?]
        ... 20 more
2017-12-19T22:20:53.791-02:00 INFO  [Messages] Bulk indexing finally successful (attempt #2).
2017-12-19T22:26:03.634-02:00 INFO  [connection] Opened connection [connectionId{localValue:9, serverValue:9}] to localhost:27017
2017-12-19T22:26:03.650-02:00 INFO  [connection] Opened connection [connectionId{localValue:10, serverValue:10}] to localhost:27017

My search still returns no results.

Thanks

Your search will only be successful if your ingest is successful.

Playing ping-pong with you, getting only snippets of information at a time, is not amusing.

IMHO you should rethink how you ask for help and what information the person helping you might need.

Hi, sorry, I’m new here. What information do you need so you can help me?

Did you check your Elasticsearch health?

An example of how you could do this:

curl -XGET 'localhost:9200/_cluster/health?pretty'

Does your Graylog show "all green" when you go to System > Overview?

Your complete Elasticsearch and Graylog log files might reveal something that gives an idea of the reason behind this. To be honest, I personally do not have the time for that, but there are a few other people here who might find them useful and be willing to help you.

And please format your postings properly.

Hello, I understand, friend. Thank you very much for your help. Here is the output of the command:
curl -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "graylog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 4,
  "active_shards" : 4,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Hi, could someone please help me? We have 4 servers with this same problem, all with the same settings. I need to extract some information to XLS, but my search does not display anything. I have searched a lot and read the documentation, and I did not find anything that helps. Thanks.

Did you check whether the messages from Graylog are actually ingested into Elasticsearch, or are all messages still sitting in the Graylog journal?
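
One way to check this, assuming the default localhost setup: watch whether the document count of the graylog indices keeps growing, and look at the journal utilization on each node's detail page under System > Nodes in the web interface.

# Document counts per Graylog index; run this twice, a minute apart, and
# compare docs.count -- if it does not grow, messages are stuck before Elasticsearch
curl -XGET 'localhost:9200/_cat/indices/graylog_*?v'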

Hi, thanks for the reply. I have 4 servers; 2 of them are using the journal. The screenshots are from System > Overview and everything is green. Thank you.

(screenshots of System > Overview omitted)

Did you check the connection between Graylog and Elasticsearch? It looks like that is the bottleneck.

If you check System > Overview, is Graylog able to connect to Elasticsearch? Do you have any Outputs configured in Graylog and connected to a stream? If yes, delete them!
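
To double-check the Elasticsearch connection on the Graylog side, a minimal sketch assuming a package install with the default config path and Graylog 2.3's HTTP-based Elasticsearch client:

# Which Elasticsearch endpoint(s) is Graylog configured to use?
grep -n 'elasticsearch_hosts' /etc/graylog/server/server.conf

# That endpoint should answer from the Graylog host:
curl -XGET 'http://127.0.0.1:9200/?pretty'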

Are your 4 servers all together in one cluster, or are these single-server installations? What is your intention?

If you want to have a multi-server installation, please look at this documentation: http://docs.graylog.org/en/2.3/pages/configuration/multinode_setup.html

The following steps should be done to verify that everything is working well and can work together.

Please note: This is my last posting in this ping-pong with you. If you want to have further support, please contact sales and book professional service from Graylog!

  • Check the status of your MongoDB
    You should make sure that all servers are running in a replica set. Get the status with rs.status() when connected to the MongoDB shell. The output should look similar to the following, but include your hostnames:
MongoDB status output:
gl01:SECONDARY> rs.status()
{
	"set" : "gl01",
	"date" : ISODate("2017-12-21T13:27:44.087Z"),
	"myState" : 2,
	"term" : NumberLong(2497),
	"syncingTo" : "gm-01-c:27017",
	"heartbeatIntervalMillis" : NumberLong(2000),
	"members" : [
		{
			"_id" : 0,
			"name" : "gm-01-u:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 150439,
			"optime" : {
				"ts" : Timestamp(1513862863, 3),
				"t" : NumberLong(2497)
			},
			"optimeDate" : ISODate("2017-12-21T13:27:43Z"),
			"lastHeartbeat" : ISODate("2017-12-21T13:27:43.365Z"),
			"lastHeartbeatRecv" : ISODate("2017-12-21T13:27:43.365Z"),
			"pingMs" : NumberLong(0),
			"electionTime" : Timestamp(1513712442, 4),
			"electionDate" : ISODate("2017-12-19T19:40:42Z"),
			"configVersion" : 211000
		},
		{
			"_id" : 1,
			"name" : "gm-01-d:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 874320,
			"optime" : {
				"ts" : Timestamp(1513862864, 1),
				"t" : NumberLong(2497)
			},
			"optimeDate" : ISODate("2017-12-21T13:27:44Z"),
			"syncingTo" : "gm-01-c:27017",
			"configVersion" : 211000,
			"self" : true
		},
		{
			"_id" : 2,
			"name" : "gm-01-c:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 874319,
			"optime" : {
				"ts" : Timestamp(1513862862, 4),
				"t" : NumberLong(2497)
			},
			"optimeDate" : ISODate("2017-12-21T13:27:42Z"),
			"lastHeartbeat" : ISODate("2017-12-21T13:27:43.053Z"),
			"lastHeartbeatRecv" : ISODate("2017-12-21T13:27:43.527Z"),
			"pingMs" : NumberLong(0),
			"syncingTo" : "gm-01-u:27017",
			"configVersion" : 211000
		}
	],
	"ok" : 1
}
  • Check the status of your Elasticsearch
    First check the Cluster Health API with curl -XGET 'localhost:9200/_cluster/health?pretty' and verify that number_of_nodes, number_of_data_nodes and cluster_name match what you expect. Also check that status is green.
Elasticsearch cluster health output:
{
    "active_primary_shards": 71,
    "active_shards": 71,
    "active_shards_percent_as_number": 100.0,
    "cluster_name": "gm-01",
    "delayed_unassigned_shards": 0,
    "initializing_shards": 0,
    "number_of_data_nodes": 3,
    "number_of_in_flight_fetch": 0,
    "number_of_nodes": 3,
    "number_of_pending_tasks": 0,
    "relocating_shards": 0,
    "status": "green",
    "task_max_waiting_in_queue_millis": 0,
    "timed_out": false,
    "unassigned_shards": 0
}

When the above is all working and you have no errors, continue and check your log files.

  • Elasticsearch will mostly only log when something is not working or needs your attention. Read all messages, google them if you do not understand them, fix what is needed, and return to the beginning.
  • Graylog's server.log should contain descriptive error messages. Check whether Graylog is able to connect to Elasticsearch and, if not, read the reason from the log message (see the sketch below).
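
A minimal sketch of both checks, using the log locations already shown earlier in this thread; adjust the paths if your packages log elsewhere:

# Elasticsearch: anything beyond INFO usually needs attention
grep -E 'WARN|ERROR' /var/log/elasticsearch/graylog.log | tail -n 100

# Graylog: watch for Elasticsearch connection problems while it is running
tail -f /var/log/graylog-server/server.log | grep -iE 'elasticsearch|error'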

All the information you have provided indicates that your Graylog has issues with Elasticsearch, or that Elasticsearch is slow.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.