Reduce of empty array with no initial value


(Strix) #1

Hi all,

Setup is a single host. Versions are:

Host - Ubuntu 18.04 (vm)
Graylog - 3.0.0-12
Java - 1.8.0_191
Elasticsearch - 6.6.1
MongoDB - 4.0.6
Apache - 2.4.29

I have already gone through the docs, but I’m completely stuck now.

So I have increased the disk size on the host, but I keep getting this error:
“Reduce of empty array with no initial value”

No data is showing under Sources (I have checked and the data is there).
Looking in the Graylog logs I’ve got:

Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_191]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_191]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_191]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_191]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_191]
at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_191]
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) ~[graylog.jar:?]
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[graylog.jar:?]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373) ~[graylog.jar:?]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381) ~[graylog.jar:?]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237) ~[graylog.jar:?]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) ~[graylog.jar:?]
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111) ~[graylog.jar:?]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[graylog.jar:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[graylog.jar:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) ~[graylog.jar:?]
at io.searchbox.client.http.JestHttpClient.executeRequest(JestHttpClient.java:151) ~[graylog.jar:?]
at io.searchbox.client.http.JestHttpClient.execute(JestHttpClient.java:77) ~[graylog.jar:?]
at org.graylog2.indexer.cluster.jest.JestUtils.execute(JestUtils.java:46) ~[graylog.jar:?]
… 11 more
2019-03-13T14:41:16.453Z ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #8).
2019-03-13T14:41:16.505Z ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #5).
2019-03-13T14:41:16.574Z ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #5).
2019-03-13T14:41:19.157Z ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #6).
2019-03-13T14:41:19.188Z ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #6).
2019-03-13T14:41:19.195Z ERROR [Messages] Caught exception during bulk indexing: io.searchbox.client.config.exception.CouldNotConnectException: Could not connect to http://127.0.0.1:9200, retrying (attempt #6).

Not sure why java.net is getting Connection refused, or what the exception is for. Can anyone point me in the right direction?


(Jan Doberstein) #2

It looks like your Graylog is not able to connect to your Elasticsearch. Either fix the Graylog configuration so it can connect to Elasticsearch, or fix Elasticsearch to listen on the IP that is configured in Graylog. One or the other is needed.
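To narrow down which side is at fault, a quick probe of the port Graylog is trying to reach can help. A minimal sketch, assuming Graylog's default `elasticsearch_hosts` of http://127.0.0.1:9200 (`check_port` is a throwaway helper, not part of any tool):

```shell
# Throwaway helper: attempt a plain TCP connect via bash's /dev/tcp
# pseudo-device and report whether the port accepted the connection.
check_port() {
  host="$1"
  port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Graylog's bulk-indexing errors above point at 127.0.0.1:9200
check_port 127.0.0.1 9200
```

If this prints "closed", Elasticsearch is down or bound to a different address; if it prints "open", the next step would be checking cluster health with curl against the same address.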


(Strix) #3

Thanks Jan,

So I’ve edited /etc/elasticsearch/elasticsearch.yml, added the server’s IP, and restarted Elasticsearch.
I can get into the web GUI and it is showing 2 notifications:

Uncommited messages deleted from journal (triggered 5 minutes ago)

Some messages were deleted from the Graylog journal before they could be written to Elasticsearch. Please verify that your Elasticsearch cluster is healthy and fast enough. You may also want to review your Graylog journal settings and set a higher limit. (Node: e7c68108-3351-42bb-9d53-82a23d72aa7c )

and

Journal utilization is too high (triggered 19 minutes ago)

Journal utilization is too high and may go over the limit soon. Please verify that your Elasticsearch cluster is healthy and fast enough. You may also want to review your Graylog journal settings and set a higher limit. (Node: e7c68108-3351-42bb-9d53-82a23d72aa7c )

I don’t understand what is going wrong. All I’ve done is expand the hard drive?
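For reference, both notifications are governed by the journal settings in Graylog's server.conf; the journal normally drains on its own once Elasticsearch is reachable again. The relevant settings (values shown are the Graylog defaults) look like:

```ini
# Where the on-disk journal lives and how large it may grow before
# Graylog starts deleting the oldest uncommitted messages.
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
message_journal_max_size = 5gb
```

Raising the limit only buys time; the underlying fix is restoring the Elasticsearch connection.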


(Jan Doberstein) #4

Those two messages will pop up if Graylog can’t reach Elasticsearch and is buffering messages in its journal.

I would advise doing one step at a time: fix the Elasticsearch issue first, then look for other issues.


(Strix) #5

OK, thanks Jan. So my Elasticsearch config looks like this:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application


cluster.name: graylog
action.auto_create_index: false



#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Which is what is stated in https://docs.graylog.org/en/3.0/pages/installation/os/ubuntu.html for Elasticsearch. Nothing in server.conf has changed. The only thing that has changed on the server is that I have given it 100 GB more space. This is only a single host, so I shouldn’t need to give it the host’s IP?
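Correct: for a single-host setup where Graylog's `elasticsearch_hosts` is the default http://127.0.0.1:9200, Elasticsearch only needs to answer on loopback. A sketch of what the network section would look like in that case (illustrative values, not from the post above):

```yaml
# Bind to loopback; Graylog on the same host connects via 127.0.0.1.
# Note: setting network.host to the machine's external IP instead makes
# Elasticsearch stop answering on 127.0.0.1:9200, which would match the
# "Connection refused" errors earlier in this thread.
network.host: 127.0.0.1
http.port: 9200
```

If the server's IP was added as `network.host` earlier, that change may itself be what broke the loopback connection.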


(Jan Doberstein) #6

Your configuration does not give any indication of whether Elasticsearch is running or not, nor of whether Elasticsearch is happy or not.

Sherlock, look into the logs of Elasticsearch, check if the service is running, then return with the results.


(Strix) #7

Right, I get you. OK, so looking in /var/log/elasticsearch, “gc.log.0.current” is showing:

: 141001K->16882K(153344K), 0.0577689 secs] 769236K->645118K(1031552K), 0.0580168 secs] [Times: user=0.03 sys=0.01, real=0.06 secs]
2019-03-14T11:59:59.293+0000: 13257.169: Total time for which application threads were stopped: 0.0585981 seconds, Stopping threads took: 0.0001169 seconds
2019-03-14T11:59:59.587+0000: 13257.463: [GC (Allocation Failure) 2019-03-14T11:59:59.587+0000: 13257.463: [ParNew
Desired survivor size 8716288 bytes, new threshold 1 (max 6)
- age   1:   14580368 bytes,   14580368 total
: 153202K->17024K(153344K), 0.0766455 secs] 781438K->651316K(1031552K), 0.0768606 secs] [Times: user=0.06 sys=0.00, real=0.07 secs]
2019-03-14T11:59:59.664+0000: 13257.540: Total time for which application threads were stopped: 0.0860870 seconds, Stopping threads took: 0.0087322 seconds
2019-03-14T12:00:00.768+0000: 13258.644: [GC (Allocation Failure) 2019-03-14T12:00:00.768+0000: 13258.644: [ParNew
Desired survivor size 8716288 bytes, new threshold 1 (max 6)
- age   1:   11263336 bytes,   11263336 total
: 152999K->17024K(153344K), 0.0940852 secs] 787292K->661634K(1031552K), 0.0943137 secs] [Times: user=0.04 sys=0.00, real=0.09 secs]
2019-03-14T12:00:00.862+0000: 13258.739: Total time for which application threads were stopped: 0.0949324 seconds, Stopping threads took: 0.0001117 seconds
2019-03-14T12:00:01.105+0000: 13258.981: [GC (Allocation Failure) 2019-03-14T12:00:01.105+0000: 13258.981: [ParNew
Desired survivor size 8716288 bytes, new threshold 6 (max 6)
- age   1:     252472 bytes,     252472 total
: 152883K->11634K(153344K), 0.0865418 secs] 797494K->664318K(1031552K), 0.0868291 secs] [Times: user=0.04 sys=0.01, real=0.09 secs]
2019-03-14T12:00:01.192+0000: 13259.068: Total time for which application threads were stopped: 0.0875636 seconds, Stopping threads took: 0.0000815 seconds
2019-03-14T12:00:01.317+0000: 13259.193: [GC (Allocation Failure) 2019-03-14T12:00:01.317+0000: 13259.194: [ParNew
Desired survivor size 8716288 bytes, new threshold 6 (max 6)
- age   1:     451272 bytes,     451272 total
- age   2:     210896 bytes,     662168 total
: 147954K->2954K(153344K), 0.0241484 secs] 800638K->655639K(1031552K), 0.0244970 secs] [Times: user=0.03 sys=0.00, real=0.03 secs]
2019-03-14T12:00:01.341+0000: 13259.218: Total time for which application threads were stopped: 0.0252452 seconds, Stopping threads took: 0.0001575 seconds
2019-03-14T12:00:01.396+0000: 13259.273: Total time for which application threads were stopped: 0.0009646 seconds, Stopping threads took: 0.0001323 seconds
2019-03-14T12:00:02.141+0000: 13260.017: [GC (Allocation Failure) 2019-03-14T12:00:02.141+0000: 13260.017: [ParNew
Desired survivor size 8716288 bytes, new threshold 6 (max 6)
- age   1:     647240 bytes,     647240 total
- age   2:     411424 bytes,    1058664 total
- age   3:     210896 bytes,    1269560 total
: 139274K->1800K(153344K), 0.0298616 secs] 791959K->654485K(1031552K), 0.0301069 secs] [Times: user=0.03 sys=0.00, real=0.03 secs]

I’ve also got in there graylog_deprecation.log, graylog_index_indexing_slowlog.log, graylog_index_search_slowlog.log and graylog.log.

Tailing /var/log/elasticsearch/graylog.log I get

[2019-03-14T11:55:31,784][INFO ][o.e.m.j.JvmGcMonitorService] [khOyYWx] [gc][12926] overhead, spent [256ms] collecting in the last [1s]
[2019-03-14T11:55:42,806][INFO ][o.e.m.j.JvmGcMonitorService] [khOyYWx] [gc][12937] overhead, spent [258ms] collecting in the last [1s]
[2019-03-14T11:57:50,970][INFO ][o.e.c.m.MetaDataMappingService] [khOyYWx] [graylog_4/H3oWmL2VRRi0Egoq36nkHw] update_mapping [message]
[2019-03-14T11:58:41,188][INFO ][o.e.m.j.JvmGcMonitorService] [khOyYWx] [gc][13115] overhead, spent [251ms] collecting in the last [1s]
[2019-03-14T12:03:24,046][INFO ][o.e.c.m.MetaDataMappingService] [khOyYWx] [graylog_4/H3oWmL2VRRi0Egoq36nkHw] update_mapping [message]
[2019-03-14T12:03:57,706][INFO ][o.e.c.m.MetaDataMappingService] [khOyYWx] [graylog_4/H3oWmL2VRRi0Egoq36nkHw] update_mapping [message]
[2019-03-14T12:03:58,400][INFO ][o.e.c.m.MetaDataMappingService] [khOyYWx] [graylog_4/H3oWmL2VRRi0Egoq36nkHw] update_mapping [message]
[2019-03-14T12:03:58,622][INFO ][o.e.c.m.MetaDataMappingService] [khOyYWx] [graylog_4/H3oWmL2VRRi0Egoq36nkHw] update_mapping [message]
[2019-03-14T12:04:14,654][INFO ][o.e.c.m.MetaDataMappingService] [khOyYWx] [graylog_4/H3oWmL2VRRi0Egoq36nkHw] update_mapping [message]
[2019-03-14T12:04:14,745][INFO ][o.e.c.m.MetaDataMappingService] [khOyYWx] [graylog_4/H3oWmL2VRRi0Egoq36nkHw] update_mapping [message]

(Jan Doberstein) #8

And did you check with “ps faux”, for example, whether it runs? On what device/port is it listening?

Come on, you run a service on Linux and should apply your basic Linux debugging skills.


(Strix) #9

Yeah, all running.

> elastic+   1086 30.7 19.0 63851384 1548576 ?    Ssl  Mar14 456:09 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSIn
> graylog    1087  0.0  0.0   4628    64 ?        Ss   Mar14   0:00 /bin/sh /usr/share/graylog-server/bin/graylog-server
> graylog    1121 52.1 16.4 3823788 1338936 ?     Sl   Mar14 773:37  \_ /usr/bin/java -Xms1g -Xmx1g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMS

ps -aux | less shows

elastic+ 1086 30.7 19.2 63946516 1564312 ? Ssl Mar14 457:59 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-3755880553445365213 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=oss -Des.distribution.type=deb -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quie

graylog 1121 51.9 16.4 3826140 1338888 ? Sl Mar14 775:00 /usr/bin/java -Xms1g -Xmx1g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -jar -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb /usr/share/graylog-server/graylog.jar server -f /etc/graylog/server/server.conf -np

systemctl shows it is running:

elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-03-14 08:18:45 UTC; 24h ago
Docs: http://www.elastic.co
Main PID: 1086 (java)
Tasks: 48 (limit: 9472)
CGroup: /system.slice/elasticsearch.service
└─1086 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.

As in, what ports are the logs coming in on?
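Listing the listening TCP sockets would answer that directly. A sketch, with the usual defaults noted in comments (actual input ports depend on which Graylog inputs are configured):

```shell
# List listening TCP sockets (add -p, as root, to see the owning process).
# Expected defaults: Elasticsearch on 9200 (HTTP API) and 9300 (transport),
# Graylog's web/API on 9000, plus whatever ports your inputs define
# (e.g. 514 for syslog, 12201 for GELF).
ss -tln
```

If 9200 is absent from this list, Elasticsearch is not bound where Graylog expects it, which matches the Connection refused errors at the start of the thread.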