Elasticsearch stops running on reboot

Hello everyone, I am new here and also new to Graylog and Elasticsearch.

In short, because the title doesn’t say it all: the Elasticsearch service works at first, but if I restart it (for some new configuration) or reboot the machine, it stops and never starts again.

Here are the specs of my system and configuration:

System

  • HP ProLiant DL165 running ESXi 6 U3
  • Graylog runs on a VM with Ubuntu 20.04.5 LTS (kernel 5.4.0), fully updated as of now
  • VM has 16GB of RAM and 400GB of space
  • VM has 2 interfaces on 2 different subnets to monitor

Configurations
In general I am just trying to get this working; everything is basics and defaults, with a little searching to make the whole system function properly.
A single VM contains Graylog AND Elasticsearch, as well as MongoDB. I am logging 8 assets, and they do not produce much traffic.

Graylog (top down through server.conf), version 4.3.7

  • root_password_sha2 is set
  • http_bind_address = 172.16.64.203:9000 (ipv6 is disabled)
  • elasticsearch_hosts = http://172.16.64.203:9200
  • mongodb_uri = mongodb://localhost/graylog (default)
  • Xms1g and Xmx1g

Elasticsearch (top down through elasticsearch.yml), version 7.10.2

  • node.name: Graylog
  • network.host: 172.16.64.203
  • http.port: 9200
  • discovery.seed_hosts: ["Graylog"]
    I also read that Elasticsearch should get half of the machine's RAM (no more, no less), so -Xms8g and -Xmx8g are set in /etc/elasticsearch/jvm.options.d/jvm.options
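
(For reference, a sketch of what that drop-in contains; the file name under jvm.options.d is arbitrary as long as it ends in .options. Note that drop-ins are appended after the stock /etc/elasticsearch/jvm.options, which is why the JVM arguments in the logs below list both -Xms1g/-Xmx1g and -Xms8g/-Xmx8g; the last occurrence wins, and judging by the "heap size [8gb]" log line the node does come up with an 8 GB heap.)

```
# /etc/elasticsearch/jvm.options.d/jvm.options
# Heap pinned to half of the VM's 16 GB RAM, with min equal to max
-Xms8g
-Xmx8g
```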

Other information
openjdk version "11.0.16" 2022-07-19

Anything not mentioned is default.

After the first install following the Graylog docs, everything worked fine. I installed Winlogbeat on a Windows server and am able to receive its input. I also receive logs from a Syslog input on a Linux server.

This is my second install, so this time I left the VM running overnight to make sure everything worked as expected. I then restarted the Elasticsearch service, and it never came back up. Hitting http://172.16.64.203:9200 with curl I get a connection refused. I also get ERR_CONNECTION_REFUSED from the browser when I try to open the Graylog web interface. And no, it is not a firewall issue, since everything was working fine before the restart.

Any suggestions please? Any help is much appreciated. Thank you in advance.

I am attaching the log from /var/log/elasticsearch/:

[2022-09-22T11:55:50,783][INFO ][o.e.n.Node               ] [Graylog] version[7.10.2], pid[5094], build[oss/deb/747e1cc71def077253878a59143c1f785afa92b9/2021-01-13T00:42:12.435326Z], OS[Linux/5.4.0-126-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/15.0.1/15.0.1+9]
[2022-09-22T11:55:50,789][INFO ][o.e.n.Node               ] [Graylog] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-09-22T11:55:50,790][INFO ][o.e.n.Node               ] [Graylog] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-6532279290430572856, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms8g, -Xmx8g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-6532279290430572856, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=4294967296, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=oss, -Des.distribution.type=deb, -Des.bundled_jdk=true]
[2022-09-22T11:55:52,523][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [aggs-matrix-stats]
[2022-09-22T11:55:52,524][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [analysis-common]
[2022-09-22T11:55:52,524][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [geo]
[2022-09-22T11:55:52,525][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [ingest-common]
[2022-09-22T11:55:52,525][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [ingest-geoip]
[2022-09-22T11:55:52,526][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [ingest-user-agent]
[2022-09-22T11:55:52,526][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [kibana]
[2022-09-22T11:55:52,527][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [lang-expression]
[2022-09-22T11:55:52,527][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [lang-mustache]
[2022-09-22T11:55:52,528][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [lang-painless]
[2022-09-22T11:55:52,528][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [mapper-extras]
[2022-09-22T11:55:52,529][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [parent-join]
[2022-09-22T11:55:52,529][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [percolator]
[2022-09-22T11:55:52,530][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [rank-eval]
[2022-09-22T11:55:52,530][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [reindex]
[2022-09-22T11:55:52,531][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [repository-url]
[2022-09-22T11:55:52,531][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [systemd]
[2022-09-22T11:55:52,532][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [transport-netty4]
[2022-09-22T11:55:52,533][INFO ][o.e.p.PluginsService     ] [Graylog] no plugins loaded
[2022-09-22T11:55:52,604][INFO ][o.e.e.NodeEnvironment    ] [Graylog] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-ubuntu--lv)]], net usable_space [79.3gb], net total_space [97.8gb], types [ext4]
[2022-09-22T11:55:52,605][INFO ][o.e.e.NodeEnvironment    ] [Graylog] heap size [8gb], compressed ordinary object pointers [true]
[2022-09-22T11:55:52,791][INFO ][o.e.n.Node               ] [Graylog] node name [Graylog], node ID [k6eeD6KWQFuxleWoC0vZCw], cluster name [elasticsearch], roles [master, remote_cluster_client, data, ingest]
[2022-09-22T11:55:59,498][INFO ][o.e.t.NettyAllocator     ] [Graylog] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-09-22T11:55:59,628][INFO ][o.e.d.DiscoveryModule    ] [Graylog] using discovery type [zen] and seed hosts providers [settings]
[2022-09-22T11:56:00,138][WARN ][o.e.g.DanglingIndicesState] [Graylog] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2022-09-22T11:56:00,483][INFO ][o.e.n.Node               ] [Graylog] initialized
[2022-09-22T11:56:00,484][INFO ][o.e.n.Node               ] [Graylog] starting ...
[2022-09-22T11:56:00,706][INFO ][o.e.t.TransportService   ] [Graylog] publish_address {172.16.64.203:9300}, bound_addresses {172.16.64.203:9300}
[2022-09-22T11:56:01,079][INFO ][o.e.b.BootstrapChecks    ] [Graylog] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-09-22T11:56:01,083][INFO ][o.e.c.c.Coordinator      ] [Graylog] cluster UUID [mOeSNqxhTlyN6sjIx8QmCw]
[2022-09-22T11:56:01,223][INFO ][o.e.c.s.MasterService    ] [Graylog] elected-as-master ([1] nodes joined)[{Graylog}{k6eeD6KWQFuxleWoC0vZCw}{orTVpafwSAuUh5vFrxdUwg}{172.16.64.203}{172.16.64.203:9300}{dimr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 16, version: 152, delta: master node changed {previous [], current [{Graylog}{k6eeD6KWQFuxleWoC0vZCw}{orTVpafwSAuUh5vFrxdUwg}{172.16.64.203}{172.16.64.203:9300}{dimr}]}
[2022-09-22T11:56:01,317][INFO ][o.e.c.s.ClusterApplierService] [Graylog] master node changed {previous [], current [{Graylog}{k6eeD6KWQFuxleWoC0vZCw}{orTVpafwSAuUh5vFrxdUwg}{172.16.64.203}{172.16.64.203:9300}{dimr}]}, term: 16, version: 152, reason: Publication{term=16, version=152}
[2022-09-22T11:56:01,352][INFO ][o.e.h.AbstractHttpServerTransport] [Graylog] publish_address {172.16.64.203:9200}, bound_addresses {172.16.64.203:9200}
[2022-09-22T11:56:01,352][INFO ][o.e.n.Node               ] [Graylog] started
[2022-09-22T11:56:01,597][INFO ][o.e.g.GatewayService     ] [Graylog] recovered [3] indices into cluster_state

and /var/log/graylog-server/server.log:

2022-09-22T12:05:47.969+03:00 INFO [ImmutableFeatureFlagsCollector] Following feature flags are used: {}
2022-09-22T12:05:49.293+03:00 INFO [CmdLineTool] Loaded plugin: AWS plugins 4.3.7 [org.graylog.aws.AWSPlugin]
2022-09-22T12:05:49.295+03:00 INFO [CmdLineTool] Loaded plugin: Collector 4.3.7 [org.graylog.plugins.collector.CollectorPlugin]
2022-09-22T12:05:49.297+03:00 INFO [CmdLineTool] Loaded plugin: Threat Intelligence Plugin 4.3.7 [org.graylog.plugins.threatintel.ThreatIntelPlugin]
2022-09-22T12:05:49.297+03:00 INFO [CmdLineTool] Loaded plugin: Elasticsearch 6 Support 4.3.7+05bccc7 [org.graylog.storage.elasticsearch6.Elasticsearch6Plugin]
2022-09-22T12:05:49.298+03:00 INFO [CmdLineTool] Loaded plugin: Elasticsearch 7 Support 4.3.7+05bccc7 [org.graylog.storage.elasticsearch7.Elasticsearch7Plugin]
2022-09-22T12:05:49.333+03:00 INFO [CmdLineTool] Running with JVM arguments: -Xms1g -Xmx1g -XX:NewRatio=1 -XX:+ResizeTLAB -XX:-OmitStackTraceInFastThrow -Djdk.tls.acknowledgeCloseNotify=true -Dlog4j2.formatMsgNoLookups=true -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb
2022-09-22T12:05:50.153+03:00 INFO [cluster] Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=5000}
2022-09-22T12:05:50.245+03:00 INFO [cluster] Cluster description not yet available. Waiting for 30000 ms before timing out
2022-09-22T12:05:50.289+03:00 INFO [connection] Opened connection [connectionId{localValue:1, serverValue:9}] to localhost:27017
2022-09-22T12:05:50.306+03:00 INFO [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 0, 28]}, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=9932882}
2022-09-22T12:05:50.347+03:00 INFO [connection] Opened connection [connectionId{localValue:2, serverValue:10}] to localhost:27017
2022-09-22T12:05:50.393+03:00 INFO [connection] Closed connection [connectionId{localValue:2, serverValue:10}] to localhost:27017 because the pool has been closed.
2022-09-22T12:05:50.396+03:00 INFO [MongoDBPreflightCheck] Connected to MongoDB version 4.0.28
2022-09-22T12:05:50.554+03:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: Failed to connect to /172.16.64.203:9200. - Connection refused (Connection refused).
2022-09-22T12:05:50.556+03:00 INFO [VersionProbe] Elasticsearch is not available. Retry #1
2022-09-22T12:05:55.561+03:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: Failed to connect to /172.16.64.203:9200. - Connection refused (Connection refused).
2022-09-22T12:05:55.562+03:00 INFO [VersionProbe] Elasticsearch is not available. Retry #2
2022-09-22T12:06:00.567+03:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: Failed to connect to /172.16.64.203:9200. - Connection refused (Connection refused).
2022-09-22T12:06:00.568+03:00 INFO [VersionProbe] Elasticsearch is not available. Retry #3
2022-09-22T12:06:05.574+03:00 ERROR [VersionProbe] Unable to retrieve version from Elasticsearch node: Failed to connect to /172.16.64.203:9200. - Connection refused (Connection refused).
2022-09-22T12:06:05.575+03:00 INFO [VersionProbe] Elasticsearch is not available. Retry #4

Hello @miltiadis.p && Welcome.

Since these are default settings, perhaps try the following in your elasticsearch.yml file. I filled in the IP address from the errors in the log files above. Restart the service and tail the Elasticsearch log file.

cluster.name: graylog
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.16.64.203
http.port: 9200
action.auto_create_index: false
discovery.type: single-node
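
To apply it, restart and watch the log. Note that the Elasticsearch log file is named after cluster.name, so with cluster.name: graylog it should be /var/log/elasticsearch/graylog.log (paths assumed from the default deb layout):

```
sudo systemctl restart elasticsearch.service
sudo tail -f /var/log/elasticsearch/graylog.log
```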

Also check the MongoDB log file. I'm assuming your mongod.conf file is default?

@miltiadis.p

I just noticed something else after looking back over the logs.

[2022-09-22T11:56:00,138][WARN ][o.e.g.DanglingIndicesState] [Graylog] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually

I would cURL Elasticsearch and make sure your indices are good, specifically any dangling indices.

Here are some curl commands you can use.

curl -X GET "localhost:9200/_dangling?pretty"

curl -X GET "localhost:9200/_cat/indices?pretty"
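
And if the indices list gets long, here is a small helper (mine, not any Graylog tooling) that turns the plain-text _cat/indices output into dicts. It assumes ES 7's default column order, so pin the columns with ?h= if you ever script against it seriously:

```python
# Parse the output of `curl -s localhost:9200/_cat/indices`.
# Assumed default ES 7 column order: health status index uuid pri rep
#                                    docs.count docs.deleted store.size pri.store.size
def parse_cat_indices(text):
    columns = ["health", "status", "index", "uuid", "pri", "rep",
               "docs.count", "docs.deleted", "store.size", "pri.store.size"]
    rows = []
    for line in text.strip().splitlines():
        fields = line.split()
        if not fields or fields[0] == "health":  # skip the optional ?v header row
            continue
        rows.append(dict(zip(columns, fields)))
    return rows

# Sample output with made-up UUIDs, just to show the shape
sample = (
    "green open graylog_0 aBcDeFgHiJkLmNoPqRsTuV 4 0 1534 0 1.2mb 1.2mb\n"
    "yellow open graylog_1 aBcDeFgHiJkLmNoPqRsTuW 4 1 0 0 208b 208b\n"
)

for row in parse_cat_indices(sample):
    print(row["index"], row["health"])  # graylog_0 green / graylog_1 yellow
```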

Thank you for your quick reply @gsmith, and sorry for replying late.

I forgot to mention that Elasticsearch fails with "(code=killed, signal=ABRT)"

According to your suggestion I added:

cluster.name: graylog
action.auto_create_index: false
discovery.type: single-node

The rest was already there. Still no progress; it fails to start with the same error.
Here is the log:

[2022-09-23T15:20:15,196][INFO ][o.e.n.Node               ] [Graylog] version[7.10.2], pid[900], build[oss/deb/747e1cc71def077253878a59143c1f785afa92b9/2021-01-13T00:42:12.435326Z], OS[Linux/5.4.0-126-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/15.0.1/15.0.1+9]
[2022-09-23T15:20:15,407][INFO ][o.e.n.Node               ] [Graylog] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-09-23T15:20:15,408][INFO ][o.e.n.Node               ] [Graylog] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-10926118675694118286, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms8g, -Xmx8g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-10926118675694118286, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=4294967296, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=oss, -Des.distribution.type=deb, -Des.bundled_jdk=true]
[2022-09-23T15:20:19,867][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [aggs-matrix-stats]
[2022-09-23T15:20:19,868][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [analysis-common]
[2022-09-23T15:20:19,868][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [geo]
[2022-09-23T15:20:19,869][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [ingest-common]
[2022-09-23T15:20:19,869][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [ingest-geoip]
[2022-09-23T15:20:19,870][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [ingest-user-agent]
[2022-09-23T15:20:19,870][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [kibana]
[2022-09-23T15:20:19,871][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [lang-expression]
[2022-09-23T15:20:19,871][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [lang-mustache]
[2022-09-23T15:20:19,871][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [lang-painless]
[2022-09-23T15:20:19,872][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [mapper-extras]
[2022-09-23T15:20:19,872][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [parent-join]
[2022-09-23T15:20:19,873][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [percolator]
[2022-09-23T15:20:19,873][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [rank-eval]
[2022-09-23T15:20:19,874][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [reindex]
[2022-09-23T15:20:19,874][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [repository-url]
[2022-09-23T15:20:19,875][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [systemd]
[2022-09-23T15:20:19,875][INFO ][o.e.p.PluginsService     ] [Graylog] loaded module [transport-netty4]
[2022-09-23T15:20:19,876][INFO ][o.e.p.PluginsService     ] [Graylog] no plugins loaded
[2022-09-23T15:20:20,027][INFO ][o.e.e.NodeEnvironment    ] [Graylog] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-ubuntu--lv)]], net usable_space [79.3gb], net total_space [97.8gb], types [ext4]
[2022-09-23T15:20:20,029][INFO ][o.e.e.NodeEnvironment    ] [Graylog] heap size [8gb], compressed ordinary object pointers [true]
[2022-09-23T15:20:20,840][INFO ][o.e.n.Node               ] [Graylog] node name [Graylog], node ID [k6eeD6KWQFuxleWoC0vZCw], cluster name [elasticsearch], roles [master, remote_cluster_client, data, ingest]
[2022-09-23T15:20:31,344][INFO ][o.e.t.NettyAllocator     ] [Graylog] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-09-23T15:20:31,708][INFO ][o.e.d.DiscoveryModule    ] [Graylog] using discovery type [zen] and seed hosts providers [settings]
[2022-09-23T15:20:32,390][WARN ][o.e.g.DanglingIndicesState] [Graylog] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2022-09-23T15:20:32,865][INFO ][o.e.n.Node               ] [Graylog] initialized
[2022-09-23T15:20:32,876][INFO ][o.e.n.Node               ] [Graylog] starting ...
[2022-09-23T15:20:34,688][INFO ][o.e.t.TransportService   ] [Graylog] publish_address {172.16.64.203:9300}, bound_addresses {172.16.64.203:9300}
[2022-09-23T15:20:35,444][INFO ][o.e.b.BootstrapChecks    ] [Graylog] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-09-23T15:20:35,448][INFO ][o.e.c.c.Coordinator      ] [Graylog] cluster UUID [mOeSNqxhTlyN6sjIx8QmCw]
[2022-09-23T15:20:35,725][INFO ][o.e.c.s.MasterService    ] [Graylog] elected-as-master ([1] nodes joined)[{Graylog}{k6eeD6KWQFuxleWoC0vZCw}{WD7I0e8zRLmskl1_yuIRHw}{172.16.64.203}{172.16.64.203:9300}{dimr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 17, version: 161, delta: master node changed {previous [], current [{Graylog}{k6eeD6KWQFuxleWoC0vZCw}{WD7I0e8zRLmskl1_yuIRHw}{172.16.64.203}{172.16.64.203:9300}{dimr}]}
[2022-09-23T15:20:35,824][INFO ][o.e.c.s.ClusterApplierService] [Graylog] master node changed {previous [], current [{Graylog}{k6eeD6KWQFuxleWoC0vZCw}{WD7I0e8zRLmskl1_yuIRHw}{172.16.64.203}{172.16.64.203:9300}{dimr}]}, term: 17, version: 161, reason: Publication{term=17, version=161}
[2022-09-23T15:20:35,942][INFO ][o.e.h.AbstractHttpServerTransport] [Graylog] publish_address {172.16.64.203:9200}, bound_addresses {172.16.64.203:9200}
[2022-09-23T15:20:35,943][INFO ][o.e.n.Node               ] [Graylog] started
[2022-09-23T15:20:36,377][INFO ][o.e.g.GatewayService     ] [Graylog] recovered [3] indices into cluster_state

Here is the MongoDB log file. Yes, the config is default and unchanged by me. The service is running, though. The Graylog service is also running.

2022-09-23T15:19:22.379+0300 I CONTROL  [main] ***** SERVER RESTARTED *****
2022-09-23T15:19:22.680+0300 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2022-09-23T15:19:30.524+0300 I CONTROL  [initandlisten] MongoDB starting : pid=906 port=27017 dbpath=/var/lib/mongodb 64-bit host=Graylog
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten] db version v4.0.28
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten] git version: af1a9dc12adcfa83cc19571cb3faba26eeddac92
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten] allocator: tcmalloc
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten] modules: none
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten] build environment:
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten]     distmod: ubuntu1804
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten]     distarch: x86_64
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten]     target_arch: x86_64
2022-09-23T15:19:30.525+0300 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1", port: 27017 }, processManagement: { timeZoneInfo: "/usr/share/zoneinfo" }, storage: { dbPath: "/var/lib/mongodb", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
2022-09-23T15:19:30.843+0300 I STORAGE  [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2022-09-23T15:19:30.844+0300 I STORAGE  [initandlisten] 
2022-09-23T15:19:30.844+0300 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2022-09-23T15:19:30.844+0300 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2022-09-23T15:19:30.844+0300 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=7477M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2022-09-23T15:19:34.979+0300 I STORAGE  [initandlisten] WiredTiger message [1663935574:979233][906:0x7f555af77c80], txn-recover: Main recovery loop: starting at 5/18048 to 6/256
2022-09-23T15:19:35.288+0300 I STORAGE  [initandlisten] WiredTiger message [1663935575:288921][906:0x7f555af77c80], txn-recover: Recovering log 5 through 6
2022-09-23T15:19:35.488+0300 I STORAGE  [initandlisten] WiredTiger message [1663935575:488069][906:0x7f555af77c80], txn-recover: Recovering log 6 through 6
2022-09-23T15:19:35.632+0300 I STORAGE  [initandlisten] WiredTiger message [1663935575:632522][906:0x7f555af77c80], txn-recover: Set global recovery timestamp: 0
2022-09-23T15:19:36.121+0300 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2022-09-23T15:19:36.196+0300 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
2022-09-23T15:19:36.272+0300 I CONTROL  [initandlisten] 
2022-09-23T15:19:36.273+0300 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2022-09-23T15:19:36.273+0300 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2022-09-23T15:19:36.273+0300 I CONTROL  [initandlisten] 
2022-09-23T15:19:37.310+0300 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
2022-09-23T15:19:37.474+0300 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/diagnostic.data'
2022-09-23T15:19:37.953+0300 I NETWORK  [initandlisten] waiting for connections on port 27017
2022-09-23T15:20:07.474+0300 I NETWORK  [listener] connection accepted from 127.0.0.1:51058 #1 (1 connection now open)
2022-09-23T15:20:07.522+0300 I NETWORK  [conn1] received client metadata from 127.0.0.1:51058 conn1: { driver: { name: "mongo-java-driver|legacy", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.4.0-126-generic" }, platform: "Java/Ubuntu/11.0.16+8-post-Ubuntu-0ubuntu120.04" }
2022-09-23T15:20:07.666+0300 I NETWORK  [listener] connection accepted from 127.0.0.1:51066 #2 (2 connections now open)
2022-09-23T15:20:07.667+0300 I NETWORK  [conn2] received client metadata from 127.0.0.1:51066 conn2: { driver: { name: "mongo-java-driver|legacy", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.4.0-126-generic" }, platform: "Java/Ubuntu/11.0.16+8-post-Ubuntu-0ubuntu120.04" }
2022-09-23T15:20:08.002+0300 I NETWORK  [conn2] end connection 127.0.0.1:51066 (1 connection now open)
2022-09-23T15:20:08.004+0300 I NETWORK  [conn1] end connection 127.0.0.1:51058 (0 connections now open)
2022-09-23T15:21:17.261+0300 I NETWORK  [listener] connection accepted from 127.0.0.1:40330 #3 (1 connection now open)
2022-09-23T15:21:17.263+0300 I NETWORK  [conn3] received client metadata from 127.0.0.1:40330 conn3: { driver: { name: "mongo-java-driver|legacy", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.4.0-126-generic" }, platform: "Java/Ubuntu/11.0.16+8-post-Ubuntu-0ubuntu120.04" }
2022-09-23T15:21:17.279+0300 I NETWORK  [listener] connection accepted from 127.0.0.1:40332 #4 (2 connections now open)
2022-09-23T15:21:17.279+0300 I NETWORK  [conn4] received client metadata from 127.0.0.1:40332 conn4: { driver: { name: "mongo-java-driver|legacy", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.4.0-126-generic" }, platform: "Java/Ubuntu/11.0.16+8-post-Ubuntu-0ubuntu120.04" }

I will try your next post now…


The Elasticsearch service is down, so I get the same error as before when I run the curl commands you proposed.

curl: (7) Failed to connect to localhost port 9200: Connection refused

I also restarted the service in case that fixed anything by luck, but still nothing.

I should also mention that I increased the startup wait time for the Elasticsearch service to 180 seconds.
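
For reference, a sketch of the usual systemd override for that (created via systemctl edit elasticsearch.service, then sudo systemctl daemon-reload):

```
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
TimeoutStartSec=180
```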

Could you post the results of the current status:

sudo systemctl status elasticsearch.service

and service logs:

journalctl -eu elasticsearch

There should be something in there about why Elasticsearch is unhappy.

Also, it would be helpful to post your full (obfuscated) Elasticsearch config:

cat /etc/elasticsearch/elasticsearch.yml | egrep -v "^\s*(#|$)"

Thank you @tmacgbay for the reply.

Result of status

● elasticsearch.service - Elasticsearch
     Loaded: loaded (/etc/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
     Active: failed (Result: signal) since Fri 2022-09-23 22:01:06 EEST; 1min 56s ago
       Docs: https://www.elastic.co
    Process: 910 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=killed, signal=ABRT)
   Main PID: 910 (code=killed, signal=ABRT)

Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  scopes data    [0x00007f79a8cea098,0x00007f79a8cea1c0] = 296
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  scopes pcs     [0x00007f79a8cea1c0,0x00007f79a8cea410] = 592
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  dependencies   [0x00007f79a8cea410,0x00007f79a8cea418] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  nul chk table  [0x00007f79a8cea418,0x00007f79a8cea480] = 104
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: #
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: # If you would like to submit a bug report, please visit:
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: #   https://github.com/AdoptOpenJDK/openjdk-support/issues
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: #
Sep 23 22:01:06 Graylog systemd[1]: elasticsearch.service: Main process exited, code=killed, status=6/ABRT
Sep 23 22:01:06 Graylog systemd[1]: elasticsearch.service: Failed with result 'signal'.

And this is the journal output:

Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # Java VM: OpenJDK 64-Bit Server VM AdoptOpenJDK (15.0.1+9, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # Problematic frame:
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # J 7107 c2 com.fasterxml.jackson.core.sym.ByteQuadsCanonicalizer.calcHash([II)I (115 bytes) @ 0x00007f79b024c7c6 [0x00007f79b024c680+0x0000000000000146]
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: #
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E" (or dumping t>
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: #
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # An error report file with more information is saved as:
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # /var/log/elasticsearch/hs_err_pid910.log
Sep 23 22:01:02 Graylog systemd-entrypoint[910]: [thread 1580 also had an error]
Sep 23 22:01:02 Graylog systemd-entrypoint[910]: [thread 1581 also had an error]
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: Compiled method (c2)   75154 7107       4       com.fasterxml.jackson.core.sym.ByteQuadsCanonicalizer::calcHash (115 bytes)
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  total in heap  [0x00007f79b024c510,0x00007f79b024c9c8] = 1208
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  relocation     [0x00007f79b024c668,0x00007f79b024c680] = 24
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  main code      [0x00007f79b024c680,0x00007f79b024c8a0] = 544
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  stub code      [0x00007f79b024c8a0,0x00007f79b024c8b8] = 24
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  oops           [0x00007f79b024c8b8,0x00007f79b024c8c0] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  metadata       [0x00007f79b024c8c0,0x00007f79b024c8c8] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  scopes data    [0x00007f79b024c8c8,0x00007f79b024c940] = 120
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  scopes pcs     [0x00007f79b024c940,0x00007f79b024c9b0] = 112
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  dependencies   [0x00007f79b024c9b0,0x00007f79b024c9b8] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  nul chk table  [0x00007f79b024c9b8,0x00007f79b024c9c8] = 16
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: Compiled method (c2)   75162 7107       4       com.fasterxml.jackson.core.sym.ByteQuadsCanonicalizer::calcHash (115 bytes)
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  total in heap  [0x00007f79b024c510,0x00007f79b024c9c8] = 1208
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  relocation     [0x00007f79b024c668,0x00007f79b024c680] = 24
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  main code      [0x00007f79b024c680,0x00007f79b024c8a0] = 544
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  stub code      [0x00007f79b024c8a0,0x00007f79b024c8b8] = 24
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  oops           [0x00007f79b024c8b8,0x00007f79b024c8c0] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  metadata       [0x00007f79b024c8c0,0x00007f79b024c8c8] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  scopes data    [0x00007f79b024c8c8,0x00007f79b024c940] = 120
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  scopes pcs     [0x00007f79b024c940,0x00007f79b024c9b0] = 112
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  dependencies   [0x00007f79b024c9b0,0x00007f79b024c9b8] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  nul chk table  [0x00007f79b024c9b8,0x00007f79b024c9c8] = 16
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: Compiled method (c1)   75162 5907       3       com.fasterxml.jackson.core.sym.ByteQuadsCanonicalizer::findName (218 bytes)
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  total in heap  [0x00007f79a8ce9590,0x00007f79a8cea480] = 3824
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  relocation     [0x00007f79a8ce96e8,0x00007f79a8ce97d0] = 232
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  main code      [0x00007f79a8ce97e0,0x00007f79a8ce9fe0] = 2048
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  stub code      [0x00007f79a8ce9fe0,0x00007f79a8cea080] = 160
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  oops           [0x00007f79a8cea080,0x00007f79a8cea088] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  metadata       [0x00007f79a8cea088,0x00007f79a8cea098] = 16
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  scopes data    [0x00007f79a8cea098,0x00007f79a8cea1c0] = 296
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  scopes pcs     [0x00007f79a8cea1c0,0x00007f79a8cea410] = 592
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  dependencies   [0x00007f79a8cea410,0x00007f79a8cea418] = 8
Sep 23 22:01:03 Graylog systemd-entrypoint[910]:  nul chk table  [0x00007f79a8cea418,0x00007f79a8cea480] = 104
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: #
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: # If you would like to submit a bug report, please visit:
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: #   https://github.com/AdoptOpenJDK/openjdk-support/issues
Sep 23 22:01:03 Graylog systemd-entrypoint[910]: #
Sep 23 22:01:06 Graylog systemd[1]: elasticsearch.service: Main process exited, code=killed, status=6/ABRT
Sep 23 22:01:06 Graylog systemd[1]: elasticsearch.service: Failed with result 'signal'.

And finally, here are my current settings in elasticsearch.yml:

cluster.name: Graylog
node.name: Graylog
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.16.64.203
http.port: 9200
discovery.seed_hosts: ["Graylog"]
discovery.type: single-node
action.auto_create_index: false

I can’t figure out what is going on, honestly.

Looks like Elasticsearch has an issue with Java?

An error report file with more information is saved as: /var/log/elasticsearch/hs_err_pid910.log

Check out that log file for anything interesting


Hello @miltiadis.p

I think the curl command you're using is incorrect. If you configured Elasticsearch with network.host: 172.16.64.203, then that IP address is the part you need to use in the curl command.

You're trying to use localhost, which is incorrect.

In your case it should have been like this:

curl -X GET "172.16.64.203:9200/_dangling?pretty"

I’m just giving suggestions, so copy & paste probably will not work.

As for this setting, I'm not sure:

discovery.seed_hosts: ["Graylog"]

If you just have one server, I believe this setting is not needed.

Below is an example; remember it is a YAML file, so indents and spaces can be tricky.

discovery.seed_hosts:
   - 192.168.1.10:9300
   - 192.168.1.11 
   - seeds.mydomain.com

EDIT:
After going back through your logs, here are a couple more suggestions; not sure if they will work.

1. Ensure Elasticsearch is running before the Graylog service. If Graylog is started, put it in a stopped state while working on Elasticsearch.
2. The configuration I showed above will work for a default configuration in your environment. These should be the only uncommented/added lines. Remember to make a copy of the elasticsearch.yml file before making changes!! :wink: The settings below should be the only lines that are visible, and everything else is commented.

cluster.name: graylog
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.16.64.203
http.port: 9200
action.auto_create_index: false
discovery.type: single-node

3. Make sure /var/lib/elasticsearch is owned by the elasticsearch user:

chown -R elasticsearch:elasticsearch /var/lib/elasticsearch/

4. Restart Elasticsearch:

sudo systemctl restart elasticsearch

5. Tail -f the Elasticsearch log file, NOT the GC log.
6. If the Elasticsearch service does stay started, run those curl commands above to make sure you don't have issues. Also note it may take a few minutes for Elasticsearch to start up, unless it falls flat on its face, in which case you have other issues.

If all else fails, try reinstalling elasticsearch-7.10; ensure Graylog is in a stopped state first.

sudo apt autoclean
sudo apt-get --reinstall install PackageNameHere
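To sanity-check the trimmed config before restarting, you can list only the lines Elasticsearch will actually read. A minimal sketch: it writes a sample file under /tmp with the settings from this thread (since I obviously don't have your real file); swap in /etc/elasticsearch/elasticsearch.yml yourself.

```shell
#!/bin/sh
# Stand-in for /etc/elasticsearch/elasticsearch.yml, using the settings from this thread
cat > /tmp/elasticsearch.sample.yml <<'EOF'
# ---------------------------------- Cluster -----------------------------------
cluster.name: graylog
#node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.16.64.203
http.port: 9200
action.auto_create_index: false
discovery.type: single-node
EOF

# Keep only non-comment, non-blank lines -- these are the active settings
active=$(grep -vE '^[[:space:]]*(#|$)' /tmp/elasticsearch.sample.yml)
echo "$active"
```

If anything shows up here that you didn't put in on purpose, comment it out before restarting.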

I got the same issue now after a reboot. I have never installed Elasticsearch and am not intending to. Earlier, the version probe just kept trying but didn't prevent the Graylog server from starting. I think the Elastic check is now too harsh.

Uploaded to WeTransfer. I can’t figure out the issue though despite looking at it.

@gsmith curl failed with the IP address too, not just localhost. I am getting the same error: connection refused.

Below is an example; remember it is a **YAML** file, so indents and spaces can be tricky.
discovery.seed_hosts:
   - 192.168.1.10:9300
   - 192.168.1.11 
   - seeds.mydomain.com

I just changed the default setting of “node-1” to Graylog…

cluster.name: graylog
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.16.64.203
http.port: 9200
action.auto_create_index: false
discovery.type: single-node

These are now the only options enabled in my elasticsearch.yml file, and still Elasticsearch won't start. It exits with the same error (the Graylog service is stopped).

1. Ensure Elasticsearch is running before the Graylog service. If Graylog is started, put it in a stopped state while working on Elasticsearch.

Started the VM, made sure ALL 3 services were stopped (MongoDB, Elasticsearch and Graylog) and started them in the aforementioned order. Elastic still fails :frowning:

2. The configuration I showed above will work for a default configuration in your environment. These should be the only uncommented/added lines. Remember to make a copy of the elasticsearch.yml file before making changes!! :wink: The settings below should be the only lines that are visible, and everything else is commented.

Mentioned just above

3.make sure /var/lib/elasticsearch is owned by the elasticsearch user:

It was, by default I guess (I didn't install anything as root).

drwxr-s---  3 elasticsearch elasticsearch 4096 Sep 19 23:29 .
drwxr-xr-x 45 root          root          4096 Sep 19 23:47 ..
drwxr-sr-x  3 elasticsearch elasticsearch 4096 Sep 19 23:29 nodes

5 and 6.

I did restart it, but it doesn't even start; it remains failed. In fact, by luck, I ran a quick systemctl status right after starting it and saw it started. I couldn't believe my eyes, and when I ran it again I saw that it had failed… again.

I will try to re-install Elasticsearch now and will get back to you.

Thank you people for keeping up the support on this. I really appreciate it, and I hope I can find the issue. This is happening for the 2nd time, which means if I don't find what's wrong I will have to quit Graylog, which I definitely don't want to.

Here is the only thing I found that was pertinent… something about Java, older Opteron CPUs (your logs say you have an Opteron), and Elasticsearch. There is a fix in there, but it's not clear it is really the issue you are having…

Could you post versions:

$ dpkg -l | grep -E ".*(elasticsearch|graylog|mongo).*"

$ java -version

Some of that is in the logs but just so it’s clear. :smiley:


@tmacgbay yes the hypervisor has 2 Opterons 6128.

Here are the packages:

ii  elasticsearch-oss                     7.10.2                            amd64        Distributed RESTful search engine built for the cloud
ii  graylog-4.3-repository                1-5                               all          Package to install Graylog 4.3 GPG key and repository
ii  graylog-server                        4.3.7-1                           all          Graylog server
ii  mongodb-org                           4.0.28                            amd64        MongoDB open source document-oriented database system (metapackage)
ii  mongodb-org-mongos                    4.0.28                            amd64        MongoDB sharded cluster query router
ii  mongodb-org-server                    4.0.28                            amd64        MongoDB database server
ii  mongodb-org-shell                     4.0.28                            amd64        MongoDB shell client
ii  mongodb-org-tools                     4.0.28                            amd64        MongoDB tools

Java version

openjdk version "11.0.16" 2022-07-19
OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu120.04)
OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu120.04, mixed mode, sharing)

I checked the support matrix of Elasticsearch for Ubuntu 20.04 LTS according to this, so I decided to install OpenJDK 11 headless.

Here is the only thing I found that was pertinent

According to CPU-World, the Opteron 6128 supports up to SSE4a. I edited /etc/elasticsearch/jvm.options.d/jvm.options and added the -XX:UseSSE parameter. I began with -XX:UseSSE=3, but Elasticsearch never started. With -XX:UseSSE=2, though, it works! I have logged into Graylog.

It seems that it is working for the time being. I will keep this running all day and night and restart it tomorrow to check if that was the solution. @tmacgbay honestly, how come you thought of this? :astonished:
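For anyone landing here later, my /etc/elasticsearch/jvm.options.d/jvm.options now looks roughly like this — the heap sizes are the 8g values mentioned at the top of the thread, so treat this as a sketch rather than an exact copy of my file:

```
# JVM heap: half of the VM's 16 GB of RAM
-Xms8g
-Xmx8g
# Work around JIT crashes on the Opteron 6128: UseSSE=3 still crashed, UseSSE=2 is stable
-XX:UseSSE=2
```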

Hopefully it is continuing to work!! It was just some lucky Googling.

When we looked at the journalctl results, there were these lines:

Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # Java VM: OpenJDK 64-Bit Server VM AdoptOpenJDK (15.0.1+9, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # Problematic frame:
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # J 7107 c2 com.fasterxml.jackson.core.sym.ByteQuadsCanonicalizer.calcHash([II)I (115 bytes) @ 0x00007f79b024c7c6 [0x00007f79b024c680+0x0000000000000146]
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: #
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E" (or dumping t>
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: #
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # An error report file with more information is saved as:
Sep 23 22:01:01 Graylog systemd-entrypoint[910]: # /var/log/elasticsearch/hs_err_pid910.log

Which was pretty much repeated in the /var/log/elasticsearch/hs_err_pid910.log file. As usual, there is a lot of fluff in there that means something to someone, but not to me… so I picked out something that looked unique and relevant and googled it… specifically just this:

com.fasterxml.jackson.core.sym.ByteQuadsCanonicalizer.calcHash

Figuring that someone else would have posted a similar error message somewhere on the internet.
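If you ever have to repeat this kind of triage, the useful trick is pulling the fully-qualified Java symbol out of the "Problematic frame" line and searching for that. A sketch of the idea — it uses a tiny made-up sample file standing in for the real /var/log/elasticsearch/hs_err_pid910.log:

```shell
#!/bin/sh
# Tiny stand-in for an hs_err crash log (the frame line is copied from this thread)
cat > /tmp/hs_err_sample.log <<'EOF'
# Problematic frame:
# J 7107 c2 com.fasterxml.jackson.core.sym.ByteQuadsCanonicalizer.calcHash([II)I (115 bytes)
EOF

# Grab the line after "Problematic frame:" and keep only the dotted Java symbol
frame=$(grep -A1 'Problematic frame' /tmp/hs_err_sample.log \
  | grep -oE '([A-Za-z_$][A-Za-z0-9_$]*\.)+[A-Za-z_$][A-Za-z0-9_$]*')
echo "$frame"
```

That symbol is usually unique enough to turn up other reports of the same crash.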

So in the end it was a semi-wild guess on something that is reasonably close… and fortunately not too hard to upgrade.

Fingers crossed that this fixed it! :slight_smile:

Yeap, it’s up for 1day and 4 hours so i think problem solved!

Thank you @tmacgbay and @gsmith . Your help was really precious.

Tbh I first went down the @gsmith route because I thought it was definitely something really hidden or very strange. @tmacgbay, I also saw that fasterxml thing, but I thought the description was too common to bother with.

Anyway, thanks everybody, I really like this place. It shows there are some guys here who really know what's going on. Thanks again!

