Problem upgrading Graylog from 3.3 to 4.0.13

Description of your problem

I have a problem upgrading Graylog 3.3.14 to 4.0.13.

Environmental information

Graylog - 3.3.14 - only base plugins
ElasticSearch - 6.8.18
MongoDB - 4.2.15

Operating system information

CentOS Linux 7 (Core)

My logs are below; the service is running but the GUI is not available.

2021-10-19T22:21:25.659+02:00 ERROR [ServerBootstrap] Unable to shutdown properly on time. {STOPPING=[JobSchedulerService [STOPPING]], TERMINATED=[UserSessionTerminationService [TERMINATED], InputSetupService [TERMINATED], PeriodicalsService [TERMINATED], MongoDBProcessingStatusRecorderService [TERMINATED], UrlWhitelistService [TERMINATED], GracefulShutdownService [TERMINATED], StreamCacheService [TERMINATED], OutputSetupService [TERMINATED], EtagService [TERMINATED], ConfigurationEtagService [TERMINATED], JournalReader [TERMINATED], KafkaJournal [TERMINATED], BufferSynchronizerService [TERMINATED], LookupTableService [TERMINATED]], FAILED=[JerseyService [FAILED]]}
2021-10-19T22:21:25.659+02:00 ERROR [ServerBootstrap] Graylog startup failed. Exiting. Exception was:
java.lang.IllegalStateException: Expected to be healthy after starting. The following services are not running: {STARTING=[LookupTableService [STARTING]], FAILED=[JerseyService [FAILED]]}
        at com.google.common.util.concurrent.ServiceManager$ServiceManagerState.checkHealthy(ServiceManager.java:773) ~[graylog.jar:?]
        at com.google.common.util.concurrent.ServiceManager$ServiceManagerState.awaitHealthy(ServiceManager.java:585) ~[graylog.jar:?]
        at com.google.common.util.concurrent.ServiceManager.awaitHealthy(ServiceManager.java:316) ~[graylog.jar:?]
        at org.graylog2.bootstrap.ServerBootstrap.startCommand(ServerBootstrap.java:161) [graylog.jar:?]
        at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:212) [graylog.jar:?]
        at org.graylog2.bootstrap.Main.main(Main.java:50) [graylog.jar:?]
        Suppressed: com.google.common.util.concurrent.ServiceManager$FailedService: JerseyService [FAILED]
        Caused by: java.security.GeneralSecurityException: org.bouncycastle.pkcs.PKCSException: unable to read encrypted data: JCE cannot authenticate the provider BC
                at org.graylog2.shared.security.tls.PemKeyStore.buildKeyStore(PemKeyStore.java:88) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.buildSslEngineConfigurator(JerseyService.java:357) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.startUpApi(JerseyService.java:177) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.startUp(JerseyService.java:151) ~[graylog.jar:?]
                at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) ~[graylog.jar:?]
                at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) ~[graylog.jar:?]
                at java.lang.Thread.run(Unknown Source) ~[?:1.8.0_301]
        Caused by: org.bouncycastle.pkcs.PKCSException: unable to read encrypted data: JCE cannot authenticate the provider BC
                at org.bouncycastle.pkcs.PKCS8EncryptedPrivateKeyInfo.decryptPrivateKeyInfo(Unknown Source) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.generateKeySpec(PemKeyStore.java:68) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.doBuildKeyStore(PemKeyStore.java:99) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.buildKeyStore(PemKeyStore.java:85) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.buildSslEngineConfigurator(JerseyService.java:357) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.startUpApi(JerseyService.java:177) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.startUp(JerseyService.java:151) ~[graylog.jar:?]
                at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) ~[graylog.jar:?]
                at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) ~[graylog.jar:?]
                at java.lang.Thread.run(Unknown Source) ~[?:1.8.0_301]
        Caused by: java.lang.SecurityException: JCE cannot authenticate the provider BC
                at javax.crypto.Cipher.getInstance(Cipher.java:660) ~[?:1.8.0_271]
                at javax.crypto.Cipher.getInstance(Cipher.java:599) ~[?:1.8.0_271]
                at org.bouncycastle.jcajce.util.NamedJcaJceHelper.createCipher(Unknown Source) ~[graylog.jar:?]
                at org.bouncycastle.openssl.jcajce.JceOpenSSLPKCS8DecryptorProviderBuilder$1.get(Unknown Source) ~[graylog.jar:?]
                at org.bouncycastle.pkcs.PKCS8EncryptedPrivateKeyInfo.decryptPrivateKeyInfo(Unknown Source) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.generateKeySpec(PemKeyStore.java:68) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.doBuildKeyStore(PemKeyStore.java:99) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.buildKeyStore(PemKeyStore.java:85) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.buildSslEngineConfigurator(JerseyService.java:357) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.startUpApi(JerseyService.java:177) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.startUp(JerseyService.java:151) ~[graylog.jar:?]
                at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) ~[graylog.jar:?]
                at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) ~[graylog.jar:?]
                at java.lang.Thread.run(Unknown Source) ~[?:1.8.0_301]
        Caused by: java.util.jar.JarException: file:/usr/share/graylog-server/graylog.jar has unsigned entries - netflow_v9.proto
                at javax.crypto.JarVerifier.verifySingleJar(JarVerifier.java:510) ~[?:1.8.0_271]
                at javax.crypto.JarVerifier.verifyJars(JarVerifier.java:371) ~[?:1.8.0_271]
                at javax.crypto.JarVerifier.verify(JarVerifier.java:297) ~[?:1.8.0_271]
                at javax.crypto.JceSecurity.verifyProviderJar(JceSecurity.java:164) ~[?:1.8.0_271]
                at javax.crypto.JceSecurity.getVerificationResult(JceSecurity.java:190) ~[?:1.8.0_271]
                at javax.crypto.Cipher.getInstance(Cipher.java:656) ~[?:1.8.0_271]
                at javax.crypto.Cipher.getInstance(Cipher.java:599) ~[?:1.8.0_271]
                at org.bouncycastle.jcajce.util.NamedJcaJceHelper.createCipher(Unknown Source) ~[graylog.jar:?]
                at org.bouncycastle.openssl.jcajce.JceOpenSSLPKCS8DecryptorProviderBuilder$1.get(Unknown Source) ~[graylog.jar:?]
                at org.bouncycastle.pkcs.PKCS8EncryptedPrivateKeyInfo.decryptPrivateKeyInfo(Unknown Source) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.generateKeySpec(PemKeyStore.java:68) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.doBuildKeyStore(PemKeyStore.java:99) ~[graylog.jar:?]
                at org.graylog2.shared.security.tls.PemKeyStore.buildKeyStore(PemKeyStore.java:85) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.buildSslEngineConfigurator(JerseyService.java:357) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.startUpApi(JerseyService.java:177) ~[graylog.jar:?]
                at org.graylog2.shared.initializers.JerseyService.startUp(JerseyService.java:151) ~[graylog.jar:?]
                at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) ~[graylog.jar:?]
                at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) ~[graylog.jar:?]
                at java.lang.Thread.run(Unknown Source) ~[?:1.8.0_301]
2021-10-19T22:21:25.662+02:00 INFO  [Server] SIGNAL received. Shutting down.
2021-10-19T22:21:25.664+02:00 INFO  [GracefulShutdown] Graceful shutdown initiated.
2021-10-19T22:21:25.665+02:00 INFO  [GracefulShutdown] Node status: [Halting [LB:DEAD]]. Waiting <3sec> for possible load balancers to recognize state change.
2021-10-19T22:21:29.666+02:00 INFO  [GracefulShutdown] Goodbye.

The first error is:
JCE cannot authenticate the provider BC

And the second:
java.util.jar.JarException: file:/usr/share/graylog-server/graylog.jar has unsigned entries - netflow_v9.proto

  • we are not using NetFlow inputs, so I do not understand the cause of this error.

Thanks for your help.

Hello,

I might be able to help, but first: how did you perform your upgrade, and what documentation did you use?

  • What version of Java do you have?
  • Did you make sure all your plugins are the same version?
  • Does Graylog have access to the directories it needs?

chown graylog:graylog -R /etc/graylog/

  • Did you check all the services?
systemctl status graylog-server
systemctl status elasticsearch
systemctl status mongodb
org.bouncycastle.pkcs.PKCSException: unable to read encrypted data: JCE cannot authenticate the provider BC
        at org.bouncycastle.pkcs.PKCS8EncryptedPrivateKeyInfo.decryptPrivateKeyInfo(Unknown Source)
        at org.graylog2.shared.security.tls.PemKeyStore.generateKeySpec(PemKeyStore.java:68)
        at org.graylog2.shared.security.tls.PemKeyStore.doBuildKeyStore(PemKeyStore.java:99)
        at org.graylog2.shared.security.tls.PemKeyStore.buildKeyStore(PemKeyStore.java:85)

After looking over your logs, it seems like a problem with the keystore or maybe your certificates, or possibly access permissions on either one.

I can't tell for sure unless you're able to share more of your Graylog configuration.

I’m assuming at this point maybe you have HTTPS enabled?
Again, I am assuming you may have an LDAP/AD authentication service enabled?

I'm not sure about that; judging from your log file, it shows a shutdown:

2021-10-19T22:21:25.662+02:00 INFO  [Server] SIGNAL received. Shutting down.
2021-10-19T22:21:25.664+02:00 INFO  [GracefulShutdown] Graceful shutdown initiated.
2021-10-19T22:21:25.665+02:00 INFO  [GracefulShutdown] Node status: [Halting [LB:DEAD]]. Waiting <3sec> for possible load balancers to recognize state change.
2021-10-19T22:21:29.666+02:00 INFO  [GracefulShutdown] Goodbye.

I found this post; maybe there is something in there that might help.


java -version
java version "1.8.0_301"
Java(TM) SE Runtime Environment (build 1.8.0_301-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.301-b09, mixed mode)

The upgrade was done with "yum update graylog-server".

Did you make sure all your plugins are the same version? Yes.

ll /usr/share/graylog-server/plugin/
graylog-plugin-aws-3.3.14.jar
graylog-plugin-collector-3.3.14.jar
graylog-plugin-threatintel-3.3.14.jar

  • after the upgrade, all plugins were 4.0.13.

Does Graylog have access to directories needed?
chown graylog:graylog -R /etc/graylog/

  • it is running under root, and all folders are owned by root.

Did you check all the services?
yes

I’m assuming at this point maybe you have HTTPS enabled?
Yes, I have HTTPS enabled, and it was working on versions before 4.0.13.

Judging from the errors in the log file, it seems the Graylog configuration file might be misconfigured OR Graylog cannot access your keystore/certificates. I'm going to say it's access to your keystore and certificates, but I may be wrong.

Here are a couple of things you can try.

Graylog normally runs as the user graylog, so your certificate should be readable by the graylog user. If the directory or certs are owned by root, this can become a problem. The simplest fix is to change the owner to graylog:

sudo chown graylog:graylog -R /etc/graylog/

You can find the user Graylog runs as with this command:

ps -ef | grep graylog

Here are the Graylog default file locations; make sure the graylog user can access them:

https://docs.graylog.org/docs/file-locations
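As a concrete sketch of that permission check (the paths are the defaults mentioned elsewhere in this thread; adjust them to your layout):

```shell
# Show owner/group/mode of the TLS files Graylog must read; anything
# owned root:root with mode 600 will be unreadable by the graylog user.
for f in /etc/graylog/server/certificate.pem \
         /etc/graylog/server/key.pem \
         /etc/graylog/server/cacerts.jks; do
  if [ -e "$f" ]; then
    stat -c '%U:%G %a %n' "$f"
  else
    echo "missing: $f"
  fi
done
# Direct readability test as the service user (requires root):
#   sudo -u graylog head -c1 /etc/graylog/server/key.pem && echo readable
```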

Hope that helps

And what about this error?
Caused by: java.util.jar.JarException: file:/usr/share/graylog-server/graylog.jar has unsigned entries - netflow_v9.proto
I do not understand the cause of this error.

In order for the JVM to pick up the new trust store, it has to be started with the JVM parameter -Djavax.net.ssl.trustStore=/path/to/cacerts.jks. If you’ve been using another password to encrypt the JVM trust store than the default changeit, you additionally have to set the JVM parameter -Djavax.net.ssl.trustStorePassword=secret

It's from here.

Might be permissions or the way you performed your upgrade.

I will try to change the permissions and the cacerts file. The certificate is valid.
I have this configuration:

cat /etc/sysconfig/graylog-server
# Path to the java executable.
JAVA=/usr/bin/java

# Default Java options for heap and garbage collection.
GRAYLOG_SERVER_JAVA_OPTS="-Xms3g -Xmx3g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Djavax.net.ssl.trustStore=/etc/graylog/server/cacerts.jks"

# Pass some extra args to graylog-server. (i.e. "-d" to enable debug mode)
GRAYLOG_SERVER_ARGS=""

# Program that will be used to wrap the graylog-server command. Useful to
# support programs like authbind.
GRAYLOG_COMMAND_WRAPPER="authbind"

But what about this error?
Caused by: java.util.jar.JarException: file:/usr/share/graylog-server/graylog.jar has unsigned entries - netflow_v9.proto
Where should I change permissions? The upgrade was just "yum update", and there was no problem upgrading to 4.0 this way.

Hello,

It seems you have a few errors that need to be attended to. I would first check your Graylog configuration file. A misconfiguration in this file will hinder Graylog's ability to function correctly (i.e. the Web UI), hence this section of your server.conf file: http_enable_cors = true. The error you are seeing is probably a result of the upgrade procedure. It's hard to give you a direct answer because of the lack of information in your posts.

That file is part of the Graylog installation. I'm assuming that something went wrong during your upgrade process.
From my understanding of upgrading Graylog, I don't think you can just run "yum update" to go from 3.3 to 4.0; you will need the 4.0 repository package. Not only that, you need to install version 4.0 before moving to a newer version of Graylog.

So, I'll try to sum up how to upgrade Graylog.
Here are the steps I execute, in order:

  1. Before Upgrading make sure your system is fully updated.
    yum update -y

  2. Stop graylog service before you upgrade.
    systemctl stop graylog-server

  3. Install new Graylog 4.0 package.
    rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-4.0-repository_latest.rpm

  4. Make sure your repository cache is clean of the old version of Graylog.
    yum clean all

  5. Perform Graylog upgrade
    yum upgrade graylog-server

NOTE: At this point make SURE you have the correct Graylog configuration file (server.conf); some settings have changed in the newer versions. To ensure you have the correct server.conf, there should be a file called server.conf.rpmnew in the Graylog directory (/etc/graylog/).

  6. Check the permissions on the Graylog directories. For testing purposes I put my certificates here so Graylog can access them; once everything works, I move them to the correct destination in production.
    ls -al /etc/graylog

  7. What I told you earlier was to execute this command, since your keystore is within your Graylog directory:
    chown graylog:graylog -R /etc/graylog/

  8. Once everything is done, start the Graylog service:
    systemctl start graylog-server

  9. Now tail the Graylog log file and look for errors, warnings, etc. Sometimes the errors are not at the end of the log file.
    tail -f /var/log/graylog-server/server.log
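Following the note above about server.conf.rpmnew, here is a minimal sketch for spotting settings that changed between versions (default paths assumed; the .rpmnew file only appears if your existing server.conf was modified):

```shell
# Compare the live config with the packaged default shipped by the
# new version to find renamed or removed settings.
conf=/etc/graylog/server/server.conf
new=$conf.rpmnew
if [ -f "$new" ]; then
  diff -u "$conf" "$new"
else
  echo "no $new found - nothing to merge"
fi
```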

I don't think that is the right way to upgrade Graylog; please take a look here.

HowTo Upgrade Graylog to a Major Version

I'm really not sure what happened during your upgrade, but this is all the information I have for a correct upgrade procedure. If you execute those steps or use the documentation and it still does not work, please show your full log files and Graylog configuration file.

Hope that helps


Hi All
thanks for your support.

The solution for this error was the following.
From this article: encryption - JCE cannot authenticate the provider BC in java swing application - Stack Overflow
and bouncycastle.org:

cat /usr/java/jre1.8.0_301-amd64/lib/security/java.security | grep provider
vi /usr/java/jre1.8.0_301-amd64/lib/security/java.security
add "security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider"
cp /tmp/bcprov-jdk15to18-169.jar /usr/java/jre1.8.0_301-amd64/lib/ext/
-rw-r--r-- 1 root root
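To verify those two changes took effect (the provider line in java.security and the BC jar in lib/ext), a small sketch — the JRE path is the one used in this thread; adjust it to your installation:

```shell
# Check that the provider line exists and the BC jar landed in ext/.
JRE=/usr/java/jre1.8.0_301-amd64
sec="$JRE/lib/security/java.security"
if [ -r "$sec" ]; then
  grep -n 'BouncyCastleProvider' "$sec" || echo "provider line missing"
  ls "$JRE/lib/ext/" | grep -i bcprov || echo "bcprov jar missing from ext/"
else
  echo "cannot read $sec"
fi
```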

  • but this was not the solution:
    cat server.conf | grep cors
    http_enable_cors = true

I have one more question for Graylog 4.0.13 after the upgrade: does the admin user have access to read dashboards or create new dashboards?
My admin account has no read access for dashboards, and after clicking "Create new dashboard" I end up at
https://fqdn/dashboards/new

A user with the role "Dashboard Creator" can create dashboards.
Thanks

Hello

Yes, they do, as long as they have an admin role.

EDIT:

I have never seen that Java location on CentOS. It's normally located here:

/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64/jre/lib/security/cacerts

Also, the documentation states the correct package.

Hi, thanks.
After the upgrade to 4.0.13, the admin account (or an account with the admin role) has no access to dashboards and cannot create new dashboards.
Only an account with a shared dashboard has access to dashboards, or one with the Dashboard Creator role can create new dashboards.
I have no error logs about this behavior.

After the upgrade, some users have errors in shared entities in User Details.
[ERROR: View for grn::::dashboard:592544833449fe58787f6236 not found!] dashboard
[ERROR: Stream for grn::::stream:57b427b33892be158adb87aa not found!] stream

If what you're showing us is from the ADMIN account, then it seems your upgrade didn't go well. You may need to recreate your dashboards. One reason this could happen is the new Role-Based Access Control (RBAC) feature and how users access Web UI sections.

Since you have GL 4.0 installed, have you thought about upgrading to 4.1 instead? There were a few fixes between minor versions. Just an idea.

Here are Graylog's change logs:
https://docs.graylog.org/docs/changelog

Hi, thanks for your help.
I have upgraded to 4.1.7.

These errors were in User Details after the upgrade. For example, two of them:
ERROR: View for grn::::dashboard:57f217203892be03cfa03d83 not found! dashboard
ERROR: Stream for grn::::stream:580b82293892be743dbf4271 not found!

I deleted these documents from MongoDB using Robo3T.
For example: db.getCollection('grants').find({"target" : "grn::::stream:57b427b33892be158adb87aa"})

Now all users are free of errors.
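For anyone without Robo3T, the same cleanup can be scripted from the command line. A hedged sketch — the helper name and the example ObjectId are illustrative; substitute the GRN from your own error message:

```shell
# Build the mongo-shell query for a grant targeting a given GRN.
# $1 = entity type (stream/dashboard), $2 = ObjectId from the error.
grant_query() {
  printf 'db.getCollection("grants").find({"target": "grn::::%s:%s"})\n' "$1" "$2"
}

grant_query stream 57b427b33892be158adb87aa
# Once verified, pipe the matching remove() into the mongo shell, e.g.:
#   echo 'db.getCollection("grants").remove({"target": "grn::::stream:<id>"})' | mongo graylog
```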

But I have a problem with the Admin role.
The admin user, or any other user with the Admin role, has problems accessing dashboards or seeing logs in global search.
The admin user can create a new dashboard but cannot access dashboards, or sometimes the page becomes unresponsive. Global search is stuck on Loading for users with the Admin role. I changed the admin username to a different one, but the behavior is the same.
Users without the Admin role can access Global Search without problems.
All users, including those with the admin role, can access stream logs without problems.
It seems that the problem is with the Admin role.

Loading global search for admin/user with admin Role

Dashboard loading for admin/user with admin Role

Sometimes dashboard Page Unresponsive

Any advice, please?

Can you post your Elasticsearch and Graylog log files here?
Can you post your elasticsearch and Graylog Configuration files here?

Hi,
my Graylog configuration: /etc/graylog/server/server.conf

############################
# GRAYLOG CONFIGURATION FILE
############################
is_master = true

node_id_file = /etc/graylog/server/node-id

password_secret = hash

root_username = admin

root_password_sha2 = hash

root_email = mail

root_timezone = Europe/Bratislava

plugin_dir = /usr/share/graylog-server/plugin

http_bind_address = host_fqdn:9000
http_publish_uri = https://host_fqdn:9000/

http_enable_cors = true

#http_enable_gzip = true

http_enable_tls = true
http_tls_cert_file = /etc/graylog/server/certificate.pem
http_tls_key_file = /etc/graylog/server/key.pem
http_tls_key_password = password

#http_max_header_size = 8192

#http_max_initial_line_length = 4096

#http_thread_pool_size = 16

#trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128



#http_external_uri = $http_publish_uri
#http_external_uri = https://LB_fqdn/

elasticsearch_hosts = http://host_fqdn:9200

#elasticsearch_connect_timeout = 10s

#elasticsearch_socket_timeout = 60s

#elasticsearch_idle_timeout = -1s

#elasticsearch_max_total_connections = 20

#elasticsearch_max_total_connections_per_route = 2

#elasticsearch_max_retries = 2

#elasticsearch_discovery_enabled = true

#elasticsearch_discovery_filter = rack:42

# elasticsearch_discovery_frequency = 30s

#elasticsearch_compression_enabled = true

rotation_strategy = count

elasticsearch_max_docs_per_index = 20000000
#elasticsearch_max_size_per_index = 1073741824
#elasticsearch_disable_version_check = true

#no_retention = false

elasticsearch_max_number_of_indices = 20

retention_strategy = delete

elasticsearch_shards = 1
elasticsearch_replicas = 0

elasticsearch_index_prefix = graylog

#elasticsearch_template_name = graylog-internal

allow_leading_wildcard_searches = false
##allow_leading_wildcard_searches = true

allow_highlighting = false
elasticsearch_analyzer = standard

#elasticsearch_request_timeout = 1m

#elasticsearch_index_optimization_timeout = 1h
#elasticsearch_index_optimization_jobs = 20
#index_ranges_cleanup_interval = 1h
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

#processbuffer_processors = 5
processbuffer_processors = 7
outputbuffer_processors = 4

#outputbuffer_processor_keep_alive_time = 5000
#outputbuffer_processor_threads_core_pool_size = 3
#outputbuffer_processor_threads_max_pool_size = 30

#udp_recvbuffer_sizes = 1048576

processor_wait_strategy = blocking

ring_size = 65536

inputbuffer_ring_size = 65536
#inputbuffer_processors = 2
inputbuffer_processors = 4
inputbuffer_wait_strategy = blocking

message_journal_enabled = true

message_journal_dir = /var/lib/graylog-server/journal

#message_journal_max_age = 12h
#message_journal_max_size = 5gb
#message_journal_flush_age = 1m
#message_journal_flush_interval = 1000000
#message_journal_segment_age = 1h
#message_journal_segment_size = 100mb

#async_eventbus_processors = 2

lb_recognition_period_seconds = 3

#lb_throttle_threshold_percentage = 95

#output_module_timeout = 10000

#stale_master_timeout = 2000

#shutdown_timeout = 30000

mongodb_uri = mongodb://127.0.0.1/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5


transport_email_enabled = true
transport_email_hostname = fqdn
transport_email_port = 25
transport_email_use_auth = false
transport_email_use_tls = false
transport_email_use_ssl = false
transport_email_subject_prefix = [graylog]
transport_email_from_email = mail
transport_email_web_interface_url = https://host_fqdn:9000

#http_connect_timeout = 5s
#http_read_timeout = 10s
#http_write_timeout = 10s

#http_proxy_uri =

#disable_index_optimization = true

#index_optimization_max_num_segments = 1

#gc_warning_threshold = 1s

#ldap_connection_timeout = 2000

#disable_sigar = false

#dashboard_widget_default_cache_time = 10s

#content_packs_loader_enabled = true

content_packs_dir = /usr/share/graylog-server/contentpacks

content_packs_auto_load = grok-patterns.json

proxied_requests_thread_pool_size = 32

#TODO DELETE dns_
#dns_resolver_enabled = true
#dns_resolver_run_before_extractors = true
#dns_resolver_timeout = 2s


#Prometheus
metrics_prometheus_enabled=true
metrics_prometheus_report_interval=5s
metrics_prometheus_address=ip:9091
metrics_prometheus_job_name=graylog

##https://docs.graylog.org/en/3.1/pages/configuration/server.conf.html#others
#processing_status_persist_interval = 1s
#processing_status_update_threshold = 1m
#processing_status_journal_write_rate_threshold= 1
#default_events_index_prefix = gl-events
#default_system_events_index_prefix = gl-system-events

My elasticsearch configuration: /etc/elasticsearch/elasticsearch.yml


# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: my-application

cluster.name: graylog2
path.repo: /tmp/backupes
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
#node.master: false
#node.data: false
#node.ingest: false

# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
path.data: /var/elasticsearch/data/graylog2/
#
# Path to log files:
#
# path.logs: /path/to/logs
path.logs:  /var/elasticsearch/log
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: fqdn.sk
#
# Set a custom port for HTTP:
#
# http.port: 9200
http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["fqdn.sk"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1

#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"

script.painless.regex.enabled: true
indices.query.bool.max_clause_count: 10240

xpack.security.enabled: false
xpack.monitoring.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false

Hello,
Thanks for the additional information. I can't point to the main cause of your issue; it could be a combination of things. Since you can log in to the Web UI (and I'm assuming you have HTTPS for your URL?), what I normally see when widgets and/or dashboards load continuously like that is something to do with your certificates, the permissions on your certificates, or Graylog and Elasticsearch not playing nice. So my guess would be Elasticsearch configuration or permissions.
You have a lot going on and I'm not sure where to start. Here are some things in your configuration that don't look right, or should I say, combinations of settings I have not seen before.
Below are a couple of things I'm not sure of, and maybe some suggestions.

  • How many logs are you ingesting per hour/day?
  • What are your Graylog resources: CPU, memory, disk?

The reason I ask is that, from the settings below, I would assume you have somewhere around 15 CPU cores. If you don't have that many and you have a high log intake, maybe increase your resources to match these settings.

processbuffer_processors = 7
outputbuffer_processors = 4
inputbuffer_processors = 4
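A small sketch to check that claim against the machine (the awk pattern assumes the setting names shown above; the server.conf path is the default):

```shell
# Sum the three buffer processor settings and compare with core count.
conf=/etc/graylog/server/server.conf
[ -r "$conf" ] || conf=/dev/null  # fall back so the snippet runs anywhere
total=$(awk -F'= *' '/^(process|output|input)buffer_processors/ {sum += $2} END {print sum+0}' "$conf")
cores=$(nproc)
echo "configured buffer processors: $total, available cores: $cores"
if [ "$total" -gt "$cores" ]; then
  echo "WARNING: more buffer processors configured than CPU cores"
fi
```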

It could be these settings, but unfortunately I don't use Prometheus with Graylog, so I'm not going to be much help there. I know other people here use Prometheus; they would be able to identify whether this may be the problem.

#Prometheus
metrics_prometheus_enabled=true
metrics_prometheus_report_interval=5s
metrics_prometheus_address=ip:9091
metrics_prometheus_job_name=graylog

These are normally the same name, unless you have multiple Graylog nodes. I'm not sure if you're trying to make a cluster or something.

  • Your Elasticsearch config
cluster.name: graylog2
  • Your Graylog config
elasticsearch_index_prefix = graylog

I'm not sure about these settings below. I have used X-Pack with Open Distro for Elasticsearch, but not with Graylog. Just a note: with these settings you may run into problems when upgrading to Elasticsearch 7.x.
To be honest, if you don't need them I would comment them out (#):

xpack.security.enabled: false
xpack.monitoring.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false

If you don't have multiple Elasticsearch servers, you really don't need these settings. Or do you have a three-node Elasticsearch cluster?

discovery.zen.ping.unicast.hosts: ["fqdn.sk"]
discovery.zen.minimum_master_nodes: 1

Just to give you another idea of a GL setup, I have posted my lab configuration for you.
Below is CentOS 7 with 14 CPU cores, 12 GB RAM, and a 500 GB disk. It ingests 30 GB of logs a day with 30 days' retention, using TCP/TLS for the Web UI and inputs. No problems.
If you only have one Graylog node with Elasticsearch and MongoDB, I know these settings work; it's a good starting point to make sure all settings and configuration work.

Here are my Elasticsearch YAML settings for Elasticsearch 6.8:

cluster.name: graylog
network.host: ipaddress
http.port: 9200
action.auto_create_index: false

And here is my Graylog config; it looks very similar to yours:

is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret =hash
root_password_sha2 = hash
root_email = "greg.smith@domain.com"
root_timezone = America/Chicago
bin_dir = /usr/share/graylog-server/bin
data_dir = /var/lib/graylog-server
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = graylog.domain.com:9000
http_publish_uri = https://graylog.domain.com:9000/ 
http_enable_cors = true
http_enable_tls = true
http_tls_cert_file = /etc/graylog/graylog-certificate.pem
http_tls_key_file = /etc/graylog/graylog-key.pem
http_tls_key_password = my_secret
elasticsearch_hosts = http://ipaddress:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = true
allow_highlighting = false
elasticsearch_analyzer = standard
elasticsearch_index_optimization_timeout = 1h
output_batch_size = 5000
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 6
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
message_journal_max_size = 12gb
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost:27017/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
transport_email_enabled = true
transport_email_hostname = localhost
transport_email_port = 25
transport_email_subject_prefix = [graylog]
transport_email_from_email = root@domain.com
transport_email_web_interface_url = https://ipaddress:9000
http_connect_timeout = 10s
proxied_requests_thread_pool_size = 32

Do you TAIL your Graylog log file when you restart the service?
Did you see anything in your MongoDB/Elasticsearch logs?

I'm not sure if you can post your log files, but what would help is tailing your Graylog log file while you try to access the dashboards, and then posting the full log file here. Maybe we can see what's going on.
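If it helps, all three services can be checked at once from the shell. This is a minimal sketch assuming the default package-install log paths on CentOS 7 (your paths may differ, and the Elasticsearch log file is named after cluster.name):

```shell
# Dump the tail of each service log, if it exists at the assumed
# default CentOS 7 package path (adjust paths to your installation).
for f in /var/log/graylog-server/server.log \
         /var/log/elasticsearch/graylog2.log \
         /var/log/mongodb/mongod.log; do
  if [ -f "$f" ]; then
    echo "== $f =="
    tail -n 50 "$f"
  fi
done
```

To watch a log live while you reproduce the problem, run tail -f on the file instead of tail -n 50.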

EDIT: I just noticed this in your configuration file. My apologies, you have a lot to look over.

It should be

mongodb_uri = mongodb://127.0.0.1:27017/graylog

or

mongodb_uri = mongodb://localhost:27017/graylog
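Once the URI is corrected, it's worth confirming that MongoDB actually answers on that address before restarting Graylog. A quick sketch, assuming the mongo shell that ships with MongoDB 4.2 is installed:

```shell
# Ping MongoDB over the same URI Graylog will use; a healthy server
# replies with { "ok" : 1 }
mongo --quiet mongodb://127.0.0.1:27017/graylog --eval 'db.runCommand({ ping: 1 })' \
  || echo "MongoDB did not answer on 127.0.0.1:27017"
```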

Hope that helps


Hi,
thanks for your message.

  • How many logs are you ingesting per hour/day? 40 GB per day.
  • What are your Graylog resources (CPU, memory, disk)? 6 CPUs, 25 GB RAM, 700 GB disk.
  • Are you ingesting a lot of logs and if so, how much? I have 12 dashboards.

I have modified the settings to:

processbuffer_processors = 5
outputbuffer_processors = 3
inputbuffer_processors = 3

I have deleted the Prometheus settings.

I have changed both to graylog2:

Your Elasticsearch config
cluster.name: graylog2
Your Graylog config
elasticsearch_index_prefix = graylog2

I have commented these out:

#discovery.zen.ping.unicast.hosts: ["fqdn.sk"]
#discovery.zen.minimum_master_nodes: 1

Thanks for the mongodb_uri tip; I have changed it to

mongodb_uri = mongodb://127.0.0.1:27017/graylog

I upgraded Graylog to 4.2.1.

But the behavior is the same. The admin account cannot access the log search; the page just keeps loading. A user with read permissions does have access.

These are the Graylog logs after restarting 4.2.1 and accessing the global search with the admin account:

2021-11-13T23:29:09.450+01:00 INFO  [InputStateListener] Input [Raw/Plaintext UDP/5928105c3449fe43ed86a078] is now RUNNING
2021-11-13T23:29:09.453+01:00 INFO  [InputStateListener] Input [Beats/5cc03bcf30c3326d22a91ecb] is now RUNNING
2021-11-13T23:29:09.459+01:00 INFO  [InputStateListener] Input [GELF TCP/60ae4ded12727461c5e25d7f] is now RUNNING
2021-11-13T23:29:13.573+01:00 INFO  [Log] Rolled new log segment for 'messagejournal-0' in 48 ms.
2021-11-13T23:29:35.873+01:00 INFO  [Log] Scheduling log segment 3270496907 for log messagejournal-0 for deletion.
2021-11-13T23:30:35.883+01:00 INFO  [Log] Deleting segment 3270496907 from log messagejournal-0.
2021-11-13T23:30:35.913+01:00 INFO  [OffsetIndex] Deleting index /var/lib/graylog-server/journal/messagejournal-0/00000000003270496907.index.deleted
2021-11-13T23:30:51.239+01:00 ERROR [ServerRuntime$Responder] An I/O error has occurred while writing a response message entity to the container output stream.
org.glassfish.jersey.server.internal.process.MappableException: java.io.IOException: Connection is closed
        at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:67) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:139) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1116) ~[graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:635) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:373) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:363) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:258) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:292) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:274) [graylog.jar:?]
        at org.glassfish.jersey.internal.Errors.process(Errors.java:244) [graylog.jar:?]
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) [graylog.jar:?]
        at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234) [graylog.jar:?]
        at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) [graylog.jar:?]
        at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:356) [graylog.jar:?]
        at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:200) [graylog.jar:?]
        at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:180) [graylog.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_301]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_301]
        at java.lang.Thread.run(Unknown Source) [?:1.8.0_301]
Caused by: java.io.IOException: Connection is closed
        at org.glassfish.grizzly.nio.NIOConnection.assertOpen(NIOConnection.java:441) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:663) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.server.NIOOutputStreamImpl.write(NIOOutputStreamImpl.java:59) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:200) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:276) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2085) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeStartArray(UTF8JsonGenerator.java:290) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.JsonGenerator.writeStartArray(JsonGenerator.java:750) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.impl.StringCollectionSerializer.serialize(StringCollectionSerializer.java:80) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.impl.StringCollectionSerializer.serialize(StringCollectionSerializer.java:22) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:727) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:719) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:155) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:727) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:719) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:155) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:145) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:107) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:25) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:400) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1392) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:913) ~[graylog.jar:?]
        at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:625) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:242) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:227) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:139) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:85) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:139) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:61) ~[graylog.jar:?]
        ... 20 more
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:1.8.0_301]
        at sun.nio.ch.SocketDispatcher.write(Unknown Source) ~[?:1.8.0_301]
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source) ~[?:1.8.0_301]
        at sun.nio.ch.IOUtil.write(Unknown Source) ~[?:1.8.0_301]
        at sun.nio.ch.SocketChannelImpl.write(Unknown Source) ~[?:1.8.0_301]
        at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(TCPNIOUtils.java:125) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOUtils.writeCompositeBuffer(TCPNIOUtils.java:64) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:105) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:82) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:236) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:145) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:47) ~[graylog.jar:?]
        at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleWrite(TCPNIOTransportFilter.java:102) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.TransportFilter.handleWrite(TransportFilter.java:167) ~[graylog.jar:?]
        at org.glassfish.grizzly.ssl.SSLBaseFilter$SSLTransportFilterWrapper.handleWrite(SSLBaseFilter.java:1125) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:87) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:260) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:177) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:109) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:88) ~[graylog.jar:?]
        at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:53) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:866) ~[graylog.jar:?]
        at org.glassfish.grizzly.ssl.SSLBaseFilter.handleWrite(SSLBaseFilter.java:370) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:87) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:260) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:177) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:109) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:88) ~[graylog.jar:?]
        at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:53) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:866) ~[graylog.jar:?]
        at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:834) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.io.OutputBuffer.flushBuffer(OutputBuffer.java:1068) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:695) ~[graylog.jar:?]
        at org.glassfish.grizzly.http.server.NIOOutputStreamImpl.write(NIOOutputStreamImpl.java:59) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:200) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:276) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2085) ~[graylog.jar:?]
        at com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeFieldName(UTF8JsonGenerator.java:261) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:725) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:719) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:155) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:727) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:719) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:155) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:145) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:107) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:25) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:400) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1392) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:913) ~[graylog.jar:?]
        at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:625) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:242) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:227) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:139) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:85) ~[graylog.jar:?]
        at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:139) ~[graylog.jar:?]
        at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:61) ~[graylog.jar:?]
        ... 20 more
2021-11-13T23:31:02.002+01:00 INFO  [connection] Opened connection [connectionId{localValue:10, serverValue:39}] to 127.0.0.1:27017
2021-11-13T23:31:02.248+01:00 INFO  [connection] Opened connection [connectionId{localValue:12, serverValue:40}] to 127.0.0.1:27017
2021-11-13T23:31:02.251+01:00 INFO  [connection] Opened connection [connectionId{localValue:11, serverValue:38}] to 127.0.0.1:27017
2021-11-13T23:33:29.528+01:00 INFO  [Log] Rolled new log segment for 'messagejournal-0' in 2 ms.
2021-11-13T23:33:35.868+01:00 INFO  [Log] Scheduling log segment 3270582111 for log messagejournal-0 for deletion.
2021-11-13T23:34:35.874+01:00 INFO  [Log] Deleting segment 3270582111 from log messagejournal-0.
2021-11-13T23:34:35.927+01:00 INFO  [OffsetIndex] Deleting index /var/lib/graylog-server/journal/messagejournal-0/00000000003270582111.index.deleted

Since you have changed elasticsearch_index_prefix to graylog2: if you haven't already, I would put it back to graylog, since that is the prefix of the default index set that was created when your server was first set up.

Don't forget that after any changes to graylog.conf you need to restart the Graylog service.

What have you done for troubleshooting?

Have you tried disabling HTTPS to see if you can log in over plain HTTP with just the IP address?
Example:

http://192.168.1.23:9000
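You can also take the browser out of the picture and query the API directly. A sketch using the example address above; /api/system/lbstatus is Graylog's load-balancer status endpoint and needs no login:

```shell
# A healthy node answers this with HTTP 200 and the body "ALIVE"
# (192.168.1.23 is the example address; use your node's IP)
curl -si --max-time 5 http://192.168.1.23:9000/api/system/lbstatus \
  || echo "no answer from the Graylog API"
```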

  • Have you checked your Elasticsearch logs and status? If so, what do you see?
  • Have you checked MongoDB? If so, what do you see?
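For the Elasticsearch side, the cluster health API is the quickest status check. A sketch using the ipaddress placeholder from the elasticsearch.yml above:

```shell
# "green" or "yellow" is workable on a single node; "red" means
# unassigned primary shards, and searches will fail
curl -s --max-time 5 'http://ipaddress:9200/_cluster/health?pretty' \
  || echo "no answer from Elasticsearch"
```

Running systemctl status elasticsearch mongod graylog-server alongside this will also tell you quickly whether all three services are actually up.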

Hi,
thanks for your help. I have checked all the logs…

The solution for me was to create a new Graylog node and migrate to a new configuration.
The new Graylog configuration is running correctly.

Sorry we couldn't help fix your old one, but maybe there was a lot to learn from this situation.