Graylog won't start after upgrading from 3.0.0 to 4.0.0

Hello,
This is urgent, please.
I have upgraded Graylog from 3.0.0 to 4.0.0, but after the upgrade Graylog won't start. I tried to start it manually and it doesn't work.

When I check the service status, I get:

● graylog-server.service - Graylog server
Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Mon 2021-03-15 10:07:26 CET; 9s ago
Docs: http://docs.graylog.org/
Process: 11286 ExecStart=/usr/share/graylog-server/bin/graylog-server (code=exited, status=1/FAILURE)
Main PID: 11286 (code=exited, status=1/FAILURE)

Please help me fix this problem.

Hi @abouhoun, can you paste here the last, say, 200 lines of your graylog.log?

Normally you can find it inside /var/log.
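
For example, on a package install the file is usually /var/log/graylog-server/server.log, and something like this should grab the last lines of it (a sketch, assuming that default path):

    sudo tail -n 200 /var/log/graylog-server/server.log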

These are the last logs I have in /var/log/graylog-server/server.log:

# Created by NetworkManager
# Merged from /etc/dhcp/dhclient.conf
# Configuration file for /sbin/dhclient.
#
# This is a sample configuration file for dhclient. See dhclient.conf's
# man page for more information about the syntax of this file
# and a more comprehensive list of the parameters understood by
# dhclient.
#
# Normally, if the DHCP server provides reasonable information and does
# not leave anything out (like the domain name, for example), then
# few changes must be made to this file, if any.

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
#send dhcp-client-identifier 1:0:a0:24:ab:fb:9c;
#send dhcp-lease-time 3600;
#supersede domain-name "fugue.com home.vix.com";
#prepend domain-name-servers 127.0.0.1;
#require subnet-mask, domain-name-servers;
#retry 60;
#reboot 10;
#select-timeout 5;
#initial-interval 2;
#script "/sbin/dhclient-script";
#media "-link0 -link1 -link2", "link0 link1";
#reject 192.33.137.209;
#alias {
#  interface "eth0";
#  fixed-address 192.5.5.213;
#  option subnet-mask 255.255.255.255;
#}
#lease {
#  interface "eth0";
#  fixed-address 192.33.137.200;
#  medium "link0 link1";
#  option host-name "andare.swiftmedia.com";
#  option subnet-mask 255.255.255.0;
#  option broadcast-address 192.33.137.255;
#  option routers 192.33.137.250;
#  option domain-name-servers 127.0.0.1;
#  renew 2 2000/1/12 00:00:01;
#  rebind 2 2000/1/12 00:00:01;
#  expire 2 2000/1/12 00:00:01;
#}
send host-name "HB-PROD-GRAYLOG"; # added by NetworkManager

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
option ms-classless-static-routes code 249 = array of unsigned integer 8;
option wpad code 252 = string;

request; # override dhclient defaults
also request subnet-mask;
also request broadcast-address;
also request time-offset;
also request routers;
also request domain-name;
also request domain-name-servers;
also request domain-search;
also request host-name;
also request dhcp6.name-servers;
also request dhcp6.domain-search;
also request dhcp6.fqdn;
also request dhcp6.sntp-servers;
also request netbios-name-servers;
also request netbios-scope;
also request interface-mtu;
also request rfc3442-classless-static-routes;
also request ntp-servers;
also request ms-classless-static-routes;
also request static-routes;
also request wpad;

2021-03-15T10:09:52.047+01:00 INFO [CmdLineTool] Loaded plugin: AWS plugins 4.0.5 [org.graylog.aws.AWSPlugin]
2021-03-15T10:09:52.050+01:00 INFO [CmdLineTool] Loaded plugin: Integrations 4.0.5 [org.graylog.integrations.IntegrationsPlugin]
2021-03-15T10:09:52.051+01:00 INFO [CmdLineTool] Loaded plugin: Collector 4.0.5 [org.graylog.plugins.collector.CollectorPlugin]
2021-03-15T10:09:52.051+01:00 INFO [CmdLineTool] Loaded plugin: Threat Intelligence Plugin 4.0.5 [org.graylog.plugins.threatintel.ThreatIntelPlugin]
2021-03-15T10:09:52.052+01:00 INFO [CmdLineTool] Loaded plugin: Elasticsearch 6 Support 4.0.5+d95b909 [org.graylog.storage.elasticsearch6.Elasticsearch6Plugin]
2021-03-15T10:09:52.052+01:00 INFO [CmdLineTool] Loaded plugin: Elasticsearch 7 Support 4.0.5+d95b909 [org.graylog.storage.elasticsearch7.Elasticsearch7Plugin]
2021-03-15T10:09:52.069+01:00 ERROR [CmdLineTool] Invalid configuration
com.github.joschi.jadconfig.ParameterException: Couldn't convert value for parameter "root_timezone"
at com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:141) ~[graylog.jar:?]
at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:99) ~[graylog.jar:?]
at org.graylog2.bootstrap.CmdLineTool.processConfiguration(CmdLineTool.java:353) [graylog.jar:?]
at org.graylog2.bootstrap.CmdLineTool.readConfiguration(CmdLineTool.java:346) [graylog.jar:?]
at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:180) [graylog.jar:?]
at org.graylog2.bootstrap.Main.main(Main.java:50) [graylog.jar:?]
Caused by: com.github.joschi.jadconfig.ParameterException: Couldn't convert value "UTC+1" to DateTimeZone.
at com.github.joschi.jadconfig.jodatime.converters.DateTimeZoneConverter.convertFrom(DateTimeZoneConverter.java:26) ~[graylog.jar:?]
at com.github.joschi.jadconfig.jodatime.converters.DateTimeZoneConverter.convertFrom(DateTimeZoneConverter.java:12) ~[graylog.jar:?]
at com.github.joschi.jadconfig.JadConfig.convertStringValue(JadConfig.java:167) ~[graylog.jar:?]
at com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:139) ~[graylog.jar:?]
... 5 more
Caused by: java.lang.IllegalArgumentException: The datetime zone id 'UTC+1' is not recognised
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:247) ~[graylog.jar:?]
at com.github.joschi.jadconfig.jodatime.converters.DateTimeZoneConverter.convertFrom(DateTimeZoneConverter.java:24) ~[graylog.jar:?]
at com.github.joschi.jadconfig.jodatime.converters.DateTimeZoneConverter.convertFrom(DateTimeZoneConverter.java:12) ~[graylog.jar:?]
at com.github.joschi.jadconfig.JadConfig.convertStringValue(JadConfig.java:167) ~[graylog.jar:?]
at com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:139) ~[graylog.jar:?]
... 5 more

Hi @abouhoun,

can you try commenting out the line for the "root_timezone" option in your config file and see if it starts now?
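
If you would rather keep the setting than comment it out, here is a sketch of what a working value looks like in /etc/graylog/server/server.conf. The error comes from Joda-Time, which does not recognise "UTC+1"; an IANA zone ID (and, as far as I know, a fixed offset such as "+01:00") is accepted:

    #root_timezone = UTC+1          # rejected: "The datetime zone id 'UTC+1' is not recognised"
    root_timezone = Africa/Algiers  # an IANA zone ID works
    #root_timezone = +01:00         # a fixed offset should also be accepted by Joda-Time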

Hello,
Thank you.
I changed UTC+1 to Africa/Algiers.

  • Now the Graylog interface works, but only on the server itself and only with 127.0.0.1:9000, not with localhost:9000 nor from a client PC.
  • Another thing: when I open the interface locally, I find that Graylog does not receive any logs in the streams, although I see the numbers changing on the throughput of the inputs.
    Until now I haven't found any solution for that.

Hi,

in your config file, add or change the option http_bind_address as shown below:

http_bind_address = 0.0.0.0:9000

This setting binds Graylog to all interfaces, so you can access it from any IP address.

If you have streams other than the ones Graylog gives you by default, you will probably have to recreate them.
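
After changing that and restarting the service, a quick way to confirm the new bind address took effect is something like the following (a sketch, assuming a systemd setup, the default port 9000, and the default /api/ path; replace <server-ip> with your Graylog host's address):

    sudo systemctl restart graylog-server
    # Graylog should now be listening on 0.0.0.0:9000 instead of 127.0.0.1:9000
    ss -tlnp | grep 9000
    # The API should also answer from another machine
    curl -i http://<server-ip>:9000/api/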

Hi there,

While folks are helping, please ensure that you format your code snippets to be more readable (see FAQ - Graylog Community for how to do this). While it seems like an extra thing to do when posting, it really does help make things more readable for those who assist.


Good morning,

unfortunately, when I uncomment the http_bind_address line and set
"http_bind_address = 0.0.0.0:9000"

the interface does not work at all, neither locally nor via a client.

For the second part, I did not understand well: do you mean that I must remove all the streams I had before the upgrade and recreate them completely?

This is my configuration file:

############################
# GRAYLOG CONFIGURATION FILE
############################

# * Entries are generally expected to be a single line of the form, one of the following:
#
# propertyName=propertyValue
# propertyName:propertyValue
#

# 
# name=Stephen
# name = Stephen
#
# 
# path=c:\\docs\\doc1
#

# If you are running more than one instances of Graylog server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting Graylog server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
# ATTENTION: This value must be the same on all Graylog nodes in the cluster.
# Changing this value after installation will render all user sessions and encrypted values in the database invalid. (e.g. encrypted access tokens)
password_secret = Adminlog@2019

# The default root user is named 'admin'
root_username = admin

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = 47a6703f5934c80ebb2ddb0d377ab377c3c260e43981649fd603e96cc83f3116

# The email address of the root user.
# Default is empty
#root_email = ""

# Default is UTC
root_timezone = Africa/Algiers

# Set the bin directory here (relative or absolute)
# This directory contains binaries that are used by the Graylog server.
# Default: bin
bin_dir = /usr/share/graylog-server/bin

# Set the data directory here (relative or absolute)
# This directory is used to store Graylog server state.
# Default: data
data_dir = /var/lib/graylog-server

# Set plugin directory here (relative or absolute)
plugin_dir = /usr/share/graylog-server/plugin

###############
# HTTP settings
###############

#### HTTP bind address
#
# The network interface used by the Graylog HTTP interface.
#
# This network interface must be accessible by all Graylog nodes in the cluster and by all clients
# using the Graylog web interface.

# If the port is omitted, Graylog will use port 9000 by default.

# Default: 127.0.0.1:9000
http_bind_address = 0.0.0.0:9000
#http_bind_address = [2001:db8::1]:9000

#### HTTP publish URI

# If $http_bind_address contains a wildcard IPv4 address (0.0.0.0), the first non-loopback IPv4 address of this machine will be used.
# This configuration setting *must not* contain a wildcard address!

# Default: http://$http_bind_address/
http_publish_uri = http://127.0.0.1:9000/

#### External Graylog URI

# This setting can be overriden on a per-request basis with the "X-Graylog-Server-URL" HTTP request header.

# Default: $http_publish_uri
http_external_uri = http://127.0.0.1:9000/

#### Enable CORS headers for HTTP interface
#
# This allows browsers to make Cross-Origin requests from any origin.
# This is disabled for security reasons and typically only needed if running graylog
# with a separate server for frontend development.
#
# Default: false
#http_enable_cors = false

#### Enable GZIP support for HTTP interface
#
# This compresses API responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.
#http_enable_gzip = false

# The maximum size of the HTTP request headers in bytes.
#http_max_header_size = 8192

# The size of the thread pool used exclusively for serving the HTTP interface.
#http_thread_pool_size = 16

################
# HTTPS settings
################

#### Enable HTTPS support for the HTTP interface
#
# This secures the communication with the HTTP interface with TLS to prevent request forgery and eavesdropping.
#
# Default: false
#http_enable_tls = true

# The X.509 certificate chain file in PEM format to use for securing the HTTP interface.
#http_tls_cert_file = /path/to/graylog.crt

# The PKCS#8 private key file in PEM format to use for securing the HTTP interface.
#http_tls_key_file = /path/to/graylog.key

# The password to unlock the private key used for securing the HTTP interface.
#http_tls_key_password = secret


# Comma separated list of trusted proxies that are allowed to set the client address with X-Forwarded-For
# header. May be subnets, or hosts.
#trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128

# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
#
# Default: http://127.0.0.1:9200
#elasticsearch_hosts = http://node1:9200,http://user:password@node2:19200

# Maximum amount of time to wait for successfull connection to Elasticsearch HTTP port.
#
# Default: 10 Seconds
#elasticsearch_connect_timeout = 10s

# Maximum amount of time to wait for reading back a response from an Elasticsearch server.
# (e. g. during search, index creation, or index time-range calculations)
#
# Default: 60 seconds
#elasticsearch_socket_timeout = 60s

# Maximum idle time for an Elasticsearch connection. If this is exceeded, this connection will
# be tore down.
#
# Default: inf
#elasticsearch_idle_timeout = -1s

# Maximum number of total connections to Elasticsearch.
#
# Default: 200
#elasticsearch_max_total_connections = 200

# Maximum number of total connections per Elasticsearch route (normally this means per
# elasticsearch server).
#
# Default: 20
#elasticsearch_max_total_connections_per_route = 20

# Maximum number of times Graylog will retry failed requests to Elasticsearch.
#
# Default: 2
#elasticsearch_max_retries = 2

# Enable automatic Elasticsearch node discovery through Nodes Info,
# see https://www.elastic.co/guide/en/elasticsearch/reference/5.4/cluster-nodes-info.html
#
# WARNING: Automatic node discovery does not work if Elasticsearch requires authentication, e. g. with Shield.
#
# Default: false
#elasticsearch_discovery_enabled = true

# Filter for including/excluding Elasticsearch nodes in discovery according to their custom attributes,
# see https://www.elastic.co/guide/en/elasticsearch/reference/5.4/cluster.html#cluster-nodes
#
# Default: empty
#elasticsearch_discovery_filter = rack:42

# Frequency of the Elasticsearch node discovery.
#
# Default: 30s
# elasticsearch_discovery_frequency = 30s

# Set the default scheme when connecting to Elasticsearch discovered nodes
#
# Default: http (available options: http, https)
#elasticsearch_discovery_default_scheme = http

# Enable payload compression for Elasticsearch requests.
#
# Default: false
#elasticsearch_compression_enabled = true

# Enable use of "Expect: 100-continue" Header for Elasticsearch index requests.
# If this is disabled, Graylog cannot properly handle HTTP 413 Request Entity Too Large errors.
#
# Default: true
#elasticsearch_use_expect_continue = true

# Graylog will use multiple indices to store documents in. You can configured the strategy it uses to determine
# when to rotate the currently active write index.
# It supports multiple rotation strategies:
#   - "count" of messages per index, use elasticsearch_max_docs_per_index below to configure
#   - "size" per index, use elasticsearch_max_size_per_index below to configure
# valid values are "count", "size" and "time", default is "count"
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
rotation_strategy = count

# (Approximate) maximum number of documents in an Elasticsearch index before a new index
# is being created, also see no_retention and elasticsearch_max_number_of_indices.
# Configure this if you used 'rotation_strategy = count' above.
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
elasticsearch_max_docs_per_index = 30000000

# (Approximate) maximum size in bytes per Elasticsearch index on disk before a new index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1GB.
# Configure this if you used 'rotation_strategy = size' above.
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
#elasticsearch_max_size_per_index = 1073741824

# (Approximate) maximum time before a new Elasticsearch index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1 day.
# Configure this if you used 'rotation_strategy = time' above.
# Please note that this rotation period does not look at the time specified in the received messages, but is
# using the real clock value to decide when to rotate the index!
# Specify the time using a duration and a suffix indicating which unit you want:
#  1w  = 1 week
#  1d  = 1 day
#  12h = 12 hours
# Permitted suffixes are: d for day, h for hour, m for minute, s for second.
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
#elasticsearch_max_time_per_index = 1d

# Disable checking the version of Elasticsearch for being compatible with this Graylog release.
# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!
#elasticsearch_disable_version_check = true

# Disable message retention on this node, i. e. disable Elasticsearch index rotation.
#no_retention = false

# How many indices do you want to keep?
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
elasticsearch_max_number_of_indices = 20

# Decide what happens with the oldest indices when the maximum number of indices is reached.
# The following strategies are availble:
#   - delete # Deletes the index completely (Default)
#   - close # Closes the index and hides it from the system. Can be re-opened later.
#
# ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these
#            to your previous 1.x settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
retention_strategy = delete

# How many Elasticsearch shards and replicas should be used per index? Note that this only applies to newly created indices.
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
elasticsearch_shards = 4
elasticsearch_replicas = 0

# Prefix for all Elasticsearch indices and index aliases managed by Graylog.
#
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
elasticsearch_index_prefix = graylog

# Name of the Elasticsearch index template used by Graylog to apply the mandatory index mapping.
# Default: graylog-internal
#
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
#elasticsearch_template_name = graylog-internal

# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
# be enabled with care. See also: http://docs.graylog.org/en/2.1/pages/queries.html
allow_leading_wildcard_searches = false

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = false

# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is a good idea.
# All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern, language, snowball, custom
# Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/2.3/analysis.html
# Note that this setting only takes effect on newly created indices.
#
# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
elasticsearch_analyzer = standard

# Global timeout for index optimization (force merge) requests.
# Default: 1h
#elasticsearch_index_optimization_timeout = 1h

# Maximum number of concurrently running index optimization (force merge) jobs.
# If you are using lots of different index sets, you might want to increase that number.
# Default: 20
#elasticsearch_index_optimization_jobs = 20

# Time interval for index range information cleanups. This setting defines how often stale index range information
# is being purged from the database.
# Default: 1h
#index_ranges_cleanup_interval = 1h

# Time interval for the job that runs index field type maintenance tasks like cleaning up stale entries. This doesn't
# need to run very often.
# Default: 1h
#index_field_type_periodical_interval = 1h

# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the Elasticsearch output
# module will get at once and write to Elasticsearch in a batch call. If the configured batch size has not been
# reached within output_flush_interval seconds, everything that is available will be flushed at once. Remember
# that every outputbuffer processor manages its own batch and performs its own batch write calls.
# ("outputbuffer_processors" variable)
output_batch_size = 500

# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two
# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages
# for this time period is less than output_batch_size * outputbuffer_processors.
output_flush_interval = 1

# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and
# over again. To prevent this, the following configuration options define after how many faults an output will
# not be tried again for an also configurable amount of seconds.
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

# The number of parallel running processors.
# Raise this number if your buffers are filling up.
processbuffer_processors = 5
outputbuffer_processors = 3

# The following settings (outputbuffer_processor_*) configure the thread pools backing each output buffer processor.
# See https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html for technical details

# When the number of threads is greater than the core (see outputbuffer_processor_threads_core_pool_size),
# this is the maximum time in milliseconds that excess idle threads will wait for new tasks before terminating.
# Default: 5000
#outputbuffer_processor_keep_alive_time = 5000

# The number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
# Default: 3
#outputbuffer_processor_threads_core_pool_size = 3

# The maximum number of threads to allow in the pool
# Default: 30
#outputbuffer_processor_threads_max_pool_size = 30

# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
#  - yielding
#     Compromise between performance and CPU usage.
#  - sleeping
#     Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
#  - blocking
#     High throughput, low latency, higher CPU usage.
#  - busy_spinning
#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = blocking

# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 65536

inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking

# Enable the disk based message journal.
message_journal_enabled = true


message_journal_dir = /var/lib/graylog-server/journal

# Journal hold messages before they could be written to Elasticsearch.
# For a maximum of 12 hours or 5 GB whichever happens first.
# During normal operation the journal will be smaller.
message_journal_max_age = 24h
message_journal_max_size = 30gb

#message_journal_flush_age = 1m
#message_journal_flush_interval = 1000000
#message_journal_segment_age = 1h
#message_journal_segment_size = 100mb

# Number of threads used exclusively for dispatching internal events. Default is 2.
#async_eventbus_processors = 2

# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual
# shutdown process. Set to 0 if you have no status checking load balancers in front.
lb_recognition_period_seconds = 3

# Journal usage percentage that triggers requesting throttling for this server node from load balancers. The feature is
# disabled if not set.
#lb_throttle_threshold_percentage = 95

# Every message is matched against the configured streams and it can happen that a stream contains rules which
# take an unusual amount of time to run, for example if its using regular expressions that perform excessive backtracking.
# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other
# streams, Graylog limits the execution time for each stream.
# The default values are noted below, the timeout is in milliseconds.
# If the stream matching for one stream took longer than the timeout value, and this happened more than "max_faults" times
# that stream is disabled and a notification is shown in the web interface.
#stream_processing_timeout = 2000
#stream_processing_max_faults = 3

# Since 0.21 the Graylog server supports pluggable output modules. This means a single message can be written to multiple
# outputs. The next setting defines the timeout for a single output module, including the default output module where all
# messages end up.
#
# Time in milliseconds to wait for all message outputs to finish writing a single message.
#output_module_timeout = 10000

# Time in milliseconds after which a detected stale master node is being rechecked on startup.
#stale_master_timeout = 2000

# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.
#shutdown_timeout = 30000

# MongoDB connection string
# See https://docs.mongodb.com/manual/reference/connection-string/ for details
mongodb_uri = mongodb://localhost/graylog

# Authenticate against the MongoDB server
# '+'-signs in the username or password need to be replaced by '%2B'
#mongodb_uri = mongodb://grayloguser:secret@localhost:27017/graylog

# Use a replica set instead of a single host
#mongodb_uri = mongodb://grayloguser:secret@localhost:27017,localhost:27018,localhost:27019/graylog?replicaSet=rs01

# DNS Seedlist https://docs.mongodb.com/manual/reference/connection-string/#dns-seedlist-connection-format
#mongodb_uri = mongodb+srv://server.example.org/graylog

# Increase this value according to the maximum connections your MongoDB server can handle from a single client
# if you encounter MongoDB connection problems.
mongodb_max_connections = 1000

# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5
# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5,
# then 500 threads can block. More than that and an exception will be thrown.
# http://api.mongodb.com/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultiplier
mongodb_threads_allowed_to_block_multiplier = 5


# Email transport
transport_email_enabled = true
transport_email_hostname = 192.168.0.212
transport_email_port = 25
transport_email_use_auth = false
#transport_email_auth_username = you@example.com
#transport_email_auth_password = secret
#transport_email_subject_prefix = [graylog]
transport_email_from_email = LOG-PROD-NOTIF@graylog.hbt

# Encryption settings
#
# ATTENTION:
#    Using SMTP with STARTTLS *and* SMTPS at the same time is *not* possible.

# Use SMTP with STARTTLS, see https://en.wikipedia.org/wiki/Opportunistic_TLS
transport_email_use_tls = false

# Use SMTP over SSL (SMTPS), see https://en.wikipedia.org/wiki/SMTPS
# This is deprecated on most SMTP services!
transport_email_use_ssl = false

# Specify and uncomment this if you want to include links to the stream in your stream alert mails.
# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.
#transport_email_web_interface_url = https://graylog.example.com

# The default connect timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 5s
#http_connect_timeout = 5s

# The default read timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 10s
#http_read_timeout = 10s

# The default write timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 10s
#http_write_timeout = 10s

# HTTP proxy for outgoing HTTP connections
# ATTENTION: If you configure a proxy, make sure to also configure the "http_non_proxy_hosts" option so internal
#            HTTP connections with other nodes does not go through the proxy.
# Examples:
#   - http://proxy.example.com:8123
#   - http://username:password@proxy.example.com:8123
#http_proxy_uri =

# A list of hosts that should be reached directly, bypassing the configured proxy server.
# This is a list of patterns separated by ",". The patterns may start or end with a "*" for wildcards.
# Any host matching one of these patterns will be reached through a direct connection instead of through a proxy.
# Examples:
#   - localhost,127.0.0.1
#   - 10.0.*,*.example.com
#http_non_proxy_hosts =

# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
#disable_index_optimization = true

# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is 1.

# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these
#            to your previous settings so they will be migrated to the database!
#            This configuration setting is only used on the first start of Graylog. After that,
#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.
#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.
#index_optimization_max_num_segments = 1

# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification
# will be generated to warn the administrator about possible problems with the system. Default is 1 second.
#gc_warning_threshold = 1s

# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.
#ldap_connection_timeout = 2000

# Disable the use of SIGAR for collecting system stats
#disable_sigar = false

# The default cache time for dashboard widgets. (Default: 10 seconds, minimum: 1 second)
#dashboard_widget_default_cache_time = 10s

# For some cluster-related REST requests, the node must query all other nodes in the cluster. This is the maximum number
# of threads available for this. Increase it, if '/cluster/*' requests take long to complete.
# Should be http_thread_pool_size * average_cluster_size if you have a high number of concurrent users.
proxied_requests_thread_pool_size = 32

# The server is writing processing status information to the database on a regular basis. This setting controls how
# often the data is written to the database.
# Default: 1s (cannot be less than 1s)
#processing_status_persist_interval = 1s

# Configures the threshold for detecting outdated processing status records. Any records that haven't been updated
# in the configured threshold will be ignored.
# Default: 1m (one minute)
#processing_status_update_threshold = 1m

# Configures the journal write rate threshold for selecting processing status records. Any records that have a lower
# one minute rate than the configured value might be ignored. (dependent on number of messages in the journal)
# Default: 1
#processing_status_journal_write_rate_threshold = 1

# Configures the prefix used for graylog event indices
# Default: gl-events
#default_events_index_prefix = gl-events

# Configures the prefix used for graylog system event indices
# Default: gl-system-events
#default_system_events_index_prefix = gl-system-events

# Automatically load content packs in "content_packs_dir" on the first start of Graylog.
#content_packs_loader_enabled = false

# The directory which contains content packs which should be loaded on the first start of Graylog.
#content_packs_dir = data/contentpacks

# A comma-separated list of content packs (files in "content_packs_dir") which should be applied on
# the first start of Graylog.
# Default: empty
#content_packs_auto_install = grok-patterns.json

# The allowed TLS protocols for system wide TLS enabled servers. (e.g. message inputs, http interface)
# Setting this to an empty value, leaves it up to system libraries and the used JDK to chose a default.
# Default: TLSv1.2,TLSv1.3  (might be automatically adjusted to protocols supported by the JDK) 
#enabled_tls_protocols= TLSv1.2,TLSv1.3

Hi,

First of all, keep in mind that this is a public forum, so avoid adding sensitive data, like passwords, to your posts.

About your configuration, the UI problem is here:

# Default: http://$http_bind_address/
http_publish_uri = http://127.0.0.1:9000/
# Default: $http_publish_uri
http_external_uri = http://127.0.0.1:9000/

Once you define "http_bind_address", you should comment out or remove "http_publish_uri" and "http_external_uri" from your config file.
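
For reference, a sketch of how that part of server.conf would end up (assuming you keep the 0.0.0.0 bind address):

    http_bind_address = 0.0.0.0:9000
    #http_publish_uri = http://127.0.0.1:9000/
    #http_external_uri = http://127.0.0.1:9000/

With those two lines commented out, Graylog falls back to the defaults described in the config comments: the publish URI is derived from the first non-loopback address, and the external URI defaults to the publish URI, which is usually what you want on a single node.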

Sorry for not making myself clear; it's hard to know what's wrong with your streams since I don't have access to your end.
Something probably broke during the migration, and I think it's a better idea to create a new stream step by step and see if it works.


Honestly, I thank you so much for your support. As for access to the interface, thanks to you I have solved the problem.

Another thing: I tried to create a new stream in order to collect the messages, but what I did not understand is the following:

  • On the stream, the numbers change as shown in picture (1). I even tested the stream rules to make sure that many messages match this stream, but when I open the stream to read the messages, no messages appear, as if there were nothing, as shown in picture (2).

Picture (1): opensessionaudit


Picture (2)

:) As for the passwords and the secret that I put in the server.conf file on this page, they are not real; I replaced them with other values before posting them here.

Honestly, I thank you so much for your support. As for access to the interface, thanks to you I have solved the problem.

My pleasure ;)

About the stream, as I said before, it's pretty difficult to know what's wrong because I don't know the rules inside your stream or the format of your messages.

Was it working before the upgrade?

Yes, it was working before the upgrade; now it is not. There are also some messages in the overview indicating problems with journal utilization and an Elasticsearch watermark. I don't know if there is a relation with this problem.

Yes, it can be related.

If the filesystem used by your Elasticsearch cluster is running out of space, it won't write any of the messages you receive.

You have to fix that in order to put it back online.

In Graylog v4 you have to use Elasticsearch Curator to make that happen. There's more about this issue in another thread.
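
If you want to check whether the disk watermark is the culprit, something like this against the Elasticsearch HTTP API shows disk usage per node and the cluster status (a sketch, assuming Elasticsearch listens on localhost:9200):

    # Disk usage and shard allocation per Elasticsearch node
    curl -s 'http://localhost:9200/_cat/allocation?v'
    # Overall cluster health; red or yellow often accompanies watermark problems
    curl -s 'http://localhost:9200/_cluster/health?pretty'

By default (Elasticsearch 6 and 7) the flood-stage watermark is 95% disk usage, after which indices are switched to read-only, which would explain messages piling up in the Graylog journal instead of being indexed.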

Hello reimlima

I have installed a completely new machine to fix the problem:
after I installed Graylog 4.0.5 and finished adding all the inputs, I ran into a journal utilization problem after one day,
and it seems to me that I am missing many messages because of this.
When I check the machine's resource usage, I find that Elasticsearch uses only 10% of the RAM (22 GB), but its CPU usage exceeds 115%.

I don't know if this is related to the journal and whether I can solve this problem completely.

Hi @abouhoun.

I'm pretty sure you can solve it once and for all.

Keep this in mind: the journal in Graylog is a temporary buffer where Graylog holds received messages until Elasticsearch is ready to receive them. If your journal is full, you will sadly face message loss.

If your Elasticsearch is the bottleneck here, you should take a look at "Architectural Considerations" on Graylog's doc page.

The most important thing there related to your issue is:
"Elasticsearch nodes should have as much RAM as possible and the fastest disks you can get. Everything depends on I/O speed here."

Dedicating a server to Elasticsearch is something important to consider too.
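
One thing worth checking: Elasticsearch only ever uses the heap it is given, no matter how much RAM the machine has, which may be why you see it using just 10% of 22 GB. A sketch of how the heap is usually raised on a package-based install (assuming the settings live in /etc/elasticsearch/jvm.options; the usual guidance is roughly half the machine's RAM, and not more than about 31 GB):

    # /etc/elasticsearch/jvm.options (sketch)
    -Xms10g
    -Xmx10g

    # then restart Elasticsearch
    sudo systemctl restart elasticsearch

Giving Elasticsearch more heap and fast disks helps it index faster, which in turn lets the Graylog journal drain instead of filling up.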

@abouhoun
Hello,
Something caught my eye with this post, and I wanted to ask you about the "Must Match 25 configured rules" setting for your stream. That seems like a lot of rules. Are you sure that the messages you tested against this stream matched all 25 rules?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.