1. Describe your incident:
Hi folks, I have Graylog operational and am successfully receiving the logs I need. However, I am noticing that the log data seems to be growing in the wrong directory. I had help from another team in creating the back-end environment, as I am very green at any kind of Linux administration. The two relevant filesystems are:
/dev/mapper/ubuntu--vg-ubuntu--lv 50565648 14612636 33354716 31% /
/dev/sdb2 411797928 73796 390736280 1% /data
2. Describe your environment:
OS Information:
Ubuntu 20.04.4 LTS
5.4.0-109-generic kernel
Package Version:
Graylog 4.0
3. What steps have you already taken to try and solve the problem?
There is a config file in /etc/graylog/server called log4j2.xml that has a rolling-file line controlling where Graylog writes its own server log.
When I tried editing this log4j2.xml to point to my /data/ directory on /dev/sdb2, I couldn’t even get Elasticsearch to start.
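For context, a rolling-file appender in a stock Graylog log4j2.xml looks roughly like the sketch below (the paths and values here are illustrative, not the poster's actual file):

```xml
<!-- Illustrative sketch of a Graylog log4j2.xml rolling-file appender.
     The fileName/filePattern values are assumptions based on a default
     install, not this poster's actual file. -->
<RollingFile name="rolling-file"
             fileName="/var/log/graylog-server/server.log"
             filePattern="/var/log/graylog-server/server.log.%i.gz">
  <PatternLayout pattern="%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} %-5p [%c{1}] %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="50MB"/>
  </Policies>
  <DefaultRolloverStrategy max="10" fileIndex="min"/>
</RollingFile>
```

Note that this file only controls Graylog's own server log. The message data that is actually growing on disk is stored by Elasticsearch and is governed by `path.data` in elasticsearch.yml, which is why editing log4j2.xml will not move it.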
4. How can the community help?
If anyone has done this before, I would greatly appreciate some guidance. I was hoping it would be as simple as moving the data directory in the Graylog GUI but it didn’t look like that was the case.
Thank you for the response. I edited the elasticsearch.yml file to point to the new directories I created. When I tried to restart Elasticsearch, it failed and did not appear to write any logs to the log destination I specified:
root@graylog-prd:/etc/elasticsearch# systemctl restart elasticsearch.service
Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
root@graylog-prd:/etc/elasticsearch# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-05-06 10:19:55 EDT; 4s ago
Docs: https://www.elastic.co
Process: 21776 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 21776 (code=exited, status=1/FAILURE)
May 06 10:19:55 graylog-prd systemd-entrypoint[21776]: at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161)
May 06 10:19:55 graylog-prd systemd-entrypoint[21776]: at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
May 06 10:19:55 graylog-prd systemd-entrypoint[21776]: at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127)
May 06 10:19:55 graylog-prd systemd-entrypoint[21776]: at org.elasticsearch.cli.Command.main(Command.java:90)
May 06 10:19:55 graylog-prd systemd-entrypoint[21776]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126)
May 06 10:19:55 graylog-prd systemd-entrypoint[21776]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
May 06 10:19:55 graylog-prd systemd-entrypoint[21776]: For complete error details, refer to the log at /data/elasticsearch/log/graylog.log
May 06 10:19:55 graylog-prd systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
May 06 10:19:55 graylog-prd systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
May 06 10:19:55 graylog-prd systemd[1]: Failed to start Elasticsearch.
root@graylog-prd:/etc/elasticsearch# more /data/elasticsearch/log/graylog.log
more: stat of /data/elasticsearch/log/graylog.log failed: No such file or directory
root@graylog-prd:/etc/elasticsearch# cd /data/elasticsearch//log/
root@graylog-prd:/data/elasticsearch/log# ls -ll
total 0
Did you move the existing Elasticsearch files to the new directories before starting Elasticsearch? Can you post your changed elasticsearch.yml (obfuscated where needed, using the </> forum tool to make it pretty)?
I did not move any files over, as I thought I might only have to update the log directory and new files would be written to the destination. Is that not the case?
Below is what I have in the current .yml file:
root@graylog-prd:/etc/elasticsearch# more elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: graylog
action.auto_create_index: false
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch/data
#
# Path to log files:
#
path.logs: /data/elasticsearch/log
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
root@graylog-prd:/etc/elasticsearch#
I was reading through this, and it looks like it should write new data there; I'm not sure if it would be unhappy about the missing existing data, though. As the article states, also make sure your directory permissions are correct.
Permissions for the Elasticsearch data directory should look something like this:
[root@graylog graylog_user]# cd /var/lib/elasticsearch
[root@graylog elasticsearch]# ls -al
total 2.2G
drwxr-s---. 3 elasticsearch elasticsearch 70 Jan 12 2021 .
drwxr-xr-x. 58 root root 4.0K Dec 7 21:11 ..
drwxr-xr-x. 3 elasticsearch elasticsearch 14 Jan 5 2018 nodes
-rw-------. 1 elasticsearch elasticsearch 1.1G Jan 24 2019 java_pid24483.hprof
-rw-------. 1 elasticsearch elasticsearch 1.1G Jan 24 2019 java_pid61896.hprof
[root@graylog elasticsearch]#
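The ownership fix implied above can be rehearsed before touching the real host. In this sketch, `ES_USER` and `ES_HOME` default to the current user and a scratch directory (both are assumptions so the commands can be tried safely); the comments show the real-host equivalents:

```shell
#!/bin/sh
# Rehearsal sketch of preparing a relocated Elasticsearch directory.
# On the real host you would run, as root and with Elasticsearch stopped:
#   mkdir -p /data/elasticsearch/data /data/elasticsearch/log
#   chown -R elasticsearch:elasticsearch /data/elasticsearch
#   chmod 750 /data/elasticsearch /data/elasticsearch/data /data/elasticsearch/log
# Here ES_USER defaults to the current user and ES_HOME to a scratch
# directory, so the same steps can be tried without root.
set -e
ES_USER="${ES_USER-$(id -un)}"
ES_HOME="${ES_HOME-$(mktemp -d)/elasticsearch}"

mkdir -p "$ES_HOME/data" "$ES_HOME/log"
chown -R "$ES_USER" "$ES_HOME"
chmod 750 "$ES_HOME" "$ES_HOME/data" "$ES_HOME/log"
ls -ld "$ES_HOME" "$ES_HOME/data" "$ES_HOME/log"
```

If the service user cannot traverse and write these directories, Elasticsearch will die on startup before it can even open its log file, which matches the empty `/data/elasticsearch/log` seen earlier in the thread.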
It looks like someone used a mount point for the /data directory.
Since you're using the following…
Check with this command:
root# lsblk
If /dev/sdb2 is not shown as mounted, check your fstab file at /etc/fstab. If you see /dev/sdb2 in that file, you can run mount -a, which will mount those entries without rebooting.
I have seen members change the /data directory before; after the server rebooted, the partition didn't remount.
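For reference, an fstab entry for such a mount might look like the fragment below (the UUID and filesystem type are placeholders; find the real values with `blkid /dev/sdb2` or `lsblk -f`):

```
# Illustrative /etc/fstab entry for the /data mount -- UUID and fs type
# are placeholders, not values from this thread:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  ext4  defaults  0  2

# Then, without rebooting:
#   mount -a    # mounts everything listed in fstab that is not yet mounted
#   lsblk       # confirm /dev/sdb2 now shows MOUNTPOINT /data
```

Using the UUID rather than /dev/sdb2 keeps the mount stable even if device names change between reboots.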
I appreciate your time in responding. I was able to get things working with a colleague of mine who is a bit more versed in Ubuntu than I am. We read through the comments in this thread together before diving in, so the info you all provided was extremely helpful.
We ended up switching around the configuration a bit to:
A) Allow Elasticsearch the permissions it needed (it did not have them, which is why Elasticsearch was not starting after I started mucking around)
B) Create the Elasticsearch directories under my new data directory using an approach that looks more similar to a standard installation
Under my /data/ directory, we created /lib/ and /log/, then migrated Elasticsearch from its original locations to these two directories.
We reverted the original yaml file to its defaults of /var/log/elasticsearch and /var/lib/elasticsearch, and we set up symbolic links in those locations:
cd /var/lib && ln -s /data/lib/elasticsearch elasticsearch
cd /var/log && ln -s /data/log/elasticsearch elasticsearch
After restarting the services, things appear to be working as intended with the log data saving in the right location.
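The symlink layout described above can be rehearsed in a scratch prefix before doing it on a live server. `ROOT` here is a hypothetical prefix added so the commands can be tried safely; on the real host ROOT would be empty, the commands run as root, and the services stopped with the files already migrated to /data:

```shell
#!/bin/sh
# Rehearsal of the symlink layout from this thread. ROOT is a hypothetical
# prefix (an assumption for safe testing); on the real host it would be
# empty, with Graylog/Elasticsearch stopped and data already moved.
set -e
ROOT="${ROOT-$(mktemp -d)}"

mkdir -p "$ROOT/data/lib/elasticsearch" "$ROOT/data/log/elasticsearch" \
         "$ROOT/var/lib" "$ROOT/var/log"

# /var/lib/elasticsearch and /var/log/elasticsearch become symlinks into /data,
# so elasticsearch.yml can keep its default paths.
ln -sfn "$ROOT/data/lib/elasticsearch" "$ROOT/var/lib/elasticsearch"
ln -sfn "$ROOT/data/log/elasticsearch" "$ROOT/var/log/elasticsearch"

ls -ld "$ROOT/var/lib/elasticsearch" "$ROOT/var/log/elasticsearch"
```

The nice property of this approach is that the packaged service files, config defaults, and future package upgrades all keep working against the standard /var paths while the actual bytes land on /dev/sdb2.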