Log redirection to HDD

I have two disks, an SSD and an HDD. Graylog and Elasticsearch are installed on the SSD, but I want the logs to be on the HDD. How do I redirect the path?

I’d say: “go for it!”. Sounds like a good plan.

However, helping you set up your server, its storage, and its operating system is outside the scope of these forums.

Please help me: where is the path I need to change so the logs end up there? :disappointed:

Have you checked the docs?

But what you need isn’t clear. You could set up the SSD as a ZFS cache or an LVM cache, or just mount the HDD under a folder.
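Mounting the HDD under a folder is the simplest of those options. A rough sketch, assuming the HDD shows up as /dev/sdb1 (check `lsblk` first; the device name and mount point here are assumptions):

```shell
# Destructive / root-only steps shown as comments -- run these manually as root:
#   mkfs.ext4 /dev/sdb1          # format the HDD (wipes everything on it!)
#   mkdir -p /mnt/hdd            # create a mount point
#   mount /dev/sdb1 /mnt/hdd     # mount the HDD there

# The matching /etc/fstab line so the mount survives reboots, written here
# to a scratch file so nothing on the live system is touched:
fstab=$(mktemp)
echo '/dev/sdb1  /mnt/hdd  ext4  defaults  0 2' >> "$fstab"
cat "$fstab"
```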

May I suggest https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html and the good old phrase “RTFM”?

You are evil.
He/she has almost a million solutions to his/her problem, and you suggest only one.

But my first one was almost RTFM, and the second, UTFG.

I wouldn’t call myself evil - I just have no patience for stupid questions :smiley:

Thanks for the help. All the paths are listed on that page. I only need the one where the database is written, and where to find it in the Graylog or Elasticsearch config.

The database-writing location is NOT defined in Graylog; that’s all in Elasticsearch. The default location depends on your distro, oddly enough; I believe it’s under /var/lib/ for me :confused: I don’t know off the top of my head where Elastic configures its datastore location. As Ben suggests: RTFM.

For sh*ts and giggles you could just put all of /var on the HDD :slight_smile:

Or you can send the logs to /dev/null, and you won’t have any disk usage issues. It is also faster than a regular HDD.

me too

I’m sorry, I’m not a Linux specialist. So all it takes is changing these two paths in the Elasticsearch configuration to point where I want to save the logs, and it will work?
|Data files|/var/lib/elasticsearch/data|
|Log files|/var/log/elasticsearch/|

Yup that sounds alright, yes.
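For reference, in elasticsearch.yml those two locations are the `path.data` and `path.logs` settings. A sketch written against a scratch file (on a real install you would edit /etc/elasticsearch/elasticsearch.yml instead; /mnt/hdd is an assumed mount point for the HDD):

```shell
es_yml=$(mktemp)
cat >> "$es_yml" <<'EOF'
# Where Elasticsearch stores its indices (the "database"):
path.data: /mnt/hdd/elasticsearch/data
# Where Elasticsearch writes its own log files:
path.logs: /mnt/hdd/elasticsearch/log
EOF
cat "$es_yml"
```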

Of course, if you already had a running Elastic beforehand, then you will need to move all the data and log files. If you’re setting this up before running ES for the first time, you’ll be fine.
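The move itself would roughly be: stop the service, copy with attributes preserved, repoint the config. The copy step matters: `cp -a` (or `rsync -a`) keeps ownership and permissions, a plain `cp` does not. A sketch, with the root-only commands as comments and the paths being illustrative assumptions:

```shell
# Real sequence (run as root on the actual system):
#   systemctl stop elasticsearch
#   cp -a /var/lib/elasticsearch/. /mnt/hdd/elasticsearch/data/
#   cp -a /var/log/elasticsearch/. /mnt/hdd/elasticsearch/log/
#   ...edit path.data / path.logs, then: systemctl start elasticsearch

# Demo on scratch directories: -a preserves the 640 mode across the copy.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/segment.bin"
chmod 640 "$src/segment.bin"
cp -a "$src/." "$dst/"
stat -c '%a' "$dst/segment.bin"    # prints 640 (mode preserved)
```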

Two important things to keep in mind:

  • File permissions! If you’re switching directories, then these need to be set up with the correct access permissions so Elastic can access them!
  • SELinux! If you’re running SELinux, it is possible that the context for the new directories is not correct, which would block ES from working with them.
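A sketch of both fixes, assuming package-install defaults (user `elasticsearch`, new location /mnt/hdd/elasticsearch, both assumptions); the root-only commands are shown as comments:

```shell
# 1. File permissions: the new directories must be owned by the user ES runs as.
#   chown -R elasticsearch:elasticsearch /mnt/hdd/elasticsearch
#   chmod -R u=rwX,g=rX,o= /mnt/hdd/elasticsearch

# 2. SELinux: give the new path the same context as the stock one, then relabel.
#   semanage fcontext -a -e /var/lib/elasticsearch /mnt/hdd/elasticsearch/data
#   restorecon -Rv /mnt/hdd/elasticsearch

# Quick way to see what to match: check the mode and owner of the stock dirs.
# Demonstrated on a scratch dir standing in for /var/lib/elasticsearch:
stock=$(mktemp -d)
chmod 750 "$stock"
stat -c '%a %U' "$stock"
```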

As I suggested earlier: why not just move the whole of /var onto that HDD? That would save you a lot of trouble.


You can put the logs on a different disk by following the example in my post here




Hi, I started with the first method: I attached the second disk drive and copied the /var directory. After this operation the system does not start (screen1); the second screenshot (screen2) shows drive C and the new drive D mounted. I am using KVM.

  • Are the permissions set up correctly?
  • Were you able to mount the new file system on /var?
  • Did you update /etc/fstab?
  • Did you ensure that all SELinux configurations were restored?
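For reference, the usual /var migration looks roughly like this (all root-only, so shown as comments; /dev/sdb1 is an assumed device name). The fstab entry is the piece most often forgotten, so it’s built here against a scratch copy:

```shell
# From single-user / rescue mode:
#   mount /dev/sdb1 /mnt/newvar
#   cp -a /var/. /mnt/newvar/        # copy preserving owners and modes
#   mv /var /var.old && mkdir /var
#   mount /dev/sdb1 /var
#   restorecon -R /var               # relabel if SELinux is enforcing
#   findmnt /var                     # sanity check after reboot

# The /etc/fstab entry (scratch copy -- append it to the real file as root):
fstab=$(mktemp)
printf '/dev/sdb1  /var  ext4  defaults  0 2\n' >> "$fstab"
cat "$fstab"
```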

I did it following https://linuxconfig.org/how-to-move-var-directory-to-another-partition


It is very hard to tell whether the article you linked to was the correct approach for your specific situation. But generally speaking, that article does show one good approach, yes.

So now the question is: which parts of your system don’t work after the change? I see a system that is booting “mostly fine”, but which has some issues resolving hostnames. The problem is that if a host can’t resolve its own hostname, it can at times lead to weird situations.

When you boot the VM into single-user mode, can you access all the partitions?

I cannot access them. The system hangs at what I showed in screen1 :frowning: I did not change the hostname; I did it just as in the tutorial above. I’ve lost my nerve. Is there any other solution to put the logs on the HDD, maybe the whole virtual machine on the HDD?

You can always reboot a system, whether it’s hanging or not. You can then boot it into single-user mode from the boot loader.

I see that it’s a VM running in KVM/QEMU. Does that mean your physical host system has both an SSD and an HDD? And does that mean you actually made the disk images on separate drives? I mean, you could have always just moved the whole VM onto the HDD. That way you wouldn’t have had any issues like this.

To get a better view of all this: is this a testbed system you’re building for yourself? Or are you actually planning on running production on this thing?

Thanks for the help, friends. I installed Proxmox on the SSD, on it I created a container for the HDD, and on this HDD I put the virtual machine. Everything works beautifully. I will tell you that KVM does not work very well; Proxmox is much, much better.