I have a small issue I hope someone can help with.
On my Unraid device, I have Graylog set up as a syslog server receiving data from numerous devices.
Graylog then passes this to Elasticsearch, which is where the data is stored.
The issue is that I don't want the Elasticsearch Docker container filling up with logs.
My QNAP can also function as a syslog server, so I would like Graylog to read the received log files stored in folders on the QNAP.
Is it possible to achieve this, or is there another solution?
So what I understand is that you have Graylog, Elasticsearch, and MongoDB running as Docker containers, is this correct? Or is it just Elasticsearch?
Given what you stated, I would look into the Elasticsearch config file (elasticsearch.yml) to relocate your indices, as shown below:
[root@graylog graylog-server]# cat /etc/elasticsearch/elasticsearch.yml | egrep -v "^\s*(#|$)"
cluster.name: graylog
path.data: /var/lib/elasticsearch    # <--- this is where the data (i.e. the indices) is located
path.logs: /var/log/elasticsearch
network.host: 10.200.6.70
http.port: 9200
action.auto_create_index: false
discovery.type: single-node
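As a side note, if you want to see how much space the indices are actually using before deciding anything, a quick check like this should work (it assumes the default path.data above and that Elasticsearch is listening on localhost:9200; adjust if yours differs):

du -sh /var/lib/elasticsearch
curl -s 'localhost:9200/_cat/indices?v&h=index,store.size&s=store.size:desc'

The first command shows the size of the data directory on disk, the second lists each index with its store size, largest first.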
Graylog itself does not hold logs/messages; in a way it is more like a web UI. MongoDB holds all the metadata.
Example:
How to change the data path:
Double-check that you have a recent snapshot of all indices on the node.
Temporarily stop shard relocation using:
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.enable" : "none"
  }
}'
Stop the Elasticsearch node.
Move the entire data directory to its new location (see the sketch after this list).
Modify path.data in elasticsearch.yml.
Start the Elasticsearch node.
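To make the stop/move/edit/start steps concrete, here is a rough shell sketch. The old/new paths and the systemd service name are assumptions for illustration, so adjust them to your own setup (for a Docker install you would stop the container and move the mapped volume instead):

# assumed old and new locations, change these to match your system
OLD=/var/lib/elasticsearch
NEW=/mnt/bigdisk/elasticsearch

sudo systemctl stop elasticsearch                      # stop the node
sudo mv "$OLD" "$NEW"                                  # move the entire data directory
sudo chown -R elasticsearch:elasticsearch "$NEW"       # keep file ownership intact
# now point path.data in /etc/elasticsearch/elasticsearch.yml at the new location
sudo systemctl start elasticsearch                     # start the node again

# once the node is back up, re-enable shard allocation (null removes the transient setting)
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : { "cluster.routing.allocation.enable" : null }
}'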
Please be extra careful when taking the above steps and make sure they fit your system, as misusing them can lead to loss of production data.
It is recommended to use the RPM or Debian packages to avoid this and other installation issues; by default they store data separately from the program files.