Decommission a server and disk journal?

I’m looking to replace a working Graylog host with a new instance type. What is the best way to decommission a host? My primary concern is draining the disk journal. I can easily remove the host from the load balancer, but the journal never reaches zero. Are these messages from Graylog itself?

Thanks for any advice,

Hello && Welcome

Could you explain this in greater detail? I was going to say just delete it, but you probably want to keep some logs.
Are you looking at a full upgrade, or creating a new Graylog server?
If you're going to create a new host, were you thinking about transporting the old logs to the new server?

You could stop your INPUTs and let Graylog finish processing the logs in the journal.

“The Graylog journal is the component sitting in front of all message processing that writes all incoming messages to disk. Graylog then reads messages from this journal to parse, process, and store them. If anything in the Graylog processing chain, from input parsing through extractors, stream matching, and pipeline stages to Elasticsearch, is too slow, messages will start to queue up in the journal.”
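Since the journal backlog is the thing you're waiting on, you can poll it instead of watching the UI. Here's a minimal sketch; it assumes the node-local REST endpoint `/api/system/journal` and an `uncommitted_journal_entries` field in its response, so verify both against your version's API browser before relying on it. The `fetch` parameter is injectable so the example runs offline:

```python
# Sketch: poll one node's disk journal backlog until it drains.
# ASSUMPTIONS: the node-local endpoint /api/system/journal exists and its
# JSON response contains "uncommitted_journal_entries" -- check your
# Graylog version's API browser.
import json
import time
import urllib.request


def journal_backlog(base_url, fetch=None):
    """Return the number of uncommitted journal entries on one node."""
    fetch = fetch or (lambda url: urllib.request.urlopen(url).read())
    info = json.loads(fetch(base_url + "/api/system/journal"))
    return info["uncommitted_journal_entries"]


def wait_until_drained(base_url, poll_seconds=5, fetch=None):
    """Block until the node's journal backlog reaches zero."""
    backlog = journal_backlog(base_url, fetch=fetch)
    while backlog > 0:
        print(f"journal backlog: {backlog} messages")
        time.sleep(poll_seconds)
        backlog = journal_backlog(base_url, fetch=fetch)


# Offline usage example with a canned response instead of a live node:
canned = json.dumps({"uncommitted_journal_entries": 0}).encode()
print(journal_backlog("http://node1:9000", fetch=lambda url: canned))  # 0
```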

Hope that helps

Intent is to build a new node on a current-generation AWS instance type that has SSD instance storage. Beyond the better pricing and better networking, I figured I could use the SSD as a faster journal too. The old hosts would eventually be deleted. The root question is: how does one gracefully delete a node with assurance that all data has been written to Elasticsearch?

With a host removed from the ELB, it's still constantly receiving and processing messages. I then thought it might be the AWS CloudTrail plugin, which does outbound “fetches” of CloudTrail logs, but even with that turned off globally, this host's journal still appears to be taking in messages.

It would seem that stopping an input on one node, or specifically draining a node's buffers and journal, are ideas that have come up before but have not yet been implemented.
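It may be worth checking whether your version's node-local API already allows the first of those. Graylog exposes input *states* per node, and sending a DELETE to that state on one node's API should stop the input there without touching other nodes. This is a hedged sketch only: the endpoint path `/api/system/inputstates/<id>` and its per-node behavior are assumptions to verify in your version's API browser. The `opener` parameter is injectable so the example runs offline:

```python
# Hedged sketch: stop one input on a single node via that node's local API.
# ASSUMPTION: DELETE /api/system/inputstates/<input-id> stops the input on
# this node only -- confirm in your Graylog version's API browser.
import urllib.request


def stop_input_on_node(node_api_url, input_id, opener=None):
    """Send a DELETE for the input state to one node's local API."""
    opener = opener or urllib.request.urlopen
    req = urllib.request.Request(
        f"{node_api_url}/api/system/inputstates/{input_id}",
        method="DELETE",
    )
    return opener(req)


# Offline usage example with a recording fake opener ("abc123" is a
# hypothetical input id):
seen = []
stop_input_on_node(
    "http://node1:9000", "abc123",
    opener=lambda req: seen.append((req.get_method(), req.full_url)),
)
print(seen[0])  # ('DELETE', 'http://node1:9000/api/system/inputstates/abc123')
```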

From the architectural definition of the journal, I'd think it shouldn't see any messages at all, since all my buffers are running near zero (at night). Yet I see ~100 messages on average in one segment all the time.


I stopped my INPUT.

Then I made sure there were no unprocessed messages here.

Then I clicked Graceful shutdown.


It might be a little overkill, but it works.
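The three steps above can also be scripted against a node's REST API. This is a sketch under stated assumptions: the paths `/api/system/inputstates/<id>`, `/api/system/journal` (with an `uncommitted_journal_entries` count), and `/api/system/shutdown/shutdown` are taken from Graylog's node-local API but should be confirmed in your version's API browser. The `call` function is injectable so the sequencing can be exercised without a live node:

```python
# Sketch: stop inputs -> wait for the journal to drain -> graceful shutdown.
# ASSUMED endpoints: /api/system/inputstates/<id>, /api/system/journal,
# /api/system/shutdown/shutdown -- verify against your Graylog version.
import time


def drain_and_shutdown(base_url, input_ids, call):
    """call(method, path) -> parsed JSON body or None; injectable for testing."""
    for input_id in input_ids:  # 1. stop the inputs on this node
        call("DELETE", f"/api/system/inputstates/{input_id}")
    # 2. wait until the disk journal has been fully processed
    while call("GET", "/api/system/journal")["uncommitted_journal_entries"] > 0:
        time.sleep(5)
    # 3. trigger the same graceful shutdown as the UI button
    call("POST", "/api/system/shutdown/shutdown")


# Offline usage example with a recording fake transport ("abc123" is a
# hypothetical input id):
calls = []


def fake_call(method, path):
    calls.append((method, path))
    if path == "/api/system/journal":
        return {"uncommitted_journal_entries": 0}


drain_and_shutdown("http://node1:9000", ["abc123"], fake_call)
print(calls[-1])  # ('POST', '/api/system/shutdown/shutdown')
```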

If all your inputs are stopped, I wouldn't think you would see messages in the journal. You could check the newly processed messages to see where they're coming from.

@gsmith Thanks for the reply and the effort to include screenshots.

Stopping inputs will be global across all nodes, right? I'd be relying on log senders to buffer/spool, which is part of our design, but it seems heavy-handed for the removal of one node.

Also, is it possible to query by the node that processed a message? I looked for queries but could only find the known ability to search on the input identifier. I figure it's a pretty meta-Graylog concern and not really useful to most real-world use cases.


Yes; if not, then you have another issue.

Not sure what you mean. If you have a cluster, the nodes should work as ONE. You can navigate to “System → Nodes”, where all your Graylog nodes should be listed, and click the node you want to see. I think what you wanted will be shown there. Maybe click the “Details” button.
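On the earlier question about querying by processing node: depending on your Graylog version, messages may carry a `gl2_source_node` field alongside the `gl2_source_input` field you already found. Treat that field name as an assumption and check it against a stored message first; if it's there, a search like the following (with a hypothetical node ID) would scope results to one node:

```
gl2_source_node:3fcc1b2a-hypothetical-node-id
```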

hope that helps

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.