Sending NGINX logs securely


I have 2 AWS servers:

Server 1 runs NGINX as a proxy to the application server.
Server 2 runs Graylog 3.0.2.

I am able to connect to Graylog and send the NGINX logs.
I can see the logs perfectly in the Graylog UI.

I want to ask: how secure is this communication?
The application logs are very critical and under no circumstances can they be leaked.

For another logger, for our payment gateway, I am considering GELF, but only if it is secure.

If it is not secure, how can I make this communication secure?

Use TLS for all connections and authenticate via certificates.

Make Graylog use HTTPS to access the information.
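For the web interface/API part, a minimal sketch of the relevant Graylog `server.conf` settings (the certificate and key paths are placeholders you would replace with your own):

```
# Serve the Graylog web interface and REST API over HTTPS
http_enable_tls = true
http_tls_cert_file = /etc/graylog/server/graylog.crt
http_tls_key_file = /etc/graylog/server/graylog.key
# only needed if the private key is password-protected
#http_tls_key_password = secret
```

After changing these settings, the Graylog server has to be restarted for them to take effect.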

Hi Jan,

Thank you for the reply.

I am new to Graylog. Can you please point me to a guide that I can follow for this?

Can someone please help me in this?

I am not using Beats to transfer the logs; they are sent directly from the NGINX logs using the settings in the given link.

NGINX does not natively support sending its logs to a remote site in a secure way.

Then what are my options? Is there any other way to get this done?

Yes, sure.

Write the NGINX log locally and use a collector that is able to use a secure connection.

Another option: use some kind of tunnel (SSH/OpenVPN/IPsec/whatever).
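As a sketch of the SSH-tunnel option (host name and user are placeholders; 5044 is the usual Beats port, any plaintext input port would work the same way):

```shell
# Forward local port 5044 through an encrypted SSH tunnel to the
# Graylog server. The shipper on this host then sends to
# localhost:5044, and the traffic crosses the network encrypted.
ssh -N -L 5044:localhost:5044 user@graylog.example.com
```

The `-N` flag opens the tunnel without running a remote command; you would typically run this under a supervisor so the tunnel survives reconnects.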

Can you please suggest some collectors?

Personally, I would use Filebeat to collect any kind of file from a host and transfer it over to Graylog.
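To make that transfer secure, a minimal sketch of a Filebeat output with TLS enabled, assuming a Beats input on the Graylog side that has TLS configured (host and certificate paths are placeholders):

```yaml
output.logstash:
  hosts: ["graylog.example.com:5044"]
  # trust the CA that signed the Graylog input's certificate
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
  # optional client certificate, for mutual (certificate) authentication
  ssl.certificate: "/etc/filebeat/client.crt"
  ssl.key: "/etc/filebeat/client.key"
```

The Beats input in Graylog has matching TLS options where you enable TLS and point it at the server certificate and key.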

But Filebeat transfers files from the local system in which the application is writing the logs, whereas I want to send the logs directly to Graylog.


You might want to briefly describe what your goal is, and also add what you have already taken into account and why you discarded those solutions. That will help you get a good answer from someone here.

Your first question was how to secure the transfer directly from NGINX. As that is not possible, because NGINX does not have this ability, you could either make a feature request at NGINX or use some other way of shipping the logfiles.

As you pointed out, you want a secure connection; my personal suggestion is Filebeat to read the NGINX log and transfer it over to Graylog. Now your comment looks like you do not want to have a local logfile. Is that right?

Do you have any other requirements for the shipper?

Hi Jan,

Yes, you understood correctly. I don't want to have a local logfile.
Our servers will be configured to autoscale, hence each node will have its own NGINX running.

As nodes will be created and destroyed based on threshold limits, we may not get time to copy the logfile before the node gets destroyed.

That is why I want to use Graylog for my NGINX logs and for my application logs as well (the application is currently in Django).
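For the Django side, this is the kind of setup I have in mind — a sketch assuming the third-party `graypy` library and a TLS-enabled GELF TCP input on the Graylog server (host and port are placeholders):

```python
# Django settings.py fragment: route application logs to Graylog
# over GELF with TLS. Assumes `pip install graypy`; the handler
# class is given as a dotted path, so Django imports it lazily.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "graylog": {
            "class": "graypy.GELFTLSHandler",  # GELF over TCP + TLS
            "host": "graylog.example.com",     # placeholder Graylog host
            "port": 12201,                     # must match the Graylog input port
        },
    },
    "root": {
        "handlers": ["graylog"],
        "level": "INFO",
    },
}
```

Because the logs go straight over the network, nothing would be written to the node's local disk.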

Please suggest a solution for this.

Thank you.

If you use a shipper like Filebeat, the logs are instantly ingested from the local disk into Graylog, or at least it can be configured like that.

I do not see any problem with that, but as you did not share how you create your autoscaling group, no additional help is possible.

I am not sure about the autoscaling method because I am not handling it.

I can get the answers for you from my DevOps team. Please tell me your questions.


I talked to DevOps and we want to try the Filebeat setup to ship the NGINX logs. Can you please guide me on how I can do that, or point me to a tutorial?

There are not many tutorials for Graylog 3, and the documentation is confusing me.

I followed the steps here up to the step-by-step guide and gave the path to the NGINX logs.

This is my filebeat-cnf configuration:

# Needed for Graylog
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

filebeat.inputs:
- input_type: log
  paths:
    - /var/log/nginx/*.log
  type: log

output.logstash:
  hosts: ["<internal AWS IP>:5044"]

path:
  data: /var/lib/graylog-sidecar/collectors/filebeat/data
  logs: /var/lib/graylog-sidecar/collectors/filebeat/log

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.