Outgoing Traffic increased after activating WinFileBeat

After I added the import of the DNS logs (Windows DNS debug log), the amount of outgoing traffic increased from less than 0.5 GB per day to more than 13 GB per day, even though the text-based source log is only about 280 MB. So where does this traffic come from?
One possibility would be that winfilebeats is transmitting the whole logfile on each transfer, but how can I troubleshoot this? By the way, the logs appear only once, though they might be filtered somehow/somewhere (in case of duplicate submissions).
Is there any possibility to find out what’s actually written to the database?

Forgot to add my sidecar config:

# Needed for Graylog
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

output.logstash:
  hosts: [""]
path:
  data: C:\Program Files\Graylog\sidecar\cache\filebeat\data
  logs: C:\Program Files\Graylog\sidecar\logs
tags:
  - windowsDNS
filebeat.inputs:
  - type: log
    paths:
      - c:\dns-log\dns-outgoing-log.txt
    processors:
      - drop_event.when.or:
          - contains.message: (4)arpa(0)

Please use the </> button on the editor to format your post properly, right now the contents of your pasted file are… unreadable.

As far as de-duplication goes, there’s no such thing. Graylog accepts whatever you give it; even if you repeat the same message 2345234 times, it’ll just store it. Are you 100% sure it’s winlogbeat that is causing the traffic? No other services or recently deployed changes that might cause it?

I’m not at all up to date on winlogbeat (we only use *nix at my place of work), so I haven’t got much to offer except maybe turning winlogbeat off and seeing if the traffic goes down.

Thanks for the hint with the </>-button – looks better now.
I’m very sure that winfilebeats is somehow causing the problem, simply because it’s the only thing I changed. But you are right, to be 100% sure, I’ve just deactivated it.

OK, stopping filebeat stopped the massive increase of traffic. In detail, the growth was just 0.09 GB in about 4 hours. So it has to be some issue with filebeat.

One issue could be that the imported logfiles do not contain any timestamp data. Does filebeat somehow depend on such markers inside the logfile?

No, filebeat actually keeps track of the offset inside the file at which it was reading, so it won’t send the same file more than once. Config-wise (going by your earlier post) I’d say that it’s legitimate traffic, if it’s only sending new entries. See if you can tail the file and figure out whether it is indeed growing at that rate or not.
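For reference, the offset state mentioned above lives in filebeat’s “registry”, which is kept under the configured data path (with the sidecar config above, that would be below C:\Program Files\Graylog\sidecar\cache\filebeat\data). A minimal sketch, assuming Filebeat 7.x option names (older releases use `filebeat.registry_file` instead):

```yaml
# Filebeat persists the read offset of every harvested file in its registry.
# This is the default location; it is shown here only to make the mechanism
# explicit -- normally you do not need to set it at all.
filebeat.registry.path: ${path.data}/registry
```

If that registry file gets deleted or can’t be written, filebeat loses its offsets and starts the files from the beginning.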

If filebeat can’t write the file where it tracks the position (its registry), or you delete that file, it will resend the full log file.
And if an application renames the file (e.g. logrotate at night…), filebeat may recognise it as an unsent file, so it will send it again.
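If rotation or renaming turns out to be the trigger, the `log` input has standard options to control how renamed or aged-out files are handled. A hedged sketch (these are documented Filebeat options; the right values depend on your rotation scheme):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - c:\dns-log\dns-outgoing-log.txt
    # Stop harvesting a file once it has been renamed (e.g. by rotation),
    # instead of following it under its new name.
    close_renamed: true
    # Ignore files whose last modification is older than this, so old
    # rotated copies are not picked up again.
    ignore_older: 24h
    # Drop registry entries for files that no longer exist on disk.
    clean_removed: true
```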

Is the 280 MB for one day? If the DNS server rotates the log, it’s possible that it produces 13 GB/day but only stores the last 280 MB.
Check the first and the last line of the file.
For debugging, pick 3-5 unique log messages and search for them in Graylog. If you find one of them multiple times, you know you received it more than once.

@macko003 Good hint! The max. size of the logfile is configured as 500 MB. I’ll keep an eye on it.

One more thing: I configured a dedicated input for the DNS log. The throughput metrics are showing 82 MB, but the outgoing traffic increased by more than one GB in the same timeframe. Very strange…

A peculiarity of the DNS logs is that there is always a blank line between log lines. I tried to “fix” this using the following multiline configuration, but it made no difference:

  - type: log
    multiline.pattern: '^[0-9]'
    multiline.negate: true
    multiline.match: after
    paths:
      - c:\dns-log\dns-outgoing-log.txt
    processors:
      - drop_event.when.or:
          - contains.message: (4)arpa(0)
          - contains.message: (3)sqm(9)microsoft(3)com(0)
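As an alternative to the multiline workaround, the blank lines could simply be dropped on the shipper side. A sketch using Filebeat’s standard `exclude_lines` option (a list of regular expressions; `'^$'` matches empty lines):

```yaml
  - type: log
    paths:
      - c:\dns-log\dns-outgoing-log.txt
    # Drop completely empty lines before they are shipped.
    exclude_lines: ['^$']
```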

@macko003: Thank you for the hint about the size of the logfile. I had completely overlooked that it rotates several times per day (I did not expect such a high volume of logs :wink: ).
At least some hosts are causing a very high number of queries for sqm.microsoft.com, which completely floods the log. As mentioned in my previous post, I excluded them in filebeat.
Now everything is fine.
Thanks for your assistance.

