Graylog 4.2 - Enterprise License Violation [Resolved]

I’ve been running Graylog with the free Enterprise license, and my daily ingestion hovers around 130-180 MB, well below the 5 GB threshold.
Last Saturday (Dec 04), I noticed that the Graylog interface started reporting a license error:
*"Graylog Enterprise License Violation

  • At least one term of your Graylog Enterprise license has been violated. Go to the Licenses page for more information or contact your Graylog account manager."*

I spent the weekend reading through previous posts in this community about the same or similar errors, and followed the suggestions there to troubleshoot. Those steps pointed my troubleshooting in the right direction and I was eventually able to resolve the issue, so I thought I’d share my findings here in case someone else runs into it.
I might state a few things incorrectly, so mods / other contributors - feel free to correct my understanding wherever needed.

Environment:

  • Graylog 4.2, running in a Docker container
  • Free Enterprise License
  • DNS lookups go through pi-hole
  • Internet is direct, and NOT proxied

As you can see in the image, data ingestion was, as I said, well below the 5 GB mark.
So essentially, the license violation arose from

  • either connectivity failures (very likely)
  • or a missing root/intermediate certificate in the Java keystore (unlikely, since I’m not using TLS)
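For completeness, if the keystore had been the culprit, the fix would look roughly like the sketch below. The alias, certificate file name, and the default `cacerts` path/password are illustrative assumptions, and this only matters when the license check passes through TLS interception - which was not the case here.

```shell
#!/bin/sh
# Sketch: import a root/intermediate CA into the JVM truststore used by
# Graylog's Java process. "corp-root-ca.pem", the alias, and the default
# cacerts password ("changeit") are assumptions for illustration.
CERT=corp-root-ca.pem
if [ -f "$CERT" ]; then
  keytool -importcert -trustcacerts -noprompt \
    -keystore "$JAVA_HOME/lib/security/cacerts" \
    -storepass changeit \
    -alias corp-root-ca \
    -file "$CERT"
else
  echo "no $CERT found; nothing to import"
fi
```

After importing, restart the Graylog service so the JVM picks up the updated truststore.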

From what I inferred from the other posts, the license check here is a two-step process:

  • First, talk to the license server’s domain,

  • Then send some five different data metrics along with the existing license - this exchange is encrypted, so the certs need to be added to the client machine (if TLS is configured)

To verify connectivity, I ran a simple curl against that URL from the machine hosting Graylog, and it successfully reached the server and established an SSL connection.
Now, I’m running Graylog in Docker, which creates its own virtual network interfaces (172.x.x.x / 169.x.x.x) used by the app running inside the container.
But this curl test from the same host uses the host NIC (192.x.x.x) and NOT the Docker interface, so a successful curl does NOT necessarily mean Graylog can reach the same URL from the same host.
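One quick way to see the difference is to run the same lookup from the host and from inside the container - a sketch, assuming the container is named `graylog`; the domain here is a placeholder since the real license endpoint is redacted in this post:

```shell
#!/bin/sh
# Placeholder for the (redacted) license-check domain - not the real endpoint.
LICENSE_HOST="api.graylog.example"

# 1) From the host: resolves via the host NIC (e.g. 192.168.x.x) and the
#    host's configured resolver.
nslookup "$LICENSE_HOST" 2>/dev/null || echo "host lookup failed"

# 2) From inside the Graylog container: resolves via the Docker bridge
#    (e.g. 172.17.x.x), so Pi-hole sees a different source subnet.
if command -v docker >/dev/null 2>&1; then
  docker exec graylog nslookup "$LICENSE_HOST" 2>/dev/null \
    || echo "container lookup failed"
else
  echo "docker not available here"
fi
```

If the first lookup succeeds and the second fails, the upstream resolver is treating the Docker subnet differently - exactly the symptom described below.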

How I found this out: I ran Wireshark and compared the traffic at process startup against the curl traffic. At startup, the DNS queries originated from the Docker interface and were never answered, so the DNS server (Pi-hole, in my case) was dropping DNS queries from the Docker interfaces.

Now I don’t have the Wireshark captures or screenshots, but I can post DNS lookup records to substantiate the findings.

The report was generated to show all lookups to the domain from Dec 02 00:00 hrs to Dec 07 15:00 hrs (times in SGT, GMT+8).

  • Based on the screenshots, the first lookup seen is on Dec 05 1:07 AM, when I started troubleshooting the issue and ran the curl commands
  • It’s a fair assumption that there were NO successful DNS lookups to the domain even though the Graylog processes were running, which then explains the connectivity issue


Here are the two possible solutions:
a) Reconfigure Pi-hole to allow DNS resolution on all interfaces from all origins, which would then allow lookups from the Docker interfaces as well.
The screenshot below shows a successful lookup from the Docker IP.
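On the Pi-hole 5.x line this is the "Permit all origins" option under Settings → DNS, which (key name from memory - verify against your own install) corresponds to roughly this config entry:

```
# /etc/pihole/setupVars.conf
DNSMASQ_LISTENING=all
```

After changing it, reload the resolver with `pihole restartdns`. Be aware this makes Pi-hole answer on every interface and subnet it can see, so only do it on a trusted LAN.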

b) Or bypass Pi-hole for queries originating from Docker - set an exclusion, or point the containers at a different resolver.
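Option (b) can be done per container; for example, a compose file can pin just the Graylog container to a public resolver so its lookups skip Pi-hole entirely (a sketch - the image tag and 8.8.8.8 are placeholders, any reachable resolver works):

```
# docker-compose.yml fragment
services:
  graylog:
    image: graylog/graylog:4.2
    dns:
      - 8.8.8.8
```

The same effect is available on plain `docker run` via the `--dns` flag, or daemon-wide via the `"dns"` key in /etc/docker/daemon.json.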

In summary: when Graylog runs inside a container, the Docker interfaces also need to be accounted for when configuring upstream network devices - DNS filters, proxies, firewalls, etc.

Hope this helps.



Thank you for the post - the details are on point :smiley:


Thanks, appreciate you reading through.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.