UniFi Dream Machine Pro VPN log

I know very little about logs and how to sort/filter them or anything like that. I have Graylog set up and my UDMP forwards logs to it; I have an input set up, pushed into a stream and all that, and I can search the logs for the user that logs in, the connection being established, L2TP, that type of thing…
I need some way to pull connect/disconnect events and the length of each connection, if that's possible.

Some example messages
udm-1.12.22.4309 charon: 06[IKE] IKE_SA lns-l2tp-server[20] established between x.x.185.124[x.1x.185.124]...x.x.149.73[192.168.2.240]

(This is from the RADIUS server on the UDMP; when I go live this will come from NPS instead.)
message
4105ace6e6d5,udm-1.12.22.4309 radiusd[2170]: (1) Login OK: [user] (from client client-client-600ee7ac4cc298050bffb2e8 port 100000
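
(For reference, a pipeline rule that pulls the username out of that Login OK line might look something like the sketch below - assuming the raw text lands in the message field and keeps the bracketed [user] format shown above; the vpn_user field name is just an example.)

rule "extract RADIUS login user"
when
  contains(to_string($message.message), "Login OK")
then
  // the username sits in square brackets after "Login OK:" in the sample above
  let u = regex("Login OK: \\[([^\\]]+)\\]", to_string($message.message));
  set_field("vpn_user", u["0"]);
end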

When I use Teleport it shows when a user connects/disconnects - but I still need a way to format that data into something to show metrics or something along those lines. (I won't be using Teleport for my users, as we utilize MFA.)
4105ace6e6d5,udm-1.12.22.4309 ubios-udapi-server: res-vpn-teleport: Starting Teleport daemon
4105ace6e6d5,udm-1.12.22.4309 ubios-udapi-server: res-vpn-teleport: Stopping Teleport daemon
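
(Similarly, a rule that tags those Teleport daemon messages as connect/disconnect events could look roughly like this - a sketch only, assuming the raw text is in the message field; teleport_event is an illustrative field name.)

rule "tag Teleport daemon events"
when
  contains(to_string($message.message), "res-vpn-teleport")
then
  // "Starting"/"Stopping" are the verbs in the two samples above
  let ev = regex("(Starting|Stopping) Teleport daemon", to_string($message.message));
  set_field("teleport_event", ev["0"]);
end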

Hello,

I’m unfamiliar with the UniFi Dream Machine. After reading this post, you may want to create a field and put the data you need in there. You can do this with a pipeline/extractor or by using a different input such as GELF.

FYI, when posting logs, configuration, and/or commands, please use the markdown in the text box. This helps others help you. You can find more information on this here.

I have several Grok pattern extractors entered on my input - how do they get applied? Or how do you use them - is it automatic? I assume it is.

As far as GELF goes - right now I'm set to Syslog UDP; if I change it to GELF UDP, is there anything else I need to do other than that?

Hello

As soon as a message arrives on that input, the extractors configured on it are applied.

You can try GELF to see if it works for you, but from what I'm seeing in this post the data may not be in GELF format. I could be wrong.

I'm not sure exactly what you want to see - I'm confused about what you want to pull. Right now I'm assuming you want this…

Is that correct? If so, I did a mockup with a regex extractor.

If you could demo what you want that would be great.

I need to see when a user connects - and the length of time they stay connected.
Inside SSH, if I run ipsec statusall, below is what I see (I hope I did it right this time). I'm assuming this information is written to the syslog.

Status of IKE charon daemon (strongSwan 5.7.1, Linux 4.19.152-al-linux-v10.2.0-v1.12.22.4309-4105ace, aarch64):
  uptime: 40 minutes, since Jul 11 13:40:12 2022
  malloc: sbrk 2678784, mmap 0, used 639280, free 2039504
  worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 2
  loaded plugins: charon pkcs11 aes des rc2 sha2 sha1 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey pem openssl fips-prf curve25519 xcbc cmac hmac attr kernel-netlink resolve socket-default stroke vici updown xauth-generic led counters
Listening IP addresses:
 x.x.x.x
  172.16.1.1
  10.10.10.1
  10.1.0.1
  172.16.5.1
  10.255.255.0
Connections:
lns-l2tp-server:  x.x.x.x...%any  IKEv1, dpddelay=15s
lns-l2tp-server:   local:  [x.x.x.x] uses pre-shared key authentication
lns-l2tp-server:   remote: uses pre-shared key authentication
lns-l2tp-server:   child:  0.0.0.0/0 === 0.0.0.0/0 TRANSPORT, dpdaction=clear
Security Associations (1 up, 0 connecting):
lns-l2tp-server[1]: ESTABLISHED 39 minutes ago, x.x.x.x[x.x.x.x]...x.x.x.x[192.168.2.240]
lns-l2tp-server[1]: IKEv1 SPIs: 7de99e14ecf96ab4_i 093d100be5d48307_r*, pre-shared key reauthentication in 9 minutes
lns-l2tp-server[1]: IKE proposal: AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048
lns-l2tp-server{1}:  INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: cac588c9_i 8e9c8f56_o
lns-l2tp-server{1}:  AES_CBC_256/HMAC_SHA1_96, 19352862 bytes_i (75274 pkts, 0s ago), 76133752 bytes_o (97874 pkts, 3s ago), rekeying in 3 minutes
lns-l2tp-server{1}:   x.x.x.x/32[udp/1701] === x.x.x.x/32[udp/1701]

Looking at the system logs - I'm guessing here about which message is the connect and which is the disconnect.
Somehow I need to turn these into metrics or something of that nature to show what I need - a spreadsheet, a table, something.

4105ace6e6d5,udm-1.12.22.4309 charon: 07[IKE] IKE_SA lns-l2tp-server[3] established between x.x.x.x[x.x.x.x]...x.x.x.x[192.168.2.240]

4105ace6e6d5,udm-1.12.22.4309 charon: 16[IKE] deleting IKE_SA lns-l2tp-server[1] between x.x.x.x[x.x.x.x]...x.x.x.x[192.168.2.240]

4105ace6e6d5,udm-1.12.22.4309 charon: 16[IKE] closing CHILD_SA lns-l2tp-server{1} with SPIs cac588c9_i (22399477 bytes) 8e9c8f56_o (83504173 bytes) and TS x.x.x.x/32[udp/1701] === x.x.x.x/32[udp/1701]
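
(As an aside, that closing CHILD_SA line also carries per-session byte counts that could feed the metrics - a hedged sketch, assuming the _i/_o suffixes mark inbound/outbound as in the sample; vpn_bytes_in/vpn_bytes_out are illustrative names.)

rule "extract VPN session byte counts"
when
  contains(to_string($message.message), "closing CHILD_SA")
then
  // the first "(N bytes)" follows the _i SPI (inbound), the second follows _o (outbound)
  let b = regex("_i \\((\\d+) bytes\\) .*_o \\((\\d+) bytes\\)", to_string($message.message));
  set_field("vpn_bytes_in", to_long(b["0"]));
  set_field("vpn_bytes_out", to_long(b["1"]));
end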


Hello

Question: does that message arrive like that, all under the message field? Or does it come in as single log lines?

I'm not sure, but I do know you can create specific fields for IP addresses and ports using either extractors or pipelines. Once the fields are created, you should be able to create a widget or alerts from there.

This is one message - and yes, this is a direct copy from Graylog of the UDM's syslog, minus the public IPs that I edited out.

4105ace6e6d5,udm-1.12.22.4309 charon: 07[IKE] IKE_SA lns-l2tp-server[3] established between x.x.x.x[x.x.x.x]...x.x.x.x[192.168.2.240]

I think I finally have a better answer for you - I need to extract four fields from the established/deleting tunnel messages I posted earlier:
date, time, established/deleting (for connect/disconnect), and source address.

I believe these are the two entries I’m looking for - I’m trying to verify that currently

4105ace6e6d5,udm-1.12.22.4309 charon: 07[IKE] IKE_SA lns-l2tp-server[3] established between x.x.x.x[x.x.x.x]...x.x.x.x[192.168.2.240]

4105ace6e6d5,udm-1.12.22.4309 charon: 16[IKE] deleting IKE_SA lns-l2tp-server[1] between x.x.x.x[x.x.x.x]...x.x.x.x[192.168.2.240]

Hello,

You can use a pipeline to extract the word(s) "deleting" or "established", or pretty much whatever you want to have displayed/alerted on.

This is an example:

NOTE: I uploaded your messages above into Graylog.

Pipeline

rule "Extract 2  fields"
when
  has_field("message") 
then
  let vpn_established = regex("(established)", to_string($message.message));
  let vpn_disconnect = regex("(deleting)", to_string($message.message));
  set_field("vpn_status", vpn_established["0"]);
  set_field("vpn_status", vpn_disconnect["0"]);
  debug (vpn_established);
  debug (vpn_disconnect);
end

Results

Widget

Using the pipeline I created above, I sent a few messages to Graylog for testing. Once the pipeline is working as expected, you can remove the debug() calls.

Widget

That's absolutely wonderful - we're getting somewhere here! It took me a bit to figure out how to add it and then get it to show up as a widget. What I did was add it from Sources | Fields (on the left menu), then select the vpn_status field you made there - which, now that I went back and deleted it, is in fact a widget, but mine looks way different than yours.

I started to write another pipeline when I realized - if you have set_field on the message field - my brain is running in circles trying to make this work.
How do I associate the timestamp with each of those events? And how do I get the second IP listed, 10.10.10.5, as the source address (the device connecting to the VPN)?
So essentially: this IP connected/disconnected at this time, and somehow from there I can extrapolate how long they were connected - what a mess.

4105ace6e6d5,udm-1.12.22.4309 charon: 15[IKE] IKE_SA lns-l2tp-server[130] established between 54.182.181.136[54.182.181.136]...10.10.10.5[192.168.8.137]
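
(A sketch of that extraction, building on the rule above - assuming the raw text is in the message field and that the address after the "..." is the connecting device, as in this sample; vpn_peer_ip is an illustrative name.)

rule "extract VPN peer address"
when
  contains(to_string($message.message), "IKE_SA lns-l2tp-server")
then
  let msg = to_string($message.message);
  let status = regex("(established|deleting)", msg);
  set_field("vpn_status", status["0"]);
  // the IPv4 address right after "..." is the connecting device in the sample above
  let peer = regex("\\.\\.\\.(\\d{1,3}(?:\\.\\d{1,3}){3})\\[", msg);
  set_field("vpn_peer_ip", peer["0"]);
end

(Pairing vpn_status and vpn_peer_ip with each message's built-in timestamp field gives the connect/disconnect pairs needed to work out connection length.)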

and thank you a ton! This is very helpful, and I really appreciate you assisting me (holding my hand basically here)


Hello,

What you need is the fields. Or create a lookup table and attach it to the input.

What I don't see is a timestamp in those messages. That's bad, but Elasticsearch puts a timestamp on each message for you when it indexes them. You could use that field.

All the how-tos are in the Graylog docs.

My best advice would be to try it out; if it doesn't work, post what you have tried here (screenshots, commands, etc.) so we can see what you did and give suggestions to help. I think I gave you a running start. To be honest, this forum holds a lot of solutions.
Graylog does have event correlation, but it's in the paid version.
Actually, this reminds me - someone posted here that they have a VPN dashboard.

Well - good news… I'm getting somewhere after hours of banging my head on the desk! First I wrote a pipeline to add a new corrected time. Then I started thinking about what you said about the time being applied to the message and realized the timezone must be off elsewhere - so I changed the host OS timezone to CST instead of UTC.
I didn't realize that fixed it, so I applied the pipeline, which worked after a little trial and error… then I realized I don't even need it, as I already have the time and the field I need. At least I learned something about creating these… still a ways to go yet, though!
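
(For anyone curious, a corrected-time rule along those lines might look like this - a sketch, assuming CST maps to America/Chicago and using an illustrative local_time field name.)

rule "add corrected local time"
when
  has_field("timestamp")
then
  // render the built-in UTC timestamp in the local zone; adjust the zone as needed
  let local = format_date(to_date($message.timestamp), "yyyy-MM-dd HH:mm:ss", "America/Chicago");
  set_field("local_time", local);
end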

Nice, way to work the issue :+1:

How were you able to input my log message and apply the pipeline you created? The server I normally work off of is down - I have another one up, and I have my pipeline and the log message, but I'm not sure how to test the one against the other until the first server is back up, unless I can input both?

Hello,

On my Graylog Docker instance I used NXLog:
1. Create a file in /var/log called test.log.
2. Create an input in the NXLog configuration file.

<Input old_log_file>
  Module        im_file
  File          "/var/log/test.log"
  # SavePos saves the read position after NXLog starts or restarts.
  # Set it to FALSE to scan the whole log file; set it back when done sending, if need be.
  SavePos       TRUE
  # If ReadFromLast is FALSE, the module will read all logs from the file.
  ReadFromLast  TRUE
  PollInterval  1
</Input>

3. Save & exit.
4. Copy and paste the logs into the test file.
5. Restart the NXLog service:

systemctl restart nxlog

Done.

As for pipelines I used these instructions.

I'm very excited - your pipeline helped me get going on this, but I completely changed how it was done in the end. At first I was going to create separate rules to process each thing, but after researching and learning more about extracting this data I realized I could do it all in one go, so I created a Grok pattern to extract all of it.
Once I did that, I even added GeoIP location to my pipeline, and I have all the data I needed extracted. Now to figure out how to put that into some sort of report… one step at a time, but I'm getting there!
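
(A condensed sketch of that approach, keyed to the "established" form of the charon message - the Grok pattern here is illustrative, and the "geoip" lookup table is an assumption, e.g. one backed by a MaxMind data adapter.)

rule "grok VPN event with GeoIP"
when
  contains(to_string($message.message), "established between")
then
  // named captures become fields: sa_id, vpn_status, gateway_ip, peer_ip
  let m = grok("IKE_SA lns-l2tp-server\\[%{INT:sa_id}\\] %{WORD:vpn_status} between %{IPV4:gateway_ip}\\[%{IPV4}\\]\\.\\.\\.%{IPV4:peer_ip}", to_string($message.message), true);
  set_fields(m);
  // "geoip" is a hypothetical lookup table returning location data for the peer IP
  set_field("peer_geo", lookup_value("geoip", to_string(m["peer_ip"])));
end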


@d2freak82

That’s great :+1:
