GRE monitoring works but hides REAL IP addresses


Daniel

Feb 27, 2016, 7:30:03 AM2/27/16
to security-onion
Hi,

I'm using a GRE tunnel to monitor a mirror port. This is very similar to Cisco ERSPAN, but it is just a normal GRE connection from an HP 5900 switch (Comware 7).

Event detection is working fine; the problem is that the security tools show only the GRE tunnel endpoint IP addresses, not the real IP addresses of the monitored traffic.
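To illustrate why this happens: a tool that does not decapsulate GRE parses only the outer IPv4 header, whose addresses are the tunnel endpoints. A minimal sketch (my own illustration, assuming a base 4-byte GRE header with no checksum/key/sequence options, carrying IPv4; the addresses are made up):

```python
import socket
import struct

def ipv4_header(src, dst, proto):
    """Build a minimal 20-byte IPv4 header (no options, zero checksum)."""
    return struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, proto, 0,
                       socket.inet_aton(src), socket.inet_aton(dst))

def outer_and_inner_ips(pkt):
    """Return ((outer_src, outer_dst), (inner_src, inner_dst)) for a
    GRE-in-IPv4 packet with a base 4-byte GRE header carrying IPv4."""
    def addrs(ip):
        src = socket.inet_ntoa(ip[12:16])
        dst = socket.inet_ntoa(ip[16:20])
        return src, dst, (ip[0] & 0x0F) * 4  # IHL field gives header length
    o_src, o_dst, ihl = addrs(pkt)
    inner = pkt[ihl + 4:]  # skip outer IP header plus base GRE header
    i_src, i_dst, _ = addrs(inner)
    return (o_src, o_dst), (i_src, i_dst)

# GRE (IP protocol 47) from the switch to the sensor, carrying IPv4 (0x0800):
pkt = (ipv4_header("10.1.1.200", "10.1.1.100", 47)
       + struct.pack("!HH", 0, 0x0800)
       + ipv4_header("192.168.5.10", "203.0.113.7", 6))
outer, inner = outer_and_inner_ips(pkt)
# outer = tunnel endpoints; inner = the conversation you actually care about
```

A tool sniffing the raw interface reports only `outer`; decapsulating first (for example on a gretap device) is what exposes the inner header.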

Is there something I can configure to get this working as expected?

Regards, Daniel

steve baker

Feb 28, 2016, 2:54:11 PM2/28/16
to securit...@googlegroups.com

I got this working by creating a gretap device manually and configuring security onion to listen on that device.

It was a little involved to set up the first time but I could write up some of the details later tonight if you would like.

Steve

--
Follow Security Onion on Twitter!
https://twitter.com/securityonion
---
You received this message because you are subscribed to the Google Groups "security-onion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to security-onio...@googlegroups.com.
To post to this group, send email to securit...@googlegroups.com.
Visit this group at https://groups.google.com/group/security-onion.
For more options, visit https://groups.google.com/d/optout.

Pete

Apr 5, 2016, 4:27:51 PM4/5/16
to security-onion
Steve,

I'd like to see what you've done. I'm looking into something similar, and I'm concerned that I won't be able to run multiple Snort or Bro workers on a virtual interface, since PF_RING may expect a physical NIC...

I've found a page (http://brezular.com/2015/05/03/decapsulation-erspan-traffic-with-open-source-tools/) describing how to set up a virtual interface with the decapsulated content, but I have seen conflicting information on whether I can (http://www.ntop.org/pfring_api/pfring_8h.html#ad4bfa3d7c55f3ade36d89723624fe6a8) or cannot (https://github.com/ntop/PF_RING/blob/dev/doc/UsersGuide.pdf middle of page 7) tie PF_RING to virtual devices.

What kind of bandwidth are you handling on your setup? Do you have multiple Snort/Bro processes per interface?

Thanks,
--
Pete

steve baker

Apr 7, 2016, 3:58:26 PM4/7/16
to security-onion
Apologies for the delay in responding; it's been a busy week.

I had some of the same concerns about PF_RING on virtual interfaces when setting things up. As far as I can tell, everything is working correctly with multiple Snort/Bro processes per virtual interface. I am not processing any serious bandwidth with the system yet, just our internet egress taps arriving as an L2GRE tunnel from a Gigamon. Here is what I have configured:

I skipped configuring network interfaces in the setup wizard. This is what my interfaces have been manually configured to look like:

user@sensor1:~$ cat /etc/network/interfaces
...
auto eth3
iface eth3 inet static
address 10.1.1.100
netmask 255.255.255.0

auto mon0
iface mon0 inet manual
pre-up ip link add name $IFACE type gretap local 10.1.1.100 remote 10.1.1.200 dev eth3 key 10
up ip link set dev $IFACE up
down ip link set dev $IFACE down
post-down ip link delete $IFACE

auto mon1
iface mon1 inet manual
pre-up ip link add name $IFACE type gretap local 10.1.1.100 remote 10.1.1.201 dev eth3 key 20
up ip link set dev $IFACE up
down ip link set dev $IFACE down
post-down ip link delete $IFACE
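Before pointing the tools at the gretap interfaces, it's worth confirming they are actually receiving decapsulated traffic. A small sketch of my own (not part of Steve's original setup) that parses RX packet counters out of `/proc/net/dev`:

```python
def rx_packets(proc_net_dev_text):
    """Map interface name -> received packet count, parsed from the
    contents of /proc/net/dev (two header lines, then one line per
    interface: 'name: rx_bytes rx_packets ...')."""
    counts = {}
    for line in proc_net_dev_text.splitlines()[2:]:
        if ":" not in line:
            continue
        name, rest = line.split(":", 1)
        counts[name.strip()] = int(rest.split()[1])  # field 1 = RX packets
    return counts

# Example with made-up sample data; on the sensor you would use
# open("/proc/net/dev").read() and expect mon0/mon1 to keep climbing.
sample = """Inter-|   Receive                  |  Transmit
 face |bytes    packets errs drop |bytes    packets errs drop
  eth3: 9876543 81234 0 0 0 0 0 0 1234 10 0 0 0 0 0 0
  mon0: 4567890 40321 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"""
counts = rx_packets(sample)
```

Running it twice a few seconds apart and comparing the counts is a quick way to see whether the gretap devices are passing traffic at all.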


Snort and Bro are configured to run multiple processes:

user@sensor1:~$ grep IDS_LB_PROCS /etc/nsm/sensor1-mon0/sensor.conf
IDS_LB_PROCS=2

user@sensor1:~$ grep -A6 mon /opt/bro/etc/node.cfg
[sensor1-mon0]
type=worker
host=localhost
interface=mon0
lb_method=pf_ring
lb_procs=2

[sensor1-mon1]
type=worker
host=localhost
interface=mon1
lb_method=pf_ring
lb_procs=2


As far as I can tell the virtual gretap interfaces are getting all the traffic they are supposed to through PF_RING:


user@sensor1:~$ sudo sostat
...

=========================================================================
pf_ring stats
=========================================================================
PF_RING Version : 6.2.0 (unknown)
Total rings : 8

Standard (non DNA/ZC) Options
Ring slots : 65534
Slot version : 16
Capture TX : Yes [RX+TX]
IP Defragment : No
Socket Mode : Standard
Total plugins : 0
Cluster Fragment Queue : 0
Cluster Fragment Discard : 0

/proc/net/pf_ring/7036-mon0.3
Appl. Name : bro-mon0
Tot Packets : 272255
Tot Pkt Lost : 0
TX: Send Errors : 0
Reflect: Fwd Errors: 0
Min Num Slots : 65534
Num Free Slots : 65534

/proc/net/pf_ring/7038-mon0.5
Appl. Name : bro-mon0
Tot Packets : 406104
Tot Pkt Lost : 0
TX: Send Errors : 0
Reflect: Fwd Errors: 0
Min Num Slots : 65534
Num Free Slots : 65534

/proc/net/pf_ring/7044-mon1.4
Appl. Name : bro-mon1
Tot Packets : 573486
Tot Pkt Lost : 0
TX: Send Errors : 0
Reflect: Fwd Errors: 0
Min Num Slots : 65534
Num Free Slots : 65534

/proc/net/pf_ring/7045-mon1.6
Appl. Name : bro-mon1
Tot Packets : 697801
Tot Pkt Lost : 0
TX: Send Errors : 0
Reflect: Fwd Errors: 0
Min Num Slots : 65534
Num Free Slots : 65534

/proc/net/pf_ring/7733-mon0.8
Appl. Name : snort-cluster-55-socket-0
Tot Packets : 269610
Tot Pkt Lost : 0
Reflect: Fwd Errors: 0
Min Num Slots : 65538
Num Free Slots : 65538

/proc/net/pf_ring/7749-mon0.7
Appl. Name : snort-cluster-55-socket-0
Tot Packets : 402114
Tot Pkt Lost : 0
Reflect: Fwd Errors: 0
Min Num Slots : 65538
Num Free Slots : 65513

/proc/net/pf_ring/7942-mon1.10
Appl. Name : snort-cluster-56-socket-0
Tot Packets : 694354
Tot Pkt Lost : 0
Reflect: Fwd Errors: 0
Min Num Slots : 65538
Num Free Slots : 65520

/proc/net/pf_ring/7957-mon1.9
Appl. Name : snort-cluster-56-socket-0
Tot Packets : 570069
Tot Pkt Lost : 0
Reflect: Fwd Errors: 0
Min Num Slots : 65538
Num Free Slots : 65474

...
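The per-ring counters above can be rolled up to spot drops at a glance. A parsing sketch of my own, assuming the `Appl. Name` / `Tot Packets` / `Tot Pkt Lost` line format shown in the sostat output:

```python
def ring_totals(stats_text):
    """Sum 'Tot Packets' and 'Tot Pkt Lost' per application name from
    sostat-style pf_ring ring output."""
    totals = {}
    app = None
    for line in stats_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("Appl. Name"):
            app = stripped.split(":", 1)[1].strip()
            totals.setdefault(app, {"packets": 0, "lost": 0})
        elif stripped.startswith("Tot Packets") and app:
            totals[app]["packets"] += int(stripped.split(":", 1)[1])
        elif stripped.startswith("Tot Pkt Lost") and app:
            totals[app]["lost"] += int(stripped.split(":", 1)[1])
    return totals

# Two of the bro-mon0 rings from the output above:
sample = """
Appl. Name     : bro-mon0
Tot Packets    : 272255
Tot Pkt Lost   : 0
Appl. Name     : bro-mon0
Tot Packets    : 406104
Tot Pkt Lost   : 0
"""
totals = ring_totals(sample)
```

A non-zero `lost` total for any application would be the first sign that the virtual interfaces can't keep up.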

I've been running this for at least a month now and everything seems to be working great.

Let me know if there is anything else you would like to see. There was a lot of experimenting involved in getting this set up.

Steve

armiofone

Sep 13, 2016, 11:02:05 PM9/13/16
to security-onion
Hi Steve,

My /etc/network/interfaces is set up as follows:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.8.192.62
netmask 255.255.255.0
gateway 10.8.192.1
dns-search domain.com
dns-nameservers X.X.X.X X.X.X.X

auto eth1
iface eth1 inet static
address 10.8.210.93
netmask 255.255.255.0

auto mon1
iface mon1 inet manual
pre-up ip link add name $IFACE type gretap local 10.8.210.93 remote 10.8.192.2 dev eth1
up ip link set dev $IFACE up
down ip link set dev $IFACE down
post-down ip link delete $IFACE

auto mon2
iface mon2 inet manual
pre-up ip link add name $IFACE type gretap local 10.8.210.93 remote 10.8.192.3 dev eth1
up ip link set dev $IFACE up
down ip link set dev $IFACE down
post-down ip link delete $IFACE


The GRE traffic is coming from a pair of Cisco Nexus 7K switches using ERSPAN. I can see the traffic counters on eth1 incrementing, but not on mon1 or mon2. When I ran the SO setup I chose mon1 and mon2 as the monitor interfaces. Is there something I'm missing to get the traffic to show up on the mon1/mon2 interfaces?
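One encapsulation detail that may be relevant here (my own suggestion, not something confirmed later in this thread): Cisco ERSPAN is GRE with protocol type 0x88be plus an extra ERSPAN header, while a Linux gretap device decapsulates protocol type 0x6558 (transparent Ethernet bridging), so the two are not interchangeable. A sketch of reading the GRE protocol-type field from a packet captured on eth1, using made-up packet bytes:

```python
import socket
import struct

TEB = 0x6558        # transparent Ethernet bridging: what gretap decapsulates
ERSPAN_II = 0x88BE  # ERSPAN Type II payload, as sent by Nexus ERSPAN

def gre_protocol(pkt):
    """Return the GRE protocol-type field of a GRE-in-IPv4 packet."""
    ihl = (pkt[0] & 0x0F) * 4  # outer IPv4 header length
    _flags, proto = struct.unpack("!HH", pkt[ihl:ihl + 4])
    return proto

def ipv4_gre(src, dst, gre_proto):
    """Minimal IPv4 (protocol 47) header plus a base GRE header, for testing."""
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 24, 0, 0, 64, 47, 0,
                     socket.inet_aton(src), socket.inet_aton(dst))
    return ip + struct.pack("!HH", 0, gre_proto)

# Bytes read from a capture on eth1 (e.g. tcpdump -w) could be checked
# like this; 0x88be would mean a plain gretap device won't decapsulate it:
erspan_pkt = ipv4_gre("10.8.192.2", "10.8.210.93", ERSPAN_II)
proto = gre_protocol(erspan_pkt)
```

If the capture shows 0x88be, the traffic is ERSPAN rather than plain L2GRE, which would explain eth1 incrementing while mon1/mon2 stay at zero.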

Thanks,
Andrew
