Re: [security-onion] Offline / batch PCAP analysis


Jeremy Hoel

Feb 28, 2013, 1:58:18 AM
to securit...@googlegroups.com
Some tools will be easy: Snort alerts to Sguil/Snorby will work
fine. Pulling the packet data again from inside those would require
renaming the pcap and putting it into the folder that the Sguil pcap
agent watches, and then you need cxtracker/sancp to read the
pcap and upload it too.

I don't know about Bro or how that works with pcaps, but the Snort
part isn't too hard.
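The renaming step described above could be sketched like this. Note the snort.log.&lt;epoch&gt; naming convention and the idea of deriving it from the capture's first packet are assumptions about what Sguil's pcap_agent expects, not something confirmed in this thread; verify against your own sensor layout before relying on it.

```python
import struct

def sguil_name_for(pcap_path):
    """Derive a snort.log.<epoch> style name from a pcap's first packet.

    ASSUMPTION: Sguil's pcap_agent looks for files named after the
    unix timestamp of the capture; check your sensor before using.
    """
    with open(pcap_path, "rb") as f:
        header = f.read(24)  # libpcap global file header
        magic = struct.unpack("<I", header[:4])[0]
        # Byte order depends on the magic number; 0xA1B2C3D4 read
        # little-endian means the file was written little-endian.
        endian = "<" if magic in (0xA1B2C3D4, 0xA1B23C4D) else ">"
        rec = f.read(16)     # first per-packet record header
        ts_sec = struct.unpack(endian + "I", rec[:4])[0]
    return "snort.log.%d" % ts_sec
```

This only reads the first record header, so it is cheap even on large captures.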

On Thu, Feb 28, 2013 at 12:34 AM, Dave V <jdve...@gmail.com> wrote:
> Hi everyone,
>
> Looking for some feedback on an idea I've got...
>
> For my home setup, I have a small router running optware and a CIFS share from my NAS mounted. I've got tcpdump running on the router dumping all PCAP to the NAS. What I would like to be able to do is fire up an SO VM on my desktop machine and run the saved PCAPs through it ad-hoc (once per day, per week, as time allows, etc).
>
> The general problem here is batch PCAP analysis. My googling has turned up a few other posts about people trying to do this with SO, with the general response being to just replay the PCAP.
>
> I'm not a fan of that since it kills all the timestamps and duplicates the data. I would rather write a script (or scripts) on the SO VM so that when it boots up it checks the NAS share for new PCAPs, manually runs the tools on them and pumps the output into ELSA/Sguil/Snorby, as applicable.
>
> There are two ways I'm thinking about going about this:
>
> 1) Hacked together: Just do a small script that kills the running processes and manually runs them on my PCAPs. I would definitely be able to share this, but I don't think it would be widely applicable without intermediate to advanced knowledge.
>
> 2) Elegant: build a new 'type' of sensor (i.e. /usr/sbin/nsm_sensor_add --sensor-interface=pcap_file), building out the requisite configuration around that. Also duplicate/modify pcap_agent to read from the stored PCAPs. Ideally this would be easier for others to use and could potentially be pushed back into the mainline Security Onion distro (obviously not my call...)
>
>
> I'm curious how much demand there is for something like (2), and I definitely wouldn't mind feedback from the main devs about the proper SO architecture for doing this.
>
> --
> You received this message because you are subscribed to the Google Groups "security-onion" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to security-onio...@googlegroups.com.
> To post to this group, send email to securit...@googlegroups.com.
> Visit this group at http://groups.google.com/group/security-onion?hl=en-US.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>

Matt Gregory

Feb 28, 2013, 6:07:32 AM
to securit...@googlegroups.com

You could easily process the pcaps manually by reading them into Bro and Snort.  This would give you their resulting logs, but wouldn't put any alerts in the Sguil, Snorby, or ELSA databases.
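The manual processing suggested here boils down to offline runs of each tool. A dry-run sketch that just builds the command lines (the Snort config path and log directory are assumptions; adjust for your Security Onion install):

```python
# Build the offline Bro and Snort command lines for a saved capture.
# Nothing here executes them; it only shows the invocation shape.

def offline_commands(pcap,
                     snort_conf="/etc/nsm/templates/snort/snort.conf",  # assumption
                     logdir="/tmp/offline"):                            # assumption
    bro_cmd = ["bro", "-r", pcap]                       # bro -r reads a trace file
    snort_cmd = ["snort", "-c", snort_conf, "-r", pcap, "-l", logdir]
    return bro_cmd, snort_cmd

bro_cmd, snort_cmd = offline_commands("/mnt/nas/pcaps/2013-02-28.pcap")
print(" ".join(bro_cmd))
print(" ".join(snort_cmd))
```

Bro's logs land in the current working directory of the offline run, so run it from a scratch directory per capture.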

I don't know that there is a way to actually replay packets on the wire without getting current timestamps.  Could you instead send your traffic to a full-time running SO box instead of your NAS?

Matt

Seth Hall

Feb 28, 2013, 8:10:00 AM
to securit...@googlegroups.com

On Feb 28, 2013, at 1:51 AM, Richard Bejtlich <taose...@gmail.com> wrote:

> You could do batch analysis with Bro and put the results in ELSA though.


Additionally, if you want to run the packets through Bro with all of the same configuration you would have when normally running with BroControl, you can use the "process" command in BroControl. I think it works, but I only say "I think" because our test harness for BroControl isn't quite complete and the process command hasn't been used heavily by anyone yet.

From the broctl inline help:
process <trace> [Bro options] - runs Bro offline on trace file

I'd be interested to hear anyone's experiences with using the process command.

.Seth

--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro-ids.org/

Martin Holste

Feb 28, 2013, 3:16:06 PM
to security-onion
You can replay everything through Bro/Suricata/Snort, and then replay those logs using the ELSA import.pl script, e.g.:

/opt/elsa/node/import.pl -f bro -d "description of this import" "/path/to/bro_conn.log"

The import script will honor the given times, and when the results are displayed, instead of showing the host that generated the log (which is nonsensical for locally imported logs), you'll see the import information: the name of the file, the description you gave at the command line, and the date it was imported.  You can keep doing this and the logs won't roll until you hit the "import_log_size_limit" (default is 50% of total disk available, as per log_size_limit), at which point the earliest imported logs start rolling.  So you can probably make that into a very repeatable process with a three or four line shell script.
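That repeatable process could be sketched as follows: after an offline Bro run, walk the log directory and build one import.pl invocation per log. The -f and -d flags come from the example above; the log-directory layout and description string are assumptions.

```python
# For each Bro log produced by an offline run, build the import.pl
# command line. This is a dry-run sketch: it constructs the commands
# without executing anything.

import glob
import os

def import_commands(log_dir, description):
    cmds = []
    for log in sorted(glob.glob(os.path.join(log_dir, "*.log"))):
        cmds.append(["/opt/elsa/node/import.pl",
                     "-f", "bro",
                     "-d", description,
                     log])
    return cmds
```

Feeding each returned list to subprocess.call (or joining it into a shell line) gives the three-or-four-line script mentioned above.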


Martin Holste

Mar 1, 2013, 11:18:07 AM
to security-onion
There are a number of other backend changes to accommodate imports, like the imports table in the syslog database, so just changing the syslog-ng config won't be enough.  If you want to play with import.pl, my recommendation is to stand up a new VM running Ubuntu and load a stock ELSA instance on it (should take about 2 minutes of actual work, though the script will take about a half hour), then have your SO syslog-ng forward its messages to the test box.  You can then play with things there, getting the automation scripts ready for your workflow, and when SO catches up to ELSA with regard to the import stuff, you'll be all set.


On Thu, Feb 28, 2013 at 10:07 PM, Dave V <jdve...@gmail.com> wrote:
Progress so far:

Managed to get ELSA updated (created /etc/elsa_vars.sh to point to where things are installed in SO, then ran "sh install.sh node update" and "web update"), but it wasn't very pretty. I added the following lines to /etc/syslog-ng/syslog-ng.conf manually to try to get the import to work:

template t_db_parsed_import { template("$S_UNIXTIME\t$HOST\t$PROGRAM\t${.classifier.class}\t$MSGONLY\t${i0}\t${i1}\t${i2}\t${i3}\t${i4}\t${i5}\t${s0}\t${s1}\t${s2}\t${s3}\t${s4}\t${s5}\n"); };

destination d_elsa_import { program("perl /opt/elsa/node/elsa.pl -c /etc/elsa_node.conf -f __IMPORT__" template(t_db_parsed_import)); };

log {
        source(s_import);
        rewrite(r_cisco_program);
        rewrite(r_snare);
        rewrite(r_pipes);
        parser(p_db);
        rewrite(r_extracted_host);
        destination(d_elsa_import);
};


Also had to modify import.pl to point to the right pipe.


However, this didn't seem to work out. import.pl claimed to have inserted the records, syslog-ng read them from the pipe, and the data is in syslog_data.syslogs_import_1, but it doesn't show up in any web queries. Maybe it's an SO-specific config somewhere? I'll try a clean install of ELSA on a non-SO box and see if it works there...

Vivek Rajagopalan

Mar 1, 2013, 12:56:41 PM
to securit...@googlegroups.com
> I'm not a fan of that since it kills all the timestamps and duplicates the data. I would rather write a script (or scripts) on the SO VM so that when it boots up it checks the NAS share for new PCAPs, manually runs the tools on them and pumps the output into ELSA/Sguil/Snorby, as applicable.
>

Have you considered patching the libpcap library to offset the
timestamps appropriately?

Just a thought..

Vivek Rajagopalan

Mar 1, 2013, 11:53:11 PM
to securit...@googlegroups.com
Hi,

Apologies for not explaining the idea properly in my previous email. I
was just wondering if it is possible to salvage the replay method for
processing pcaps.

The two main goals:

1. You want to process the pcaps as fast as possible rather than at
the observed rate. That way a capture file with 6 hours of traffic
doesn't take 6 hours to finish.

2. The original timestamps are maintained transparently through all
the tools. If a piece of software timed only off packet timestamps,
the replay would be indistinguishable from reading off a live
network, except that time would appear to move faster to an external
observer, such as logfiles that include both the system time and the
packet times.

We could do this if there were some way to transmit the actual
timestamp along with each packet. So we create a new link-layer shim,
say FTSP (Fake Timestamp Protocol!). The sending application would
stuff the timestamp into each packet by means of a custom Ethertype. On
the receiving side, the receiver would pull out the timestamp and
present that instead of the Linux time.

The shim protocol is very simple (0xA0A0 is an arbitrary Ethertype):

[ DST MAC | SRC MAC | OUR_ETHERTYPE = 0xA0A0]
[ TV_SEC | TV_NSEC | ACTUAL_ETHERTYPE] (10 extra bytes *see note )
[ rest of original packet... ]


On the receiving side, the application would write a special handler
for "protocol 0xA0A0" which would extract the timestamp, present
*that* to the application, then continue processing the actual
ethertype.
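The wrap/unwrap logic of this shim is a few lines of byte surgery. A sketch following the layout above (4-byte TV_SEC, 4-byte TV_NSEC, 2-byte ACTUAL_ETHERTYPE, 10 extra bytes in total; the function names are made up for illustration):

```python
import struct

FTSP_ETHERTYPE = 0xA0A0  # the arbitrary ethertype from the layout above

def ftsp_wrap(frame, ts_sec, ts_nsec):
    # frame = dst(6) + src(6) + ethertype(2) + payload
    dst_src, actual_type, payload = frame[:12], frame[12:14], frame[14:]
    # Shim: original timestamp plus the displaced original ethertype.
    shim = struct.pack("!II", ts_sec, ts_nsec) + actual_type
    return dst_src + struct.pack("!H", FTSP_ETHERTYPE) + shim + payload

def ftsp_unwrap(frame):
    # Pull the timestamp out and reconstruct the original frame.
    ts_sec, ts_nsec = struct.unpack("!II", frame[14:22])
    original = frame[:12] + frame[22:24] + frame[24:]
    return original, ts_sec, ts_nsec
```

The receiving handler would call ftsp_unwrap, hand ts_sec/ts_nsec to the capture layer as the packet time, and continue dissecting the reconstructed frame.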

*Note:
The immediate problems are:
1. Adding the 10 extra bytes could cross the MTU of 1500, so you may
want to use GigE cards and bump up the MTU.
2. The unrecognized Ethertype means you can't have routers between the
replay system and the Sec-O box even if you rewrote the MACs.

We use something like this to test our software; it works because we
have modified the application itself to handle the "Fake Timestamp
Protocol" without messing with the lower-level libraries. It should be
similarly possible to modify individual tools like Bro/Snort to handle
this case. Since Sec-O has so many tools, you could instead consider
patching the lower-level library (libpcap), which sees the packet
before the others do. This could be a bit complicated (impossible?)
for capture mechanisms like RX_RING, PF_RING, and other non-buffering
methods.

I think the attractions are many if we can pull this off:
1. You no longer have to worry about where and in what format each
tool stores its logs. Practically maintenance free!
2. You can see things as they happen. It will just look like
everything is in fast-forward mode, which is still great feedback.

Would love to hear your comments.


Vivek

Rene T

Oct 23, 2013, 7:08:07 AM
to securit...@googlegroups.com

Hi Dave.

I'm curious whether you succeeded with your project. I am looking at a similar project and want to use SO.

cheers

Dave V

Oct 25, 2013, 8:21:36 PM
to securit...@googlegroups.com

Hi Rene,

I got as far as my earlier post, haven't played too much with this since then.

Good luck!

Mike Westmacott

Feb 7, 2014, 9:51:17 AM
to securit...@googlegroups.com
I needed precisely this functionality. It turns out the syslog-ng configuration doesn't actually specify the appropriate pipe and other gubbins to allow you to dump things onto /nsm/elsa/data/elsa/tmp/import.
I've attached an updated syslog-ng.conf file that contains some extra bits from the core ELSA syslog-ng.conf file. This should probably be sanity-checked by someone :)
You can now use the stock import.pl to drop in Bro logs, and the timestamps are preserved.

e.g. /opt/elsa/node/import.pl -f bro -d "DAILY_IMPORT" http.bro

I don't like to run Bro live, as on a busy connection it sometimes isn't fast enough, so I have a job that runs regularly to fire off offline Bro jobs. I find this OK, as it's often only at peak times that Bro can't keep up (especially on segments with *lots* of HTTP traffic), and at night there's no traffic.
Also, I don't actually need to have Bro running live :)
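A regularly scheduled job like the one described needs to pick out only the captures that arrived since the last run. One way to sketch that (the state-file convention is an assumption, not from this thread):

```python
# Find capture files newer than the last successful run, so each
# scheduled invocation only processes new pcaps. After processing,
# the caller would touch state_file to record the run.

import glob
import os

def new_pcaps(pcap_dir, state_file):
    last = os.path.getmtime(state_file) if os.path.exists(state_file) else 0.0
    return [p for p in sorted(glob.glob(os.path.join(pcap_dir, "*.pcap")))
            if os.path.getmtime(p) > last]
```

Each returned path would then get a "bro -r" run followed by the import.pl step from earlier in the thread.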

Mike

Attachment: syslog-ng.conf