>> tshark bug @ pkt


X

Mar 25, 2011, 12:50:49 PM
to pcapr...@googlegroups.com
Hello all,

I keep getting this error on my capture files. I have manually run tshark -T pdml -r /file and it finished OK with no errors.
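
For reference, the check can be re-run with the PDML discarded and stderr captured, in case tshark reports anything before the indexer gives up; the file name below is a placeholder:

  tshark -T pdml -r /path/to/capture.pcap > /dev/null 2> tshark-stderr.log
  echo "exit code: $?"
  tail tshark-stderr.log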

I am running Debian 6.0 i386 on an HP ProLiant DL385 G1 with 16 GB of RAM, using the bigmem kernel.

tshark -v
TShark 1.2.11

Copyright 1998-2010 Gerald Combs <ger...@wireshark.org> and contributors.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Compiled (32-bit) with GLib 2.24.2, with libpcap 1.1.1, with libz 1.2.3.4, with
POSIX capabilities (Linux), with libpcre 8.2, with SMI 0.4.8, with c-ares 1.7.3,
with Lua 5.1, with GnuTLS 2.8.6, with Gcrypt 1.4.5, with MIT Kerberos, with
GeoIP.

Running on Linux 2.6.32-5-686-bigmem, with libpcap version 1.1.1, GnuTLS 2.8.6,
Gcrypt 1.4.5.

Built using gcc 4.4.5.



Here's one:

indexing...56.7%   >> tshark bug @ pkt 3909980
  >> aborting...

  #pkts processed: 3909980
  packets/sec:     3030.99 pkts/s


A different file had the same error at 92.2%.





pcapr

Mar 25, 2011, 11:24:12 PM
to pcapr...@googlegroups.com, X
We've seen tshark crash when outputting PSML and PDML for very large
files, since it keeps state across packets. One possible workaround is
to split the pcap into two chunks and index both pcaps in one go;
xtractr will analyze flows across pcap boundaries.
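
A sketch of one way to do the split, using editcap (it ships alongside tshark); the packet count and file names are placeholders. Around 3,500,000 packets per chunk would give roughly two files for the capture above, going by the ~3.9M packets indexed at 56.7%:

  # write a numbered series of output files, each holding at most
  # 3,500,000 packets of the original capture
  editcap -c 3500000 original.pcap original-split.pcap

Indexing the resulting chunks in one xtractr run then lets it stitch flows that cross the chunk boundary, as described above.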

Thanks,
The Pcapr Team


X

Mar 28, 2011, 5:35:09 AM
to pcapr-forum
Hi,

I split a 500 MB file into 73 MB parts. When I ran xtractr against the
parts, the database ended up much bigger than when running it against
the single file. Is that expected?

ls -altrh
total 13G
-rw-r--r-- 1 root root 73M Mar 23 11:11
00084split_00000_20110322214656
-rw-r--r-- 1 root root 73M Mar 23 11:11
00084split_00001_20110322214804
-rw-r--r-- 1 root root 73M Mar 23 11:11
00084split_00002_20110322214912
-rw-r--r-- 1 root root 73M Mar 23 11:11
00084split_00003_20110322215020
-rw-r--r-- 1 root root 73M Mar 23 11:11
00084split_00004_20110322215129
-rw-r--r-- 1 root root 73M Mar 23 11:11
00084split_00005_20110322215237
-rw-r--r-- 1 root root 64M Mar 23 11:11
00084split_00006_20110322215345
-rw-r--r-- 1 root root 21K Mar 23 11:46 stp00
-rw-r--r-- 1 root root 22K Mar 23 12:04 stp01
-rw-r--r-- 1 root root 22K Mar 23 12:24 stp03
-rw-r--r-- 1 root root 21K Mar 23 12:37 stp004
-rw-r--r-- 1 root root 22K Mar 23 12:55 stp005
-rw-r--r-- 1 root root 21K Mar 23 13:13 stp006
-rw-r--r-- 1 root root 21K Mar 23 13:16 stp02
-rw------- 1 root root 148K Mar 23 13:19 merge01
drwxr-xr-x 3 root root 4.0K Mar 23 15:03 .
drwx------ 2 root root 4.0K Mar 23 18:32 terms.db
drwxr-xr-x 5 root root 4.0K Mar 24 01:23 ..
-rw------- 1 root root 12G Mar 24 10:01 packets.db
<--------------


Here's one 500 MB file which worked OK, I believe:


ls -alh
total 718M
drwxr-xr-x 3 root root 4.0K Mar 24 17:45 .
drwxr-xr-x 6 root root 4.0K Mar 24 11:55 ..
-rw------- 1 root root 717M Mar 24 19:30 packets.db
drwx------ 2 root root 4.0K Mar 24 19:34 terms.db


X

May 25, 2011, 10:10:57 AM
to pcapr-forum
/bump

No one knows?

Regards,

X.

kowsik

May 25, 2011, 1:21:18 PM
to pcapr...@googlegroups.com
Sorry for the late response. Somehow missed this one.

The size of the files is too small, which indicates something went
awry during the indexing process. 4K for the terms.db of a 500MB file
doesn't look right. One thing we've seen from time to time is that
tshark crashes because of state accumulation on very large pcaps.
Splitting them up and letting xtractr stitch the flows across the
pcaps seems to help, since we restart tshark for each of the pcap
segments. Maybe that's the reason?
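
One quick way to tell whether a particular segment is the one tripping tshark up is to run it over each segment separately, since every run then starts with fresh state. A sketch, reusing the split-file naming from the listing earlier in the thread:

  for f in 00084split_*; do
      # fresh tshark process per segment; PDML discarded, errors logged
      tshark -T pdml -r "$f" > /dev/null 2>> pdml-errors.log \
          || echo "tshark failed on $f"
  done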

Thanks,
The Pcapr Team
---
http://www.pcapr.net
http://twitter.com/pcapr
http://labs.mudynamics.com

X

May 27, 2011, 5:16:20 AM
to pcapr-forum
Hi,

Thanks for replying.

So I have a bunch of 500 MB cap files (15 GB in total, ish) of captures
to go through. How would you recommend I do this?

I.e., should I:

split each cap file into X MB chunks (see the sketch after this list)?
make one db per 500 MB file?
make one large db?
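
A rough sketch of the first option, splitting each capture into fixed-size chunks before indexing, using tcpdump's -C rotation; the 100 (million bytes, i.e. roughly 100 MB) chunk size is only an example:

  for f in *.cap; do
      # -C rotates the output file roughly every 100 million bytes,
      # writing a numbered series of chunk files per input capture
      tcpdump -r "$f" -w "$f.chunk" -C 100
  done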

Thanks

x

kowsik

May 27, 2011, 3:35:21 PM
to pcapr...@googlegroups.com

Unfortunately the answer is "it depends". Meaning, if tshark doesn't
crash on you, xtractr will happily eat all of the 500MB pcaps. Maybe
the simplest way is to install pcapr.Local, dump all these files into
the configured directory, and see what happens with the indexing
process. Then you can selectively split them up, and pcapr.Local will
automatically discover and index those pcaps.

https://github.com/pcapr-local/pcapr-local
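
For what it's worth, pcapr-local was published as a Ruby gem, so setup is roughly along these lines; the watched-directory path is just a placeholder, and the repo's README is the authoritative reference for the exact commands and configuration:

  # install the gem (may need sudo depending on the Ruby setup)
  gem install pcapr-local

  # then point pcapr.Local at a directory and drop the captures there;
  # ~/pcaps is only an example location
  mkdir -p ~/pcaps
  cp /path/to/*.cap ~/pcaps/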

Thanks,
The Pcapr Team
