I compared TCP performance between FreeBSD and Linux by running the test tools
Netperf and Iperf with an Intel NIC.
Both kernels are full (unmodified) builds, and default values were used in the
testing except that the TCP congestion control algorithm was set to Reno.
From the test results we can see that Linux TCP throughput is better than
FreeBSD's. The worst case (send message size 128) shows FreeBSD throughput at
only 43% of Linux's.
I would like to get some feedback if anyone has done a similar comparison, or
knows of any issues with the kernels or drivers. Thanks a lot.
FreeBSD and Linux sysctl captures are attached for reference.
Regards,
Hongtao
Test Environments:
PC: Dell Precision T3400 (same 4 PCs)
CPU: Intel Core 2 Duo E4...@2.4GHz
FreeBSD: V7.1 (full version) (TCP CC: NewReno)
Linux: V2.6.31.1 (full version) (TCP CC: Reno)
Ethernet card: Intel Pro/1000 PWLA8492 MT Dual Port Server Adapter (Gigabit)
chip 82546EB (only one port used for each PC)
Switch: Netgear ProSafe 8 port Gigabit Switch (model GS108)
Iperf: V2.0.4
Netperf: V2.4.4
Setup:
----------
| switch |
----------
---------------------| | | |--------------------
| | | |
| --------| |-------- |
| | | |
| | | |
-------------- -------------- -------------- --------------
| PC1 | | PC2 | | PC3 | | PC4 |
| FreeBSD | | FreeBSD | | Linux | | Linux |
|192.168.1.10| |192.168.1.20| |192.168.1.30| |192.168.1.40|
-------------- -------------- -------------- --------------
================================
Netperf Test Results
================================
TCP Throughput Test
-------------------
PC2/4: #netserver -p 22113
PC1/3: #netperf -H 192.168.1.20 -p 22113 -l 10
          Recv   Send    Send
          Socket Socket  Message  Elapsed
          Size   Size    Size     Time     Throughput
          bytes  bytes   bytes    secs.    10^6 bits/sec
FreeBSD:  65536  32768   32768    10.34    598.11
Linux:    87380  16384   16384    10.04    779.02
PC1/3: #netperf -t TCP_STREAM -H 192.168.1.20 -p 22113 -- -m 64/128/256/512/1024/2048/4096
          Recv   Send    Send
          Socket Socket  Message  Elapsed
          Size   Size    Size     Time     Throughput
          bytes  bytes   bytes    secs.    10^6 bits/sec
FreeBSD:  65536  32768      64    10.19    417.10
          65536  32768     128    10.35    336.63
          65536  32768     256    10.36    576.99
          65536  32768     512    10.35    569.79
          65536  32768    1024    10.35    553.70
          65536  32768    2048    10.35    584.20
          65536  32768    4096    10.35    602.45
Linux:    87380  16384      64    10.03    778.21
          87380  16384     128    10.03    779.72
          87380  16384     256    10.04    780.16
          87380  16384     512    10.03    776.85
          87380  16384    1024    10.04    777.52
          87380  16384    2048    10.04    777.83
          87380  16384    4096    10.03    780.17
===============================
Iperf Test Results
===============================
Bandwidth Test
--------------
PC2/4: #iperf -s
PC1/3: #iperf -c 192.168.1.20
Interval Transfer Bandwidth
sec MBytes Mbits/sec
FreeBSD: 0.0-10.3 740 600
Linux: 0.0-10.0 972 815
PC1/3: #iperf -c 192.168.1.20 -d
Interval Transfer Bandwidth
sec MBytes Mbits/sec
FreeBSD: 0.0-10.0 402 337
0.0-10.0 404 338
Linux: 0.0-10.0 926 776
0.0-10.0 44.1 36.9
Parallel Test
-------------
PC2/4: #iperf -s
PC1/3: #iperf -c 192.168.1.20 -P 2
Interval Transfer Bandwidth
sec MBytes Mbits/sec
FreeBSD: 0.0-10.3 370 300
0.0-10.3 370 300
SUM: 0.0-10.3 739 600
Linux: 0.0-10.0 479 402
0.0-10.0 473 396
SUM: 0.0-10.0 952 797
FreeBSD 7.1 is quite old compared to Linux 2.6.31 - I'd like to see at
least FreeBSD 7.2 compared, if not 8.0-RC1. Maybe the most recent FreeBSD 4.x
should also be included in the test.
Thanks in advance,
Istvan
--
the sun shines for all
Did you compare syscalls made and time taken?
For example, do either/both of them do a lot of gettimeofday() calls?
FreeBSD and Linux have (had?) different behaviours and performance
with those.
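A rough, untested sketch of how that comparison could be done (ktrace/kdump on
the FreeBSD side, strace on the Linux side; host and port taken from the
earlier netperf runs, and tracing itself adds overhead, so treat the counts as
relative only):
  # FreeBSD: trace the sender, then count gettimeofday calls
  ktrace -i -f np.ktrace netperf -H 192.168.1.20 -p 22113 -l 10
  kdump -f np.ktrace | grep CALL | grep -c gettimeofday
  # Linux: per-syscall counts and time spent, written to stderr
  strace -c -f netperf -H 192.168.1.20 -p 22113 -l 10 2> syscalls.linux.txt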
I'd suggest digging a bit deeper? :)
adrian
I ran newer FreeBSD code, 8.0-RC1, this time. Using NetPIPE, we collected test
data, and the results show that FreeBSD TCP performance is worse than Linux's.
I had trouble plotting with gnuplot, so I am attaching the raw data files here.
The NetPIPE commands are as follows:
PC2: #NPtcp
PC1: #NPtcp -h 192.168.1.20
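(For plotting, something along these lines should work with gnuplot, assuming
np.out puts the message size in column 1 and the throughput in Mbps in column
2; the file names freebsd-np.out and linux-np.out are just placeholders for
the two saved runs:)
  gnuplot -persist <<'EOF'
  set logscale x
  set xlabel "Message size (bytes)"
  set ylabel "Throughput (Mbps)"
  plot "freebsd-np.out" using 1:2 with linespoints title "FreeBSD 8.0-RC1", \
       "linux-np.out"   using 1:2 with linespoints title "Linux 2.6.31"
  EOF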
Regards,
Hongtao
_____
From: István [mailto:lec...@gmail.com]
Sent: October 15, 2009 5:13
To: Hongtao Yin
Cc: freebsd-p...@freebsd.org
Subject: Re: Comparison of FreeBSD/Linux TCP Throughput performance
Really. Don't post attachments to mailing lists. It's just a bad idea,
a lot of people will be upset with the bandwidth it consumes. Keep in
mind that not everyone on the list is interested in every conversation.
--
Bill Moran
Collaborative Fusion Inc.
wmo...@collaborativefusion.com
Phone: 412-422-3463x4023
Disclaimers should go the same way too! :)
echo "
****************************************************************
IMPORTANT: This message contains confidential information
and is intended only for the individual named. If the reader of
this message is not an intended recipient (or the individual
responsible for the delivery of this message to an intended
recipient), please be advised that any re-use, dissemination,
distribution or copying of this message is prohibited. Please
notify the sender immediately by e-mail if you have received
this e-mail by mistake and delete this e-mail from your system.
E-mail transmission cannot be guaranteed to be secure or
error-free as information could be intercepted, corrupted, lost,
destroyed, arrive late or incomplete, or contain viruses. The
sender therefore does not accept liability for any errors or
omissions in the contents of this message, which arise as a
result of e-mail transmission.
****************************************************************" | wc
16 129 958
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
Have you seen any FreeBSD performance tuning guides?
Regards,
Istvan
2009/10/16 Steve Dong <sd...@huawei.com>
> Here are graphs from the netpipe test results with 8.0 RC1
>
>
> Thanks,
> Steve
Check "man tuning". There are a few parameters there worth exploring; for
example, see the sections on net.inet.tcp.sendspace and
net.inet.tcp.recvspace.
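For example (the values below are only illustrative, not recommendations):
  # read the current defaults
  sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace
  # bump them for a test run
  sysctl net.inet.tcp.sendspace=65536
  sysctl net.inet.tcp.recvspace=65536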
Does anyone have the tuned parameters?
Actually, we are looking for info like:
1. Any bugs in the FreeBSD driver that have been fixed in the Linux kernel
2. Any TCP features/RFCs implemented in Linux, but not in FreeBSD
3. Any other discrepancies between the Linux and FreeBSD TCP implementations
that could potentially have caused this
Thanks.
Hongtao
_____
From: István [mailto:lec...@gmail.com]
Sent: October 16, 2009 5:29
To: Hongtao Yin
Cc: freebsd-p...@freebsd.org
Subject: Re: Comparison of FreeBSD/Linux TCP Throughput performance
I see.
It shows that the Linux default setup is better.
Have you seen any FreeBSD performance tuning guides?
Regards,
Istvan
.. to be completely correct, it shows the Linux default setup _for
netpipe_ is better on that particular hardware.
That identifies a few other variables which may need addressing. :)
Adrian
I like! :)
--
Thanks,
Steve
And maybe those in the know can send out word on how to do well on FreeBSD, so
we can choose a better setup and look forward to repeating the test under
'tuned' conditions? I'm willing to perform some tests within the next 4
weeks, when our server hardware (Dell PowerEdge 1950-III with two
if_bge() NICs and 16GB RAM) changes OS from FreeBSD 8.0 to Red Hat Linux.
There is a window of about a week around mid-November in which I should be
able to test FreeBSD 8.0 alongside a Linux setup (the distro doesn't matter,
as I can choose). I need to know WHAT, WHERE and HOW. Thanks.
On Oct 17, 2009, at 8:14 AM, Steve Dong wrote:
> If there's a better/lighter way to show these graphics, I'd like to
> know.
Sure-- put 'em on a webserver somewhere, and put links to them in your
email to this mailing list.
If you wanted to do even better than that, set up a simple webpage
describing what you are doing in your comparison, have a link to the
dmesg/boot output for each platform as a .txt file and a description
of any system tweaks & tuning, have a link that points to a
description of the test setup (ie, your ASCII diagram of the switch
and 4 machines), then your graphs, then the raw data (or links to it,
depending). You can then throw in netstat -s output, or NIC driver
stats from sysctl, or switch stats, etc-- anything else that adds
useful context.
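If it helps, one way to capture that context in one go (the em(4) device name
is just an assumption; substitute whatever your NIC actually is):
  uname -a          > sysinfo.txt
  dmesg             > dmesg.txt
  netstat -s -p tcp > netstat-tcp.txt
  ifconfig em0      > em0-ifconfig.txt
  sysctl dev.em.0   > em0-sysctl.txt   # driver statistics, if the driver exposes them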
There are a fair number of posts in the list archives which describe
how to benchmark reliably, and the people who are most likely to be
making code changes to FreeBSD also tend to like to know whether
you've collected enough data, in a controlled fashion, to have an idea
as to whether your measurements are reproducible. I'm not a purist,
and I believe you can get useful estimations without rigorous testing,
but there are others who make the point that if you haven't provided
at least a standard deviation, then you haven't collected enough
data-- done enough trials-- to determine whether the results are
meaningful. (See /usr/src/tools/tools/ministat/README)
Of course, you're not obligated to do any of the above, but if you
want the effort you've put in to be more useful, consider these
suggestions. Finally, the next step beyond that would be to try
tweaking some things, and see what kind of changes you get from that
versus the original performance. It might be the case that making a
simple tuning change would have a significant difference in
performance; if you can identify that, then FreeBSD or Linux
developers can use that information to better tune the OS defaults.
Regards,
--
-Chuck
Trying to chime in with a few pointers here. Things to check when
doing a TCP benchmark on FreeBSD.
In particular make sure to adjust these:
net.inet.tcp.recvbuf_max: 262144
net.inet.tcp.recvbuf_inc: 16384
net.inet.tcp.recvbuf_auto: 1
net.inet.tcp.sendbuf_max: 262144
net.inet.tcp.sendbuf_inc: 8192
net.inet.tcp.sendbuf_auto: 1
Leave the auto on, but increase the max values and you should probably
also change the inc (increment)
values as well. Make sure that if you increase the buffer sizes you
increase your number of mbufs and
clusters as well. See kern.ipc.nmbclusters, which is a kernel tunable
that can be set in /boot/loader.conf .
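As an illustration only (the numbers below are made up for a GigE LAN, not
recommendations; pick values to suit your RAM and bandwidth-delay product):
  # /etc/sysctl.conf
  net.inet.tcp.sendbuf_max=1048576
  net.inet.tcp.recvbuf_max=1048576
  net.inet.tcp.sendbuf_inc=32768
  net.inet.tcp.recvbuf_inc=65536
  net.inet.tcp.sendbuf_auto=1
  net.inet.tcp.recvbuf_auto=1

  # /boot/loader.conf (takes effect at the next boot)
  kern.ipc.nmbclusters="131072"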
Make sure that on both of the systems you're testing, the same low-level
hardware support, such as TCP Segmentation Offload (TSO) and TCP Checksum
Offload, is turned on.
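A quick way to check and enable those (interface names assumed: em0 on the
FreeBSD side, eth0 on the Linux side):
  # FreeBSD: look for TSO4/RXCSUM/TXCSUM in the options line
  ifconfig em0
  ifconfig em0 tso rxcsum txcsum

  # Linux: show and set the equivalent offloads
  ethtool -k eth0
  ethtool -K eth0 rx on tx on tso on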
Also you might want to turn this off:
net.inet.tcp.inflight.enable: 1
This page http://fasterdata.es.net/TCP-tuning/FreeBSD.html
claims that it can harm high speed connections.
Those are the basics to start with. A search of "Tuning FreeBSD TCP"
turns up some decent pages as well.
Best,
George
128 byte send/receive buffers on the client side:
kristy# netperf -H 192.168.10.2 -p 22113 -l 10
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.2
(192.168.10.2) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
8192 128 128 10.00 426.17
1kbyte send/receive buffers:
kristy# netperf -H 192.168.10.2 -p 22113 -l 10
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.2
(192.168.10.2) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
8192 1024 1024 10.00 903.39
8kbyte send/receive buffers:
kristy# netperf -H 192.168.10.2 -p 22113 -l 10
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.2
(192.168.10.2) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
8192 8192 8192 10.00 913.71
Both boxes are 7.2-REL amd64 boxes on 3.4GHz Pentium-D CPUs using some
onboard flavour of the intel e1000 NIC:
device = '82573E Intel Corporation 82573E Gigabit Ethernet
Controller (Copper)'
They are connected via a Cisco 3750G L3 switch. In fact, the traffic
is routed, rather than switched.
My /etc/sysctl.conf:
net.inet.icmp.icmplim=0
net.inet.icmp.icmplim_output=0
net.inet.tcp.msl=3000
net.inet.tcp.sendspace=8192
net.inet.tcp.recvspace=8192
kern.maxfilesperproc=65536
kern.maxfiles=262144
kern.ipc.maxsockets=32768
kern.ipc.somaxconn=1024
kern.ipc.nmbclusters=131072
net.inet.ip.fw.enable=0
kern.ipc.somaxconn=10240
2c,
Adrian
2009/10/15 Hongtao Yin <ht...@huawei.com>:
Can you try with 64K and up to 1MB buffers?
I see ~1Gbit speeds with my FreeBSD boxes using Broadcom NIC's and
cheap Netgear switches.
I'm not sure how the original tester got such poor numbers, when my
setup is relatively low end, and sustaining Gbit speeds is no major
feat.
--
Brent Jones
br...@servuhome.net
kristy# netperf -H 192.168.10.2 -p 22113 -l 10
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.2
(192.168.10.2) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
8192 65536 65536 10.00 862.48
1 megabyte socket buffers threw an error. I'll see why later.
Now, as for why 64k socket buffers gave a slower result than 8k socket
buffers... ah. If I change the sending end to use 64k socket buffers:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.2
(192.168.10.2) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
65536 65536 65536 10.00 916.23
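For anyone reproducing this, the socket buffer and message sizes above map to
netperf's TCP_STREAM test-specific options, roughly like this (-s and -S set
the local and remote socket buffer sizes, -m the send message size):
  netperf -H 192.168.10.2 -p 22113 -l 10 -- -s 65536 -S 65536 -m 65536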
Adrian
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=65536
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
Regards
Steve
16 MB network buffers? What kind of % impact do you see from them?
Hi,
I haven't tried comparing this sort of performance with Linux so your
conclusion still might be right, but the fact that you couldn't saturate
1 Gbps on either system even with big packets suggests that there might
be an external problem - a bad network card or a bad driver for the
network card, or a switch whose line discipline is a bit in conflict
with the NIC or the driver.
I have previously successfully (and rather trivially) saturated 1 Gbps
links with Broadcom cards with FreeBSD 7.x, so it *is* possible.
Also, the OP should take a look at some previous benchmarks and the link
to benchmark advice here:
Regards
Steve
Therefore I like NetPIPE runs: you can see the throughput and the latency as
well, using the packet size as your "x" axis. I think it makes more sense
than just one number.
--
the sun shines for all
My point was to demonstrate that saturating gigabit ethernet is very
doable with FreeBSD, and his limitation is more likely somewhere other
than "TCP".
I've told him privately to check CPU utilisation. I'll do the same on
my boxes when I get some time; I'd like to know why I'm only seeing ~
800mbit with large buffers.
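A simple way to watch that while a test runs (the FreeBSD top flags and the
Linux sysstat package are assumptions about what's available on your boxes):
  # FreeBSD: per-CPU states plus per-thread usage
  top -SHP

  # Linux: per-CPU utilisation, one-second samples
  mpstat -P ALL 1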
Going to chime in on this one... just trying to help.
There are some simple things you can do to get to Gb; jumbo frames (MTU > 1500 on both the switch port and the card) are one simple way, for example as shown below.
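(Interface names here are hypothetical, and the switch ports must also be
configured to pass jumbo frames.)
  # FreeBSD
  ifconfig em0 mtu 9000
  # Linux
  ip link set dev eth0 mtu 9000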
However, I'd have to read back through this thread as I haven't had time of late. Basically (and I've seen this on many, many Gb cards), chipsets and drivers make a world of difference.
I tried for a few days to get an HP DL360 with its dual on-board Broadcom bge NICs to reach 1 Gb... just plain no way. If anyone has settings for that, I'd like to know them. Also, this is the same chipset that a lot of vendors use because it is cheap. When I couldn't get the thing to go beyond 720 Mb, I tried something simple: I ordered an Intel dual Gb port card and put that in. WITHOUT tuning, this thing started at almost 800 Mb throughput, and I got it to almost 850 Mb within a few hours.
I wish I could send those settings to this list, but it was well over a year ago that I did this.
Sadly, most large vendors start with Broadcom chipsets and don't want to spend the extra $10 for the Intel chipset. (No, I am not an Intel fanboy, more of an AMD fan if anything, but their NICs rock.)
P.
________________________________
From: Adrian Chadd <adr...@freebsd.org>
To: István <lec...@gmail.com>
Cc: Hongtao Yin <ht...@huawei.com>; freebsd-p...@freebsd.org; Brent Jones <br...@servuhome.net>
Sent: Mon, October 19, 2009 10:39:53 PM
Subject: Re: Comparison of FreeBSD/Linux TCP Throughput performance