Is there a website anywhere that lists all the performance tools for
measuring the performance of ZFS or of a single hard disk?
On Linux I have "hdparm -tT /dev/sda" to get the raw disk performance.
What is the equivalent FreeBSD command?
/gT/
diskinfo(8) ?
--
Regards,
Nikolay Denev
On Nov 29, 2009 9:31 PM, "Ray Kinsella" <raykin...@gmail.com> wrote:
Can I recommend using bonnie++?
"diskinfo -vt /dev/blahblah" will give you seek tests and linear read tests.
----- Original Message -----
From: "Noisex" <noi...@apollo.lv>
To: <freebsd-p...@freebsd.org>
Sent: Monday, December 07, 2009 12:41 PM
Subject: FreeBSD TCP tuning and performance
Hi! I have a problem with TCP performance on FreeBSD boxes with 1 Gbps network interfaces (Broadcom NetXtreme II BCM5708 1000Base-T (B2)).
Currently I use FreeBSD 7.1 amd64.
The test lab: 2 x (client-server) HP ProLiant G5 DL360 (quad-core/8 GB RAM, RAID 5 SAS).
For network benchmarking I used nuttcp and iperf.
The servers (client and server) are in one VLAN.
The results on 1Gbps (down & up):
63.4375 MB / 1.00 sec = 532.1332 Mbps
64.3750 MB / 1.00 sec = 540.0426 Mbps
62.8125 MB / 1.00 sec = 526.8963 Mbps
64.5625 MB / 1.00 sec = 541.6318 Mbps
63.9375 MB / 1.00 sec = 536.3595 Mbps
63.7500 MB / 1.00 sec = 534.7566 Mbps
63.0000 MB / 1.00 sec = 528.5003 Mbps
63.5000 MB / 1.00 sec = 532.7150 Mbps
64.0000 MB / 1.00 sec = 536.8586 Mbps
63.5625 MB / 1.00 sec = 533.2452 Mbps
637.6688 MB / 10.02 sec = 533.9108 Mbps 9 %TX 9 %RX 9 host-retrans 0.67 msRTT
25.5625 MB / 1.00 sec = 214.3916 Mbps
30.8750 MB / 1.00 sec = 259.0001 Mbps
29.9375 MB / 1.00 sec = 251.1347 Mbps
27.1875 MB / 1.00 sec = 228.0669 Mbps
30.5000 MB / 1.00 sec = 255.8533 Mbps
30.2500 MB / 1.00 sec = 253.7551 Mbps
26.8125 MB / 1.00 sec = 224.9211 Mbps
30.3750 MB / 1.00 sec = 254.8047 Mbps
30.3750 MB / 1.00 sec = 254.8050 Mbps
30.0625 MB / 1.00 sec = 252.1835 Mbps
292.2155 MB / 10.02 sec = 244.6825 Mbps 10 %TX 12 %RX 0 host-retrans 0.71 msRTT
As you can see, download is a little more than half of the full link speed, and upload is only 20-25% of the full link.
I tried changing a lot of sysctl parameters, but without much result. Currently these are my TCP-related entries in /etc/sysctl.conf:
#kernel tuning, tcp
kern.ipc.somaxconn=2048
kern.ipc.nmbclusters=32768
kern.ipc.maxsockbuf=8388608
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=65536
net.inet.udp.recvspace=65536
net.inet.tcp.inflight.enable=0
net.inet.tcp.rfc1323=1
net.inet.tcp.sack.enable=1
net.inet.tcp.path_mtu_discovery=1
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.recvbuf_inc=524288
Do you have any suggestions for what I could change to increase TCP performance?
Also, when I run the benchmarks I run a sniffer to see what is happening on the network... sometimes I see that the window size is
0... does that mean the server can't keep up, or that the receive buffer is too small?
P.S. Sorry for my bad English :)
Noisex
> Hi! I have a problem with TCP performance on FreeBSD boxes with 1 Gbps network interfaces (Broadcom NetXtreme II BCM5708 1000Base-T (B2)). Currently I use FreeBSD 7.1 amd64.
>
> The test lab: 2 x (client-server) HP ProLiant G5 DL360 (quad-core/8 GB RAM, RAID 5 SAS).
>
> For network benchmarking I used nuttcp and iperf.
>
> The servers (client and server) are in one VLAN.
If this is on a switch shared with other (busy) systems, you might be
measuring the saturation/capacity of the switch (even if you have those
two units on a dedicated vlan). Try the test with a crossover cable to
eliminate that possibility.
> The results on 1Gbps (down & up):
>
> [nuttcp results quoted above snipped]
>
> As you can see, download is a little more than half of the full link speed, and upload is only 20-25% of the full link.
I'm not familiar with that program, but can you increase the test sample
size? 65 MB isn't a lot of data to push over a 1 Gbps link for testing
purposes, and you might be seeing startup overhead.
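For what it's worth, both tools can be told to run longer; something like the following should do it (host names are placeholders, and check iperf(1)/nuttcp(8) for the exact flags on your versions):

# iperf: 60-second run, reporting every 10 seconds
iperf -i 10 -t 60 -c <server>

# nuttcp: 60-second transmit instead of the default 10 seconds
nuttcp -T60 -i1 <server>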
> I tried changing a lot of sysctl parameters, but without much result. Currently these are my TCP-related entries in /etc/sysctl.conf:
>
> [sysctl settings quoted above snipped]
>
> Do you have any suggestions for what I could change to increase TCP performance?
>
> Also, when I run the benchmarks I run a sniffer to see what is happening on the network... sometimes I see that the window size is 0... does that mean the server can't keep up, or that the receive buffer is too small?
If the window size drops to 0, it means the receive buffer on the receiving
system is full and waiting to be drained by the application.
Considering that you're sending roughly 65 MB per second, a 16 MB buffer might
not be large enough.
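One way to confirm that on the wire is to filter for segments advertising a zero receive window, e.g. (the interface name is just an example; tcp[14:2] is the raw window field in the TCP header):

tcpdump -ni bce0 'tcp[14:2] = 0'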
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
Actually, that server hosts MyConnection SpeedServer (http://www.visualware.com/) for bandwidth tests. A few months ago we started offering clients GPON at 500 Mbit/500 Mbit.
While we offered DSL with speeds up to 100 Mbit the results were pretty good... but now we can't properly measure speed on a 1 Gbps link... the results are very poor (almost half of the real speed on FreeBSD).
These results aren't only with MyConnection... they also show up with nuttcp and iperf... I have a feeling that FreeBSD can't handle the window size / send and receive buffers.
P.S. Maybe I should enable network polling, disable interrupts on the network card, etc.? What sysctl settings would you recommend for maximum TCP performance on a 1/10 Gbps interface?
Noisex
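Regarding the polling question above, a rough sketch of what that involves (this assumes a kernel built with DEVICE_POLLING, and that the driver for the BCM5708, bce(4), actually supports polling; check polling(4) and the driver's man page before relying on it):

# kernel configuration (requires a rebuild)
options DEVICE_POLLING
options HZ=1000

# then enable polling per interface at runtime
ifconfig bce0 polling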
A standard iperf command line won't give line rate; the ones we use here are:
== Server ==
iperf -s -w 2.5M -l 2.5M
== Client ==
iperf -i 10 -t 20 -c <server> -w 2.5M -l 2.5M
== Tuning ==
We use the following tuning on our machines to achieve line rate Gig on 7.0 amd64
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=65536
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
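These can be applied at runtime with sysctl(8) for testing, and then added to /etc/sysctl.conf (without the "sysctl" prefix) to persist across reboots, for example:

sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216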
Out of curiosity I just tried this very test on an 8.0 box we have here and was only
able to achieve performance similar to yours. So it may be that there has been a
significant regression since 7.0; I'll have to do some more tests when I have time.
For reference, the machines we have tested and get line rate on have the following
NICs:
== Machine #1 7.0-RELEASE amd64 ==
em0: <Intel(R) PRO/1000 Network Connection Version - 6.7.3> port 0x2000-0x201f mem 0xd8400000-0xd841ffff irq 18 at device 0.0 on
pci6
em0: Using MSI interrupt
em0: Ethernet address: .....
em0: [FILTER]
== Machine #2 7.0-RELEASE amd64 ==
bge0: <Broadcom NetXtreme Gigabit Ethernet Controller, ASIC rev. 0x2100> mem 0xfc9f0000-0xfc9fffff irq 26 at device 5.0 on pci3
miibus0: <MII bus> on bge0
brgphy0: <BCM5704 10/100/1000baseTX PHY> PHY 1 on miibus0
brgphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto
bge0: Ethernet address: ....
bge0: [ITHREAD]
The machine that is currently underperforming:
== Machine #3 8.0-RELEASE amd64 ==
em0: <Intel(R) PRO/1000 Network Connection 6.9.14> port 0x2000-0x201f mem 0xd8400000-0xd841ffff irq 18 at device 0.0 on pci6
em0: Using MSI interrupt
em0: [FILTER]
em0: Ethernet address: 00:30:48:33:ec:44
Regards
Steve
Using FreeBSD RELENG_8 amd64 on a low-traffic Dell 5224 switch I was
able to see these results, and I don't see a problem.
While running iperf I brought up top -P on my desktop machine; here is
what it said:
last pid: 27659; load averages: 0.62, 0.34, 0.22
up 1+20:29:22 14:28:43
166 processes: 3 running, 163 sleeping
CPU 0: 0.0% user, 0.0% nice, 91.6% system, 0.0% interrupt, 8.4% idle
CPU 1: 3.8% user, 0.0% nice, 9.6% system, 0.0% interrupt, 86.5% idle
CPU 2: 7.7% user, 0.0% nice, 18.7% system, 0.6% interrupt, 72.9% idle
CPU 3: 1.3% user, 0.0% nice, 26.5% system, 0.6% interrupt, 71.6% idle
Mem: 587M Active, 706M Inact, 376M Wired, 6556K Cache, 271M Buf, 2241M Free
Swap: 3072M Total, 168K Used, 3072M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
24268 sfourman 1 46 0 368M 43060K CPU2 1 87:54 2.29% npviewer.bin
24263 sfourman 10 44 0 357M 186M ucond 1 25:21 1.07% firefox-bin
1396 sfourman 1 44 0 3245M 98416K select 1 47:19 0.00% Xorg
24271 sfourman 1 44 0 368M 43060K futex 1 15:35 0.00% npviewer.bin
24272 sfourman 1 44 0 368M 43060K futex 2 12:26 0.00% npviewer.bin
24273 sfourman 1 44 0 368M 43060K futex 3 8:14 0.00% npviewer.bin
24284 sfourman 1 44 0 368M 43060K pcmwrv 2 1:07 0.00% npviewer.bin
22460 sfourman 1 44 0 33336K 25280K select 0 1:02 0.00% wowmatrix
1657 sfourman 2 51 0 112M 18880K piperd 1 0:52 0.00% gnome-terminal
1368 root 1 44 0 12536K 1688K select 1 0:50 0.00% hald-addon-storage
1366 root 1 44 0 12536K 1684K select 1 0:50 0.00% hald-addon-storage
1412 sfourman 3 44 0 212M 19304K ucond 1 0:40 0.00% gnome-settings-daem
705 root 1 44 0 8036K 1184K select 3 0:37 0.00% moused
1437 sfourman 2 44 0 198M 41172K ucond 1 0:32 0.00% nautilus
1329 root 1 44 0 12536K 1628K select 3 0:19 0.00% hald-addon-storage
18652 sfourman 2 56 0 112M 18736K piperd 2 0:18 0.00% gnome-terminal
1321 haldaemon 1 44 0 24380K 4920K select 2 0:18 0.00% hald
1435 sfourman 2 44 0 164M 29048K ucond 2 0:17 0.00% gnome-panel
1434 sfourman 1 44 0 110M 16348K select 1 0:15 0.00% metacity
17410 sfourman 1 44 0 13000K 2336K select 3 0:15 0.00% gam_server
1506 sfourman 2 44 0 172M 22000K ucond 2 0:14 0.00% clock-applet
1469 sfourman 2 44 0 98440K 14148K ucond 3 0:12 0.00% gnome-screensaver
1476 sfourman 1 44 0 135M 18960K select 3 0:10 0.00% wnck-applet
1482 sfourman 2 44 0 25104K 4200K ucond 1 0:10 0.00% gvfsd-trash
1488 sfourman 3 47 0 28204K 4628K piperd 0 0:10 0.00% gvfs-hal-volume-mon
1407 sfourman 1 44 0 26764K 7252K select 1 0:10 0.00% gconfd-2
27653 root 3 44 0 15588K 4512K ucond 1 0:07 0.00% iperf
19711 sfourman 2 44 0 113M 20304K piperd 3 0:07 0.00% Thunar
1546 root 1 44 0 13000K 2116K select 3 0:06 0.00% gam_server
24852 sfourman 10 44 0 249M 51160K select 0 0:03 0.00% vlc
22153 sfourman 1 60 16 66140K 13548K select 2 0:03 0.00% trackerd
25373 sfourman 10 44 0 251M 54412K select 1 0:02 0.00% vlc
1450 sfourman 1 44 0 126M 17076K select 2 0:02 0.00% gnome-power-manager
FreeBSD Sam.PuffyBSD.Com 8.0-STABLE FreeBSD 8.0-STABLE #1: Mon Nov 30 21:25:50 CST 2009 sfou...@Sam.PuffyBSD.Com:/usr/obj/usr/src/sys/GENERIC amd64
nfe0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
ether 00:23:54:96:dd:8d
inet 192.168.12.117 netmask 0xffffff00 broadcast 192.168.12.255
media: Ethernet autoselect (1000baseT <full-duplex,flag2>)
status: active
Sam# iperf -i 1 -t 60 -c 192.168.12.188 -w 2.5M -l 2.5M
------------------------------------------------------------
Client connecting to 192.168.12.188, TCP port 5001
TCP window size: 32.5 KByte (WARNING: requested 2.50 MByte)
------------------------------------------------------------
[ 3] local 192.168.12.117 port 46609 connected with 192.168.12.188 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 110 MBytes 923 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 3] 1.0- 2.0 sec 110 MBytes 923 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 3] 2.0- 3.0 sec 112 MBytes 944 Mbits/sec
[ ID] Interval Transfer Bandwidth