
Slow performance with Intel X540-T2 10Gb NIC


Chris Dunbar

Jul 20, 2016, 8:39:33 PM
Hello,

I am new to FreeBSD and recently built a file server out of new components running FreeBSD 10.3. I installed an Intel X540-T2 10 Gb NIC and am experiencing what I consider to be slow transfer speeds. I am using iperf3 to measure the speed and test the results of modifications. So far nothing I have done has made a noticeable difference. If I run iperf3 -s on the FreeBSD server, I see transfer speeds of approximately 1.6 Gb/s. If I run iperf3 in client mode, the speed improves to ~2.75 Gb/s. However, if I replace FreeBSD with CentOS 7 on the same hardware, I see iperf3 speeds surpassing 8 Gb/s. The other end of my iperf3 test is a Windows 10 box that also has an Intel X540-T2 installed.
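For reference, the tests are just the stock invocations, along these lines (192.168.1.20 stands in for the server's address):

    iperf3 -s                      # on the receiving box
    iperf3 -c 192.168.1.20 -t 30   # on the sending box; add -R to reverse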

I did notice that FreeBSD 10.3 (and 11.0 alpha 6, for that matter) ships a slightly older Intel driver (v3.1.13-k). I managed to build a custom kernel with the Intel PRO/10GbE PCIE NIC drivers removed, which allowed me to manually load the latest 3.1.14 driver downloaded from Intel's web site. Unfortunately that did not produce any improvement. I am working my way through the tuning(7) man page and some other articles on network performance, but so far nothing I tweak makes a noticeable difference. I'm increasingly skeptical that I am going to find a setting or two that more than doubles the speed I am currently seeing.
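Roughly, the steps were along these lines (the module name is whatever Intel's tarball builds -- ixgbe.ko here; adjust paths to your build directory):

    # In a copy of the GENERIC kernel config, drop the in-tree driver:
    #   device  ixgbe  # Intel PRO/10GbE PCIE Ethernet Family
    # Rebuild and install the kernel, then from Intel's driver source:
    make
    kldload ./ixgbe.ko
    kldstat | grep ixgbe           # confirm the 3.1.14 module is loaded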

I am open to any and all suggestions at this point.

Thank you!
Chris

Eric Joyner

Jul 21, 2016, 1:27:45 PM
(Replying-all this time)

Have you tried the settings that ESnet recommends?
https://fasterdata.es.net/host-tuning/freebsd/
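For what it's worth, the FreeBSD advice on that page boils down to a handful of sysctl.conf knobs, roughly like the following (values are illustrative; check the page for its current recommendations):

    kern.ipc.maxsockbuf=16777216        # allow larger socket buffers
    net.inet.tcp.sendbuf_max=16777216   # let the TCP send buffer grow
    net.inet.tcp.recvbuf_max=16777216   # let the TCP receive buffer grow
    net.inet.tcp.cc.algorithm=htcp      # H-TCP; needs cc_htcp loaded first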

We don't use iperf3 here at Intel (we use netperf instead), so I'm not sure
I can be much help diagnosing what's wrong.

Chris Dunbar

Jul 21, 2016, 4:56:14 PM
Eric, et al:

I haven't tried netperf yet, but I do have some new information to share. I have two systems that I am using for testing: the new server and an older (but not too old) desktop PC. I installed CentOS on the new server again because I know it can achieve >9 Gb/s with the X540. I replaced Windows on the desktop PC with FreeBSD 10.3 (it also has an X540) and ran iperf3 again. I was able to achieve >9 Gb/s, so I know the problem isn't the X540 and it isn't anything in a default installation of FreeBSD 10.3. So, what in the world might be nutty in my BIOS settings (or elsewhere) that would cause the new server + FreeBSD 10.3 + X540 to add up to slow performance?

Regards,
Chris

Jack Vogel

Jul 21, 2016, 5:07:34 PM
NUMA issues maybe? They have been a problem on some recent system
architectures.
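
If you want to rule that in or out quickly, one approach is to pin the benchmark to specific cores and compare sockets (cpuset(1) is in base; the core numbers below are just examples):

    vmstat -i | grep ix       # see which IRQs the ix queues are on
    cpuset -l 0 iperf3 -s     # server pinned to CPU 0
    cpuset -l 8 iperf3 -s     # then retry on a core of the other socket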

Chris Dunbar

Jul 21, 2016, 5:34:03 PM
Hello again,

I have good news and bad news:

The bad news first: I am an idiot and I have wasted some of your time, for which I apologize.

The good news: testing now between two FreeBSD 10.3 systems, I am achieving blistering speeds with iperf3. I fell into the trap of assuming the new thing (FreeBSD is new to me) was broken: I took for granted that Windows was working fine and focused all my attention on FreeBSD. Looking back over everything I have done to troubleshoot this, I must conclude that the performance issue was on the Windows side, not the FreeBSD side. I am less concerned about that because my ultimate goal is to install my three X540s in one FreeBSD server and two VMware ESXi hosts. I am now fairly confident performance will be great.

Many thanks for your collective attention and the suggestions I received from Eric and others.

Regards,
Chris

Sami Halabi

Jul 22, 2016, 8:34:52 AM
Hi,
Would you share what was wrong on the Windows side and how you solved it?

Sami

On July 22, 2016 at 12:33 AM, "Chris Dunbar" <ch...@dunbar.net> wrote:

Chris Dunbar

Jul 22, 2016, 9:52:45 AM
Hi Sami,

I haven't actually fixed anything yet. I have only demonstrated that the poor performance does not appear to occur between two FreeBSD boxes, and possibly not between a Linux box and a FreeBSD box, which I am going to confirm now. I have also seen good performance between the Windows box and Linux, so that doesn't quite add up either. I may have to break out Wireshark and make some packet captures to see if I can tell what's going on. If I find anything, I will be sure to share it.
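In case it's useful, the capture I have in mind is just something like this (ix0 is how the X540 shows up here; 5201 is iperf3's default port):

    tcpdump -i ix0 -s 128 -w iperf.pcap port 5201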

Regards,
Chris

Kevin Oberman

Jul 22, 2016, 2:23:49 PM

This sort of problem can be very tricky to diagnose. I'd like to suggest
that one of the tools you use should be SIFTR. It does kernel-level
collection of network statistics and is a loadable module. By default it
is IPv4 only; for IPv6 support it has to be rebuilt with
"CFLAGS+=-DSIFTR_IPV6" uncommented in /sys/modules/siftr/Makefile. It
starts, stops, and manages collection under the control of four sysctls,
as sketched below.
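A minimal session looks something like this (the log path shown is just the usual default):

    kldload siftr
    sysctl net.inet.siftr.logfile=/var/log/siftr.log   # where records go
    sysctl net.inet.siftr.ppl=1                        # log every packet
    sysctl net.inet.siftr.enabled=1                    # start collection
    # ...run your transfer test...
    sysctl net.inet.siftr.enabled=0                    # stop collection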

I have found it invaluable for analysis of network performance issues,
but it seems not to be widely known.
--
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: rkob...@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683

Chris Dunbar

Jul 22, 2016, 2:36:35 PM
Thank you - I will check that out.

hiren panchasara

Jul 23, 2016, 3:22:16 PM
On 07/22/16 at 11:23 PM, Kevin Oberman wrote:
[skip]
>
> This sort of problem can be very tricky to diagnose. I'd like to suggest
> that one of the tools you use should be SIFTR. It does kernel-level
> collection of network statistics and is a loadable module. By default it
> is IPv4 only; for IPv6 support it has to be rebuilt with
> "CFLAGS+=-DSIFTR_IPV6" uncommented in /sys/modules/siftr/Makefile. It
> starts, stops, and manages collection under the control of four sysctls.
>
> I have found it invaluable for analysis of network performance issues,
> but it seems not to be widely known.

Another such tool, which I personally find more powerful and less well
known in this context, is dtrace.

For example, in the output direction, right when TCP is about to hand a
packet to IP, there is a dtrace probe you can use to get a ton of useful
information:

# dtrace -n 'tcp:::send / args[2]->ip_saddr == "192.168.0.1" / {printf ("%8u", args[3]->tcps_mss)}'

That lets me see the MSS for each outgoing packet. You can see all of
the TCP control block data this way. The mapping is in the
/usr/lib/dtrace/tcp.d file, and you can always add whatever you want in
there and use it.

https://github.com/brendangregg/DTrace-book-scripts/blob/master/Chap6/tcpio.d
is one such awesome script that you can modify to match your needs and
use to look at whatever bidirectional traffic you like in detail.
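As a starting point, a trimmed-down standalone script might look like this (192.168.0.2 is a placeholder for the peer you care about):

    #!/usr/sbin/dtrace -s
    /* Print endpoints and MSS for each segment sent to one peer. */
    tcp:::send
    /args[2]->ip_daddr == "192.168.0.2"/
    {
        printf("%s:%d -> %s:%d mss=%u\n",
            args[2]->ip_saddr, args[4]->tcp_sport,
            args[2]->ip_daddr, args[4]->tcp_dport,
            args[3]->tcps_mss);
    }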

With dtrace predicates you can filter what gets logged, which is not yet
possible with siftr. That makes siftr a little annoying to run on a busy
box, since it logs everything and you have to post-process the output to
get what you want to see.

Just my 2 rupees.

Cheers,
Hiren