
[Samba] aio settings for samba 4.3


Russell R Poyner

Jul 19, 2016, 3:00:03 PM
I'm tuning a samba 4.3 install on freebsd and I'm confused about aio
settings.

I've loaded the freebsd aio kernel module and tried various values of
aio read size and aio write size, but it seems to make no difference in
the speed.
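
For reference, this is roughly what I'm doing (the 1024 values are just
what I happened to try, not recommendations):

# kldload aio

and in smb4.conf:

aio read size = 1024
aio write size = 1024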

Using MS diskspd against a samba share from a fast zfs pool I get
something like 25MB/s tops. That's well below the capacity of my Gb
network and my disk system. FWIW iperf shows >900Mbits/sec in both
directions on the link.
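
The diskspd invocation looks something like this (exact parameters
varied between runs; Z: is the mapped samba share):

diskspd.exe -c1G -b64K -d30 -t4 -o8 -r -w30 Z:\testfile.dat

The iperf check was just the usual server/client pair:

iperf -s                (on the freebsd box)
iperf -c <server> -r    (on the windows client, both directions)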

# smbd -b|grep aio
vfs_aio_fork_init
vfs_aio_posix_init
vfs_aio_pthread_init

As always, Google finds lots of tuning advice, but it's not clear how
much of it, if any, applies to 4.3 on FreeBSD.

Thanks
Russ Poyner

--
To unsubscribe from this list go to the following URL and read the
instructions: https://lists.samba.org/mailman/options/samba

Jeremy Allison

Jul 19, 2016, 3:50:02 PM
On Tue, Jul 19, 2016 at 12:49:09PM -0500, Russell R Poyner wrote:
> I'm tuning a samba 4.3 install on freebsd and I'm confused about aio
> settings.
>
> I've loaded the freebsd aio kernel module and tried various values
> of aio read size and aio write size, but it seems to make no
> difference in the speed.
>
> Using MS diskspd against a samba share from a fast zfs pool I get
> something like 25MB/s tops. That's well below the capacity of my Gb
> network and my disk system. FWIW iperf shows >900Mbits/sec in both
> directions on the link.
>
> # smbd -b|grep aio
> vfs_aio_fork_init
> vfs_aio_posix_init
> vfs_aio_pthread_init

You don't need these; modern Samba includes a pthread pool
implementation that will parallelize SMB I/O requests.

Ensure the clients are using SMB2 and SMB2 leases are
enabled. You should be able to get close to wirespeed.
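
Something like this in the [global] section should do it (a minimal
sketch; check "testparm -v" for your build's defaults):

server min protocol = SMB2
smb2 leases = yes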

Volker Lendecke

Jul 20, 2016, 5:10:03 AM
On Tue, Jul 19, 2016 at 12:42:53PM -0700, Jeremy Allison wrote:
> On Tue, Jul 19, 2016 at 12:49:09PM -0500, Russell R Poyner wrote:
> > I'm tuning a samba 4.3 install on freebsd and I'm confused about aio
> > settings.
> >
> > I've loaded the freebsd aio kernel module and tried various values
> > of aio read size and aio write size, but it seems to make no
> > difference in the speed.
> >
> > Using MS diskspd against a samba share from a fast zfs pool I get
> > something like 25MB/s tops. That's well below the capacity of my Gb
> > network and my disk system. FWIW iperf shows >900Mbits/sec in both
> > directions on the link.
> >
> > # smbd -b|grep aio
> > vfs_aio_fork_init
> > vfs_aio_posix_init
> > vfs_aio_pthread_init
>
> You don't need these; modern Samba includes a pthread pool
> implementation that will parallelize SMB I/O requests.

The main reason for our user-space threaded approach is the lack of
aio support in Linux. Proper kernel support for POSIX AIO might be
faster than our implementation. "vfs objects = aio_posix" will give
you that.

This needs very thorough performance testing. If it turns out to be
faster than our threaded aio on FreeBSD, we might have to revive the
aio_posix module; it went away last year.
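
For a test share that would look something like this (share name and
path are just placeholders):

[data]
    path = /tank/data
    vfs objects = aio_posix
    aio read size = 1024
    aio write size = 1024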

Volker

Russell R Poyner

Jul 20, 2016, 6:00:04 PM
Jeremy,

I re-built the samba43 port with aio_support unset and pthreadpool set.

smbd -b|grep aio
vfs_aio_fork_init
vfs_aio_pthread_init

from smb4.conf:

oplocks = yes
kernel oplocks = no
smb2 leases = yes
server min protocol = smb2

aio read size = 1024
aio write size = 1024
vfs objects = aio_pthread


The new build gives pretty much the same speeds as before using MS
diskspd under Windows 7: around 15MB/s for a random workload with 4k
blocks, and up to about 25MB/s with 64k blocks. For comparison I ran
diskspd on the Windows 7 box using a Windows 8 machine as the server.
The results were 49MB/s for 4k blocks and 1787MB/s for 64k blocks.
Clearly 1787MB/s is more than wire speed, reflecting a cache effect
that the Samba test doesn't benefit from.

With the new build I do see smbd spawning extra threads under load. They
just don't seem to add any performance benefit.
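
(For anyone curious, something like

# procstat -t $(pgrep smbd)

on FreeBSD will list the threads of each smbd process.)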

This is a freebsd 10.2 system with samba running inside a jail.

Thanks again
Russ Poyner

Russell R Poyner

Jul 21, 2016, 12:00:04 PM
Volker,

Today I built samba43 with support for both posix and pthread aio. I
then ran the diskspd tests using either vfs objects = aio_pthread or vfs
objects = aio_posix.

There appears to be a very small advantage to the aio_pthread
implementation, though it's hard to say for sure given the run-to-run
variation in the numbers.

I'm left to conclude that something else in samba is limiting
performance. The best I've been able to measure was 48MB/s using 64k
blocks against a memdisk on the FreeBSD server. That's still only
around half of wire speed as measured by iperf, and much less than I see
running the same test against a Windows 8.1 server sharing from a single
7200rpm disk.
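
(The memdisk was set up along these lines; the size and unit number are
just examples:

# mdconfig -a -t swap -s 2g -u 1
# newfs -U /dev/md1
# mount /dev/md1 /mnt/ramdisk
)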

Any suggestions on where to look next are welcome.

Thanks again
Russ Poyner

Russell R Poyner

Jul 21, 2016, 1:30:04 PM
One more data point for comparison

I installed the stock samba 4.2 rpm on a centos 7 machine and ran the
same diskspd tests against a share configured with:
vfs objects = aio_pthread
aio read size = 1024
aio write size = 1024

smb2 leases = yes

I get 27MB/s with 4k blocks and 145MB/s with 64k blocks. Disabling
caching by passing the -h switch to diskspd lowered these to 11MB/s and
72MB/s respectively, which I view as 'close enough' to wire speed. Thus
it seems that the dismal performance I see is associated with the
FreeBSD implementation somehow.

Thanks again
RP

Achim Gottinger

Jul 21, 2016, 1:50:02 PM
I think you need to tune ZFS on FreeBSD, and I assume you used a
different filesystem for the CentOS test.

Jeremy Allison

Jul 21, 2016, 2:10:02 PM
On Thu, Jul 21, 2016 at 12:23:01PM -0500, Russell R Poyner wrote:
> One more data point for comparison
>
> I installed the stock samba 4.2 rpm on a centos 7 machine and ran
> the same diskspd tests against a share configured with:
> vfs objects = aio_pthread
> aio read size = 1024
> aio write size = 1024
>
> smb2 leases = yes
>
> I get 27MB/s with 4k blocks and 145MB/s with 64k blocks. Disabling
> caching by passing the -h switch to diskspd lowered these to 11MB/s
> and 72MB/s respectively, which I view as 'close enough' to wire
> speed. Thus it seems that the dismal performance I see is associated
> with the FreeBSD implementation somehow.

That's interesting, but I'm afraid I don't know FreeBSD well
enough to help here. This does imply the problem isn't Samba
specific though (unless it's a complex interaction between
Samba+FreeBSD).

Russell R Poyner

Jul 21, 2016, 3:00:02 PM
Jeremy,

I think this is exactly a complex interaction between FreeBSD and Samba.
My best guess would be some system call that is fast on Linux but slow
on FreeBSD holding things back.

Russ

Achim Gottinger

Jul 21, 2016, 5:00:03 PM


On 21.07.2016 at 20:56, Russell R Poyner wrote:
> Jeremy,
>
> I think this is exactly a complex interaction between FreeBSD and
> Samba. My best guess would be some system call that is fast on Linux
> but slow on FreeBSD holding things back.
>
> Russ
>
> On 07/21/2016 01:00 PM, Jeremy Allison wrote:
>> On Thu, Jul 21, 2016 at 12:23:01PM -0500, Russell R Poyner wrote:
>>> One more data point for comparison
>>>
>>> I installed the stock samba 4.2 rpm on a centos 7 machine and ran
>>> the same diskspd tests against a share configured with:
>>> vfs objects = aio_pthread
>>> aio read size = 1024
>>> aio write size = 1024
>>>
>>> smb2 leases = yes
>>>
>>> I get 27MB/s with 4k blocks and 145MB/s with 64k blocks. Disabling
>>> caching by passing the -h switch to diskspd lowered these to 11MB/s
>>> and 72MB/s respectively, which I view as 'close enough' to wire
>>> speed. Thus it seems that the dismal performance I see is associated
>>> with the FreeBSD implementation somehow.
>> That's interesting, but I'm afraid I don't know FreeBSD well
>> enough to help here. This does imply the problem isn't Samba
>> specific though (unless it's a complex interaction between
>> Samba+FreeBSD).
>
>
On my debian jessie server with zfs these settings seem to work.

max xmit = 65536
socket options = TCP_NODELAY

Copying a file from the server to a Windows 7 client increases network
utilisation from 50% to 80% on a 1Gb link without jumbo frames when I
add these settings.

Jeremy Allison

Jul 21, 2016, 5:10:04 PM
That's kind of voodoo, I'm afraid. By default Samba sets TCP_NODELAY,
max xmit is only used by SMB1, and modern Samba and Windows 7 should
only be using SMB2.

Achim Gottinger

Jul 21, 2016, 5:20:04 PM
Thank you for the clarification; I can finally drop TCP_NODELAY from my
configs now.
Indeed the connection is using SMB2_10. I assume some intermediate
caching influenced the copy test.

I tested it a few times with these settings enabled/disabled before I
posted, but still... now I'm at 80% without these settings.

It is mentioned here (together with the abandoned IPTOS_LOWDELAY):
http://unicolet.blogspot.de/2013/03/a-not-so-short-guide-to-zfs-on-linux.html

I assume

zfs set xattr=sa tank/fish
zfs set atime=off tank/fish

are a good idea on FreeBSD as well.

Russell R Poyner

Jul 22, 2016, 9:40:03 AM

Thanks for all the zfs tuning tips. The point is a good one.

I was concerned that zfs performance might be limiting, and posted tests
run against a ufs formatted ram disk for that reason. The tests with the
ram disk are slightly faster than the zfs backed tests, but still slower
than tests run with samba on linux using xfs on a single hard disk.

Russ Poyner

Volker Lendecke

Jul 22, 2016, 9:50:02 AM
On Fri, Jul 22, 2016 at 08:30:42AM -0500, Russell R Poyner wrote:
>
> Thanks for all the zfs tuning tips. The point is a good one.
>
> I was concerned that zfs performance might be limiting, and posted tests run
> against a ufs formatted ram disk for that reason. The tests with the ram
> disk are slightly faster than the zfs backed tests, but still slower than
> tests run with samba on linux using xfs on a single hard disk.

Just a random thought, purely for testing purposes and not for
production use: try "strict locking = no". If this improves things,
then the upcoming robust shared mutexes will help you.
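
I.e. purely for the test, in [global]:

strict locking = no

You can confirm what's in effect with something like:

# testparm -sv | grep "strict locking"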

Volker

Jeremy Allison

Jul 22, 2016, 12:30:03 PM
On Fri, Jul 22, 2016 at 08:30:42AM -0500, Russell R Poyner wrote:
>
> Thanks for all the zfs tuning tips. The point is a good one.
>
> I was concerned that zfs performance might be limiting, and posted
> tests run against a ufs formatted ram disk for that reason. The
> tests with the ram disk are slightly faster than the zfs backed
> tests, but still slower than tests run with samba on linux using xfs
> on a single hard disk.

Maybe a networking issue? Once you've eliminated disk bottlenecks
there aren't many other places left to look.