Speed Tests

ylluminate

Mar 8, 2014, 8:28:27 PM
to zfs-...@googlegroups.com
Hi folks, thought it was time to try maczfs again.  I am shifting over from ZEVO due to GreenBytes' fubar of the project.  The move over to OpenZFS seems worthwhile, and I'm pleased to see some positive results.

One thing that has bothered me so far is the marked difference in speed.  Running AJA System Test on ZEVO with lzjb compression, I see rates such as the following:
File Size Sweep (ZEVO, lzjb)
      MB     Read MB/sec   Write MB/sec
   128.0          1828.7          917.7
   256.0          2291.3          545.3
   512.0          1902.6          272.7
  1024.0          1965.6          350.4
  2048.0          1877.9          305.7
  4096.0          1898.1          323.9
  8192.0          1864.9          355.6
 16384.0          1749.5          312.8

Whereas with maczfs as installed via: 

I am seeing:
File Size Sweep (maczfs)
      MB     Read MB/sec   Write MB/sec
   128.0           390.7           82.3
   256.0           285.5           93.9
   512.0           290.2           84.1
  1024.0           275.5           88.2
  2048.0           272.7           87.5
  4096.0           268.7           88.6
  8192.0           281.3           88.3
 16384.0           273.2           85.2

With lzjb enabled on maczfs I am seeing rates about 10-30% slower still.

Any thoughts on why we would be seeing such a remarkable difference at this point?
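(For anyone wanting a rough command-line cross-check outside AJA, something like the following works; /Volumes/tank is a placeholder for wherever the dataset is mounted. Note that /dev/zero data compresses almost completely, so on a compressed dataset this measures compression speed more than disk speed.)

  dd if=/dev/zero of=/Volumes/tank/ddtest bs=1m count=4096   # ~4 GiB sequential write
  sudo purge                                                 # flush the OS buffer cache (does not necessarily drop the ZFS ARC)
  dd if=/Volumes/tank/ddtest of=/dev/null bs=1m              # sequential read back
  rm /Volumes/tank/ddtest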


-ylluminate

Jason Belec

Mar 8, 2014, 8:41:16 PM
to zfs-...@googlegroups.com
Can you give us the info that you used to build the test pools? 


--
Jason Belec
Sent from my iPad

ylluminate

Mar 8, 2014, 8:51:38 PM
to zfs-...@googlegroups.com
In this particular case the pool is an imported ZEVO pool.  I don't yet have the luxury of creating a new pool until I move this data around.

I did perform an upgrade on this pool and here are the settings:

Additionally, I'm now seeing some odd CPU spikes in kernel_task that were not previously present.  Any way to pry that open and see what is happening in there?  I'm concerned there's some junk happening under the covers that I'm not readily seeing with zfs now.
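(A few rough, read-only ways to peek at what kernel_task and ZFS are doing; "tank" is a placeholder pool name, and the sysctl line only applies if this build exposes ZFS kstats that way:)

  top -l 2 -o cpu -stats pid,command,cpu | grep -i kernel_task   # CPU consumed by kernel_task
  sudo zpool iostat -v tank 5                                    # per-vdev throughput every 5 seconds
  sysctl kstat.zfs 2>/dev/null | head -n 40                      # ZFS kstats, if exposed via sysctl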

ylluminate

Mar 8, 2014, 8:56:10 PM
to zfs-...@googlegroups.com
Oh, and just to give a quick idea of how dramatic those peaks are compared to the last 24 hours without zfs enabled, here is a graph:

Jason Belec

Mar 8, 2014, 9:13:52 PM
to zfs-...@googlegroups.com
What does the pool history say about how this was created?

I understand you're importing, which may contribute to the issues. I've seen some in older pools that I imported before copying their contents to new pools created with the suggested parameters.



--
Jason Belec
Sent from my iPad

ylluminate

Mar 8, 2014, 9:27:33 PM
to zfs-...@googlegroups.com
Here's the history (`zpool history`):

ylluminate

Mar 8, 2014, 9:46:23 PM
to zfs-...@googlegroups.com
Also curious: what pool and zfs creation args have given you the best results so far?  After I migrate the data off this pool onto a temp volume, I'll recreate the pool and run the test again in a few days to see whether I get different results.  Nonetheless, I certainly wouldn't expect this kind of disparity.  I do know that Don did some amazing work on the memory side of ZEVO that has yet to be addressed in these implementations, owing to the problems of xnu memory management, so that might have some bearing as well.

Jason Belec

Mar 9, 2014, 8:26:23 AM
to zfs-...@googlegroups.com
Yes to everything you noticed. 

That said, hoping for anything Don did is utterly pointless. He walked away with cash in his pocket. 

Currently, pools are being built with commands like this:

 zpool create -f -O compression=lz4 -O casesensitivity=insensitive -O normalization=formD -O atime=off -o ashift=12 pool raidz disk1 disk2 disk3
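(Annotated copy of the above for anyone following along; "pool" and disk1-disk3 are placeholders:)

  zpool create -f \
    -O compression=lz4 -O casesensitivity=insensitive \
    -O normalization=formD -O atime=off \
    -o ashift=12 \
    pool raidz disk1 disk2 disk3
  # -O ...       : filesystem properties inherited by every dataset in the pool (lz4 compression,
  #                HFS-style case-insensitivity and formD Unicode normalization, no atime updates)
  # -o ashift=12 : vdev-level property; 2^12 = 4096-byte sectors for Advanced Format disks,
  #                fixed at creation time and not changeable afterwards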


So you're definitely going to see issues with pools migrated from ZEVO, or with pools created on the old/current MacZFS. 

I've been moving (copying) all my pool data across, here in the lab and for clients, for a few weeks now, as the improvements and data integrity are proving worthwhile for me. 

Jason
Sent from my iPhone 5S

ylluminate

Mar 9, 2014, 8:54:20 AM
to zfs-...@googlegroups.com
Please do not say things like that about Don. I know him personally, and that is not what happened.  I'm not going to get into details here, as it wouldn't be appropriate, but that is frankly not what happened.

I'll continue on my path, then, and work on getting everything off this pool so I can recreate it.  As you can see, I used essentially those options on zfs create:
`zfs create -o atime=off -o utf8only=on -o casesensitivity=insensitive -o normalization=formD -o compression=lzjb tank/Users`

We'll see if this changes the game once I recreate the pool.
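(Side note: compression can also be switched on an existing dataset without recreating anything, although only blocks written after the change are affected; e.g., for the dataset above:)

  sudo zfs set compression=lz4 tank/Users   # applies to newly written blocks only
  zfs get -r compression tank               # confirm which datasets picked it up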

Jason Belec

Mar 9, 2014, 9:43:42 AM
to zfs-...@googlegroups.com
Haha, it was not derogatory, just a business term for guys coming out of Apple, starting something with the tech, and getting acquired for whatever reason. Been there myself. However, it does not change how end users feel. ;)

As for your options, the difference seems to be pool level vs zfs filesystem level, but essentially yes. A lot of things are similar.



--
Jason Belec
Sent from my iPad

Daniel

Mar 9, 2014, 5:47:21 PM
to zfs-macos
Is my understanding correct that you can't "fix" a pool after it's been created, but you actually have to create a new pool with the optimal settings of the day, and copy data around to get performance improvements?

That doesn't sound right.
"America was founded by men who understood that the threat of domestic tyranny is as great as any threat from abroad. If we want to be worthy of their legacy, we must resist the rush toward ever-increasing state control of our society. Otherwise, our own government will become a greater threat to our freedoms than any foreign terrorist."
 - Ron Paul, Texas Straight Talk, May 31, 2004

Jason Belec

Mar 9, 2014, 6:46:00 PM
to zfs-...@googlegroups.com
Haha. I'm not claiming anything other than the results of current testing. However, for me, importing old pools and updating has not provided the performance of starting fresh. All that said, lots of things could be different in individual pool creation. Those coming from ZEVO may have a significantly different experience. ;)
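(To Daniel's question: dataset-level properties can be changed at any time with zfs set, but vdev-level choices — notably ashift and the raidz/mirror layout itself — are fixed when the pool is created, which is why getting those right means building a new pool and copying the data over. A minimal sketch of the copy, with "oldpool" and "newpool" as placeholders:)

  sudo zfs snapshot -r oldpool@migrate
  sudo zfs send -R oldpool@migrate | sudo zfs receive -Fdu newpool
  # -R also replicates the old dataset properties, so reset anything you want to change
  # (e.g. zfs set compression=lz4 ...) on the new pool afterwards.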


Jason
Sent from my iPhone 5S

ylluminate

Mar 10, 2014, 11:25:09 AM
to zfs-...@googlegroups.com
Well, I'm getting hangs that are at least an order of magnitude worse than ZEVO now.  At one point, before the memory issues were worked around, I had a lot of hangs where I would have to reboot the system from a remote ssh session or just hold the power button in and hard reboot it.  Now I'm finding that after doing some heavy operations for a prolonged period of time (in this case rebuilding a roughly 100 GB iPhoto library), the system hung to the point where I had to hard reboot.  Some apps were semi-responsive, but after I tried to close them nearly all of them hung as well, and force quit only sometimes worked.  Terminal was completely beach-balled.  I really had no way of digging any further to see where the problem came from, but obviously something is locking up the kernel in some very bad ways right now.  It's particularly painful since I couldn't run any zfs commands to see where it was going haywire.

Daniel

Mar 10, 2014, 1:36:17 PM
to zfs-macos
I also switched from Zevo this weekend. I didn't want to be stuck on 10.8 forever and I noticed some performance difference, that's for sure.

However, I'm grateful for all the work the guys are doing so I can use it :) Hopefully performance will be addressed too though.

ylluminate

Mar 10, 2014, 2:41:53 PM
to zfs-...@googlegroups.com
Yeah, I certainly wish it were just performance.  A hang such as this is well beyond a performance issue.  I'm probably going to go back to using ZFS on FreeBSD to keep my backups, and use software RAID on my workstations, until maczfs matures or Greenbytes gets its proverbial head out of its rear end (which seems unlikely at this point).

brendon....@mac.com

Mar 10, 2014, 7:34:07 PM
to zfs-...@googlegroups.com
ylluminate,

Yes, it's a given that we are slower than Zevo at this point.

I am disturbed to hear of your hangs. Perhaps you could be a bit more specific and let us know what build of zfs-osx you are running, on what hardware, and, importantly, whether the machine can be sshed into when it enters this hang state. The behavior you are describing sounds a little like a ZFS deadlock; if that is the case, the machine will degrade gradually to the point of unusability. Before it dies completely, we ask users to run "spindump" and report the results on the IRC channel or via a GitHub issue. These deadlocks can be unraveled and corrected in the code. It has been at least a couple of weeks since we last saw such a deadlock, and we take these issues seriously, of course.
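(For reference, a minimal invocation looks like the following; the report location typically varies by OS X release, so check /tmp or /Library/Logs/DiagnosticReports:)

  sudo spindump                     # sample all processes, kernel included, for ~10 seconds
  kextstat | grep -i -e zfs -e spl  # note which zfs/spl kext versions were loaded at the time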

There is one other user reporting hangs when executing certain stress tests; however, Lundman and I have not been able to reproduce them to date. That user has not been able to reproduce them using the latest source code either.

Simple usage such as what you describe (rebuilding a large iPhoto library, though not quite as large, ~80 GB) is something I personally have done recently with no particular issues noted.

Please ensure that you have disabled Spotlight for zfs-osx, as it can cause problems (we don't support Spotlight yet).
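(A minimal sketch, assuming the dataset is mounted at /Volumes/tank — a placeholder path:)

  sudo mdutil -i off /Volumes/tank   # turn off Spotlight indexing for that mount
  mdutil -s /Volumes/tank            # verify; should report indexing disabled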

Cheers
Brendon

Jorgen Lundman

Mar 10, 2014, 8:01:58 PM
to zfs-...@googlegroups.com

Once the installer is pushed out for the current "stable" version, the next focus is on OS X integration and performance improvements.  Do we have a feel for how the latest snapshots perform compared to ZEVO? Are we talking 5-10-20% slower, or is the gap so large that we should suspect a problem?

Having said that, the most recent "hang" fix to master was about 48 hours ago. It could be worth making sure you are running the latest master.

ylluminate

Mar 10, 2014, 8:35:25 PM
to zfs-...@googlegroups.com
Hey Jorgen, this build is from a pull that is still considered current (alpha-276-ga87b684).  What can I do to help track these issues down for you? I'm about to jump away from ZFS because of them, so I'd be happy to help get to the bottom of this before I give up, although I'm lacking time to really focus on it right now.  Performance is presently significantly lower than ZEVO; from the numbers at the start of this thread you can see the gap is very significant.  With these hangs, though, it has gotten to the point where it is simply unusable when they happen, and they are not predictable.  I believe it must be a memory issue (although I do have 32 GB in this workstation).

Jorgen Lundman

Mar 10, 2014, 9:20:57 PM
to zfs-...@googlegroups.com

We have not looked at performance at all, since the focus was first to get it working and then to get it stable. As for the hangs, we hope to have already fixed them. I just tagged ZFS, so the version string should say:
ZFS: Loaded module v0.6.2-rc1, ZFS pool version 5000, ZFS filesystem version 5

As long as you don't change "zfs.arc_meta_limit" it should be stable. If you are trying to lift this limit, you need to run the special "bmalloc" branch, which lets ZFS go over the kernel's 1/8-of-memory limit.
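(If the build exposes the usual ZFS kstats through sysctl, the current ARC numbers can at least be read back without changing anything, e.g.:)

  sysctl kstat.zfs.misc.arcstats 2>/dev/null | grep -E 'meta_used|meta_limit|\.size:|c_max'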

As for performance, that is to come.

David Cantrell

Mar 12, 2014, 8:42:40 AM
to zfs-...@googlegroups.com
On Mon, Mar 10, 2014 at 05:01:58PM -0700, Jorgen Lundman wrote:

> Once the installer is pushed out for the current "stable" version, the next
> focus is on OSX integration

Will this include GUI management of pools/volumes/etc?

--
David Cantrell | Minister for Arbitrary Justice

Safety tip: never strap firearms to a hamster

ilov...@icloud.com

Mar 12, 2014, 8:53:52 AM
to zfs-...@googlegroups.com
Possibly, but the discussion of specific features has barely been broached.

jazzsmoothies

Mar 15, 2014, 9:58:37 AM
to zfs-...@googlegroups.com
Can someone refresh my memory on the different ashift values (12, 13, n)… and how they affect the pool?

(jasonbelec) zpool create -f -O compression=lz4 -O casesensitivity=insensitive -O normalization=formD -O atime=off -o ashift=12 pool raidz disk1 disk2 disk3

(ylluminate) zpool create -f -o ashift=13 tank raidz /dev/disk1s2 /dev/disk2s2 /dev/disk3s2 /dev/disk4s2
zfs create -o atime=off -o utf8only=on -o casesensitivity=insensitive -o normalization=formD -o compression=lzjb -o mountpoint=/Users/username tank/username

Specifically pools built with ashift=12 vs ashift=13.  What is the default if none is specified?

Also, is there a difference between normalization=formD at the pool vs directory level?
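(For context while waiting on an answer: ashift is the base-2 logarithm of the vdev's sector size, so 9 = 512 B, 12 = 4096 B (4K "Advanced Format"), and 13 = 8192 B, sometimes used for SSDs with 8 KB pages. If none is specified, zpool create uses the sector size the disk reports, which often means ashift=9 on drives that still advertise 512-byte logical sectors. It is set per vdev and cannot be changed after creation. To see what an existing pool actually got — "tank" is a placeholder:)

  sudo zdb -C tank | grep ashift    # shows the ashift recorded in the pool's cached config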

Fred

Daniel Becker

Mar 15, 2014, 5:03:41 PM
to zfs-...@googlegroups.com
On Mar 15, 2014, at 6:58 AM, jazzsmoothies <jazzsm...@gmail.com> wrote:

> Also, is there a difference between normalization=formD at the pool vs directory level?

Creating a zpool always implies creating its root fs (which has the same name as the pool) as well. Any -O options you pass to zpool create are effectively passed as -o options to the implied zfs create for the root fs. (Note that in all cases, it's an fs, not the pool or a directory, that gets the option.)
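(A small illustration of that, with placeholder names; note that normalization, casesensitivity, and utf8only in particular can only be set when a filesystem is created, not changed later with zfs set:)

  zpool create -O normalization=formD tank mirror disk1 disk2   # sets the property on the root fs "tank"
  zfs create -o normalization=formD tank/data                   # same property, explicitly, on a child fs
  zfs get -s local normalization tank tank/data                 # source shows "local" on both filesystems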