Generic atomic_cas_32 patch


Gordan Bobic

Feb 19, 2014, 6:02:10 AM
to zfs-...@googlegroups.com
To get the latest 0.7.0 (20121023) version to work on ARM, I wrote the attached patch, which provides a generic atomic_cas_32() implementation - essentially copied from atomic_cas_64() with the parameters and return type changed from 64-bit to 32-bit.

This fixes the FTBFS (failure to build from source) on ARM.
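For reference, here is roughly the shape of it - a minimal sketch in the spirit of the existing generic atomic_cas_64(), where the operation is serialized with a mutex. The lock name and granularity below are illustrative, not necessarily the exact patch contents:

/*
 * Sketch of a generic compare-and-swap fallback, modelled on the
 * generic atomic_cas_64(): serialize the operation with a mutex.
 * Lock name and granularity here are illustrative assumptions.
 */
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t atomic_lock = PTHREAD_MUTEX_INITIALIZER;

uint32_t
atomic_cas_32(volatile uint32_t *target, uint32_t cmp, uint32_t newval)
{
	uint32_t old;

	pthread_mutex_lock(&atomic_lock);
	old = *target;
	if (old == cmp)
		*target = newval;
	pthread_mutex_unlock(&atomic_lock);

	return (old);	/* CAS conventionally returns the value observed */
}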

Please consider accepting this patch in the authoritative git tree (and perhaps even tagging the 0.7.0 release).

I know zfs-fuse is semi-abandoned, but it still has a number of important uses, such as supporting 32-bit Linux platforms, as well as being a really handy fall-back recovery option that doesn't involve trying to import a pool on a completely different OS.


Gordan
generic-atomic_cas_32.patch

Emmanuel Anne

Feb 19, 2014, 8:57:21 AM
to zfs-...@googlegroups.com
Well, I am not sure anyone still uses the repository, but it's pushed anyway:

But this one is still quite different from the official 0.7.0!
Anyway, it's free; if someone wants to open a GitHub repository from this, there is no problem.



Gordan Bobic

Feb 19, 2014, 9:01:02 AM
to zfs-...@googlegroups.com
My understanding was that there was no longer an "official" zfs-fuse, and that 0.7.0 was never actually released. Or was it? Is there a more official/authoritative zfs-fuse repository somewhere? I thought yours was the only one with pool v26 support, for a start.

Emmanuel Anne

Feb 20, 2014, 1:18:58 PM
to zfs-...@googlegroups.com
Yeah, I guess that's the problem, but 0.7.0 was released and has official packages everywhere; however, version 26 of the pool never made it into 0.7.0, and that's the point where everything stopped for zfs-fuse.
The official page was at http://zfs-fuse.net/, but the site went down recently since someone had to pay for it and it was no longer worth paying for.
The git repository from there is still up, and it probably contains the latest official 0.7.0: http://zfs-fuse.sehe.nl/ (that's where it was built, after all!).

Gordan Bobic

Feb 21, 2014, 1:42:48 PM
to zfs-...@googlegroups.com
On Thu, Feb 20, 2014 at 6:18 PM, Emmanuel Anne <emmanu...@gmail.com> wrote:
Yeah, I guess that's the problem, but 0.7.0 was released and has official packages everywhere; however, version 26 of the pool never made it into 0.7.0, and that's the point where everything stopped for zfs-fuse.
The official page was at http://zfs-fuse.net/, but the site went down recently since someone had to pay for it and it was no longer worth paying for.
The git repository from there is still up, and it probably contains the latest official 0.7.0: http://zfs-fuse.sehe.nl/ (that's where it was built, after all!).

I was under the impression that your pool v26 branch was built on post-0.7.0 code, even if it wasn't tagged as such. Were there any commits that went into the 0.7.0 release that didn't make it into your v26 tree?

On a separate note, I'm about to try attacking zfs-fuse with valgrind, because when doing zfs receive on a 2GHz ARM (single-core ARMv5) I am only getting about 25MB/s (large files), and similar effective throughput on a scrub (50MB/s, but that seems to be the total across all disks, and I am running a 4-disk RAIDZ2, so the effective scrub speed is 25MB/s). The CPU usage is split roughly 1/3 netcat and 2/3 zfs-fuse, and about 25% is showing up as system time, probably due to the FUSE kernel/userspace context switching. Scrub is purely CPU-bound.

Still, it would be nice to try to squeeze a little more performance out of it - 25MB/s is about 1/8 of what the network subsystem on the machine (dual gigabit ethernet, apparently on the CPU die rather than connected over PCIe) and 1/4 of what the disk controller (PCIe x1, so a 120MB/s theoretical max) should be able to handle. Having said that, I am not too hopeful - it's not like there is hardware vectorization I could leverage for a 4x speedup, and this CPU only has MD5 and SHA-1 async offload in hardware via cryptodev, which is no use for ZFS's checksums, which IIRC are fletcher and SHA-2.
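For context, the data checksum (fletcher4, IIRC) is essentially just a sequential loop over 32-bit words with four accumulators - each step depends on the previous one, so there is nothing for cryptodev to offload and little to vectorize on a core like this. A sketch of the standard algorithm (not zfs-fuse's exact code):

#include <stddef.h>
#include <stdint.h>

/*
 * Sketch of the fletcher4 checksum as ZFS defines it: four 64-bit
 * accumulators updated per 32-bit input word. Each step depends on
 * the previous one, so a scalar in-order core is the bottleneck.
 */
void
fletcher_4(const void *buf, size_t size, uint64_t cksum[4])
{
	const uint32_t *ip = buf;
	const uint32_t *end = ip + (size / sizeof (uint32_t));
	uint64_t a = 0, b = 0, c = 0, d = 0;

	for (; ip < end; ip++) {
		a += *ip;
		b += a;
		c += b;
		d += c;
	}
	cksum[0] = a; cksum[1] = b; cksum[2] = c; cksum[3] = d;
}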

Emmanuel Anne

Feb 21, 2014, 4:42:37 PM
to zfs-...@googlegroups.com
2014-02-21 19:42 GMT+01:00 Gordan Bobic <gordan...@gmail.com>:
I was under the impression that your pool v26 branch was built on post-0.7.0 code, even if it wasn't tagged as such. Were there any commits that went into the 0.7.0 release that didn't make it into your v26 tree?

Probably, mainly some package fixes, but there is probably some stuff that I didn't see. There was no mail for each package, so I had to monitor the other git repository to keep in sync, which I didn't do!
 
On a separate note, I'm about to try attacking zfs-fuse with valgrind, because when doing zfs receive on a 2GHz ARM (single-core ARMv5) I am only getting about 25MB/s (large files), and similar effective throughput on a scrub (50MB/s, but that seems to be the total across all disks, and I am running a 4-disk RAIDZ2, so the effective scrub speed is 25MB/s). The CPU usage is split roughly 1/3 netcat and 2/3 zfs-fuse, and about 25% is showing up as system time, probably due to the FUSE kernel/userspace context switching. Scrub is purely CPU-bound.

Still, it would be nice to try to squeeze a little more performance out of it - 25MB/s is about 1/8 of what the network subsystem on the machine (dual gigabit ethernet, apparently on the CPU die rather than connected over PCIe) and 1/4 of what the disk controller (PCIe x1, so a 120MB/s theoretical max) should be able to handle. Having said that, I am not too hopeful - it's not like there is hardware vectorization I could leverage for a 4x speedup, and this CPU only has MD5 and SHA-1 async offload in hardware via cryptodev, which is no use for ZFS's checksums, which IIRC are fletcher and SHA-2.

And good luck with valgrind: it makes programs run very slowly while it tests them, and the ZFS code is extremely complex, but I guess you already know that. In any case, you'll need luck!

Emmanuel Anne

Feb 21, 2014, 4:43:33 PM
to zfs-...@googlegroups.com
2014-02-21 22:42 GMT+01:00 Emmanuel Anne <emmanu...@gmail.com>:
There was no mail for each package, so I had to monitor the other git repository to keep in sync, which I didn't do!

There was no mail for each commit!!!
 

Gordan Bobic

Feb 22, 2014, 4:10:49 AM
to zfs-...@googlegroups.com
On 02/21/2014 09:42 PM, Emmanuel Anne wrote:
> 2014-02-21 19:42 GMT+01:00 Gordan Bobic <gordan...@gmail.com
> <mailto:gordan...@gmail.com>>:
>
> I was under the impression that your pool v26 branch was built on
> post-0.7.0 code, even if it wasn't tagged as such. Were there any
> commits that went into the 0.7.0 release that didn't make it into
> your v26 tree?
>
>
> Probably, mainly some package fixes, but there is probably some stuff
> that I didn't see. There was no mail for each package, so I had to
> monitor the other git repository to keep in sync, which I didn't do!

I see. Is there a list of commits you added to get pool v26 working (and
any other fixes you committed)? Is it a huge list? If not, I guess the
simplest way to reconcile the repositories might be to get the 0.7.0
release and merge/backport all of your extras into that.
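Roughly along these lines, I imagine - the repository URLs and branch names below are placeholders, since I don't know the actual clone paths:

# Hypothetical reconciliation workflow; <official-repo>, <v26-repo>
# and the branch names are placeholders for the real ones.
git clone <official-repo> zfs-fuse
cd zfs-fuse
git remote add v26 <v26-repo>
git fetch v26
git log --oneline master..v26/master   # commits only in the v26 tree
git merge v26/master                   # or cherry-pick them one by one:
git cherry-pick <commit-sha>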

> On a separate note, I'm about to try attacking zfs-fuse with valgrind,
> because when doing zfs receive on a 2GHz ARM (single-core ARMv5) I am
> only getting about 25MB/s (large files), and similar effective
> throughput on a scrub (50MB/s, but that seems to be the total across
> all disks, and I am running a 4-disk RAIDZ2, so the effective scrub
> speed is 25MB/s). The CPU usage is split roughly 1/3 netcat and 2/3
> zfs-fuse, and about 25% is showing up as system time, probably due to
> the FUSE kernel/userspace context switching. Scrub is purely CPU-bound.
>
> Still, it would be nice to try to squeeze a little more performance
> out of it - 25MB/s is about 1/8 of what the network subsystem on the
> machine (dual gigabit ethernet, apparently on the CPU die rather than
> connected over PCIe) and 1/4 of what the disk controller (PCIe x1, so
> a 120MB/s theoretical max) should be able to handle. Having said that,
> I am not too hopeful - it's not like there is hardware vectorization I
> could leverage for a 4x speedup, and this CPU only has MD5 and SHA-1
> async offload in hardware via cryptodev, which is no use for ZFS's
> checksums, which IIRC are fletcher and SHA-2.
>
> And good luck with valgrind: it makes programs run very slowly while
> it tests them, and the ZFS code is extremely complex, but I guess you
> already know that. In any case, you'll need luck!

I'm mostly hoping to see which functions eat the most CPU, and to see
whether there is some optimization I can apply that might give a decent
boost, at least on CPU-limited architectures like ARM.
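The plan is probably something like the following - note that --no-daemon is my assumption for a flag that keeps zfs-fuse in the foreground under valgrind; I'll have to check what this build actually calls it:

# Launch zfs-fuse under callgrind (it cannot attach to a running
# process, so the daemon has to be started this way):
valgrind --tool=callgrind --callgrind-out-file=zfs-fuse.callgrind \
    ./zfs-fuse --no-daemon
# After exercising a receive/scrub workload and stopping it,
# list the hottest functions:
callgrind_annotate zfs-fuse.callgrind | head -n 30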

Gordan

Emmanuel Anne

Feb 22, 2014, 6:36:33 AM
to zfs-...@googlegroups.com
About the merge of the repositories: in theory yes, but in practice they diverged in a lot of small details, if I remember correctly, so merging them wouldn't be easy - though of course it's possible for those who are really motivated!

About optimization: yes, that was my way of thinking too, but from what I remember zfs-fuse is not really CPU-intensive; it spends most of its time waiting for threads to wake up on condition variables, and it's a damn mess of threads!
As I said: good luck!




Gordan Bobic

May 13, 2015, 8:46:03 AM
to zfs-...@googlegroups.com
Sorry to necropost, but is there anywhere I can get the "official" 0.7 release git code in a per-patch consumable format? I'm pondering merging the patches that didn't make it into the branch with v26 pool support.
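If that tree is cloneable, presumably something like this would do, once I know the right base commit (the range below is a placeholder):

# Export every commit since <base-commit> as a numbered patch file;
# <base-commit> is a placeholder for wherever the trees diverged.
git format-patch -o patches/ <base-commit>..HEAD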

There is some drive toward using zfs-fuse again, since Debian seems to intend to include support for ZFS as the root filesystem, and due to licensing there was a suggestion to use zfs-fuse in the installer and switch to ZoL later in the process.

So merging things into a definitive latest version would probably be helpful.

On a side note, having migrated all of my recent x86 machines to a ZFS rootfs, I am pondering doing the same with my 32-bit machines (mostly ARM) that use zfs-fuse, and that means adapting the dracut modules from ZoL for zfs-fuse.

Gordan
