--
To post to this group, send email to zfs-...@googlegroups.com
To visit our Web site, click on http://zfs-fuse.net/
You're going to have to help with the "backtrace of a debug build"
request.
zfs-fuse-0.6.9-6.20100709git.fc13.x86_64
Context is running qemu-img convert on a 73G VMware disk image to
generate a qcow2 qemu/kvm image. Both the input file and the output
file were on the 6-disk raid-z pool.
zfs-fuse is taking up a rather large chunk of RAM:
1910 root 20 0 5381m 401m 1640 S 0.0 5.0 1:06.74 zfs-fuse
I'm using glibc-2.11.2 and glibc-2.12.1 on Gentoo, and I didn't have
similar issues. But this is not Debian.
Regards
I wanted to see if we could bring this discussion back into the public eye. Emmanuel wrote:
> About the backtrace: the crash is in the libc when waiting for a thread to wake up. This is normally a very simple function, just a condition variable that becomes true when the thread wakes up; it doesn't normally crash programs. So the question is: why does it crash for you?
RETURN VALUE
Zero if the requested time has elapsed, or the number of seconds left to sleep, if the call was interrupted by a signal handler.
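For what it's worth, the man-page behavior quoted above is usually handled with a small retry loop; this is just a generic sketch of how an interrupted sleep() is resumed, not anything zfs-fuse-specific:

```c
#include <unistd.h>

/* Sleep for the full duration even if a signal handler interrupts
 * the call: sleep() returns the number of seconds left, so loop
 * until it returns 0. */
unsigned int full_sleep(unsigned int seconds)
{
    unsigned int remaining = seconds;
    while (remaining > 0)
        remaining = sleep(remaining);
    return remaining; /* 0 once the full duration has elapsed */
}
```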
Fedora 14 uses a glibc with that change; I don't know about other
distros (I use Arch Linux, and I've had no issues).
There is definitely a bug lurking in unlinked files - but it appears
hard for some people to reproduce, and easier for others. If you can
find a way to reproduce it and supply stack traces/steps I'd love to get
a handle on this one.
Seth
PS. Meanwhile consider reporting a bug/adding to #108?
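For anyone trying to reproduce: one classic pattern that exercises the unlinked-file path is keeping a file descriptor open after the name has been unlinked and continuing to do I/O through it; the final close then releases the last reference to the inode. A minimal sketch (the path and data are arbitrary; to stress zfs-fuse, point it at a file on the pool):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write through an fd whose name has already been unlinked, then
 * read the data back; close() releases the last reference to the
 * unlinked inode. Returns 0 on success, -1 on any failure. */
int unlinked_io(const char *path)
{
    char buf[16] = {0};
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    if (unlink(path) != 0)          /* name is gone, inode stays live */
        return -1;
    if (write(fd, "still here", 10) != 10)
        return -1;
    if (pread(fd, buf, 10, 0) != 10)
        return -1;
    if (strcmp(buf, "still here") != 0)
        return -1;
    return close(fd);               /* triggers the release path */
}
```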
Ok, congratulations, you found a new bug, which is quite rare these days (yeah, I am sure you would have preferred not to find one!).
The good news is that this part of the code was updated for Solaris a long time ago, and it's been in my branch for ages (you know, the one no one uses these days!).
If it works, it would be quite ironic; I have been using this for months now...
By the way, avoid running zpool upgrade -a with this version, or you won't be able to use your old rpm afterwards.
On 11/17/2010 10:26 PM, Emmanuel Anne wrote:
> Ok, congratulations, you found a new bug, which is quite rare these days (yeah, I am sure you would have preferred not to find one!).
That would make some four or five people who 'found' it all together over the last 4 months. It's in the tracker, you know, and #29 is about 80 issues back in history, just for the record.
error = VOP_CLOSE(info->vp, info->flags, 1, (offset_t) 0, &cred, NULL);
if (error)
{
    syslog(LOG_WARNING, "zfsfuse_release: stale inode (%s)?", strerror(error));
} else
{
    VN_RELE(info->vp);
    kmem_cache_free(file_info_cache, info);
}
On 11/18/2010 12:23 AM, sgheeren wrote:
> Hi Dustin, I had you confused with Jan, sorry
> I will cross-post at issue #108 for Jan to evaluate.
Yes yes, I included this 43 fix in a batch of patches I merged, maybe I should have looked twice, but I might have made the mistake too.
Anyway, please stop calling my branch "unstable".
Well, for what it's worth, I'm using your branch.
I upgraded to version 26 without problems.
That's because:
a) I'm interested in faster snapshot removal;
b) I would like to help the porting (doing the boring things: testing,
stressing, and so on).
Well, I'm using zfs-fuse for my /home.
I back up often, and compare the contents every day.
Also, if anybody has clues about an fs stress/testing suite, you're welcome!
I can only scream my big big big *thank you* to all the people working on this project.
Thanks again,
Andrea
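On the stress-suite question above: tools like fsx or fsstress are the usual suspects, but even a crude home-grown loop can shake things out. A minimal sketch (the directory, file count, and write size are arbitrary assumptions; point `dir` at a zfs-fuse mount to exercise the pool):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Crude filesystem stress loop: repeatedly create, write, fsync,
 * and delete small files under `dir`. Returns 0 on success. */
int stress_files(const char *dir, int iterations)
{
    char path[512], data[4096];
    memset(data, 'x', sizeof data);
    for (int i = 0; i < iterations; i++) {
        /* cycle through 16 file names so creates overwrite deletes */
        snprintf(path, sizeof path, "%s/stress.%d", dir, i % 16);
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0)
            return -1;
        if (write(fd, data, sizeof data) != (ssize_t)sizeof data) {
            close(fd);
            return -1;
        }
        if (fsync(fd) != 0 || close(fd) != 0)
            return -1;
        if (i % 3 == 0 && unlink(path) != 0)  /* delete some as we go */
            return -1;
    }
    return 0;
}
```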
Well, the dynamic duo is working harder.
I join in your appreciation.
Thanks to you both,
Andrea
> Yes yes, I included this 43 fix in a batch of patches I merged, maybe I should have looked twice, but I might have made the mistake too.
> Anyway, please stop calling my branch "unstable".
> Emmanuel
I'm very happy to see you helping out with a few of the issues. Thanks for that[0].
Still, I don't think the discussion on whether your branch is superior and should not have been forgotten is relevant: it isn't forgotten anyway[1]
Right now it might seem as though you are in denial, unpleasantly surprised to find that there are problems and trying to convince yourself that the problems don't exist with your version. I suggest you spend some time with the tracker to properly dispel that dream :)
Well, I like to keep everything in it because I snapshot every 5 minutes.
That's important for me, because I do a lot of hibernate/resume, and sometimes
I have problems with resume that corrupt the config of my "live" apps.¹
Usually I have no problems with performance.
Things only get really slow with dedup and/or gzip enabled.
Is there anyone else on the list using zfs-fuse in production?
Thanks a lot for your precious work,
Andrea
---------------------
¹ Things are a little bit more complicated, but I don't want to bother
you with details. That's not related to zfs-fuse.
--
Xavier,
thanks a lot for your quick reply.
> Maybe if performance is not good enough, we'll try to look into Nexenta
> Core or zfsonlinux, but for the moment it is good, even if the overall
> performance is not marvellous
Well, of course a native kernel port of ZFS can be much faster
than zfs-fuse. The problem (or at least, my problem) is that they usually
require a 64-bit processor and a lot of RAM. zfs-fuse instead can work on
any common hardware. Slower, but it works.
In my daily work at the keyboard I need snapshot capabilities
and complete data checksumming more than anything else.
By the way, Emmanuel, that's the reason I still haven't played with or
benchmarked zfsonlinux... At the moment I don't have a spare 64-bit machine
full of RAM to play on.
So my urge is to expand the zfs-fuse community, because only a large enough
user base can trigger all the bugs and problems.
I'm also evaluating the idea of finding a sponsor/financer for the work of
the dynamic duo. But that's nothing more than a wish, right now.
Thanks again,
Andrea
--
> [...]
> That's the way it is, if you stop the train, you risk not fixing
> a few bugs; it has already happened before, and I guess it could
> happen again..
Not a chance. Note how there aren't any changes coming out any more :)
I agree with your analysis of upstream development, I just don't see why
that means we absolutely need to copy it. If people want, they can run
your branch. Like I said, I'm all 'pro' a rename of that pre-testing
branch.
Note that unstable will be the new testing anyway, and barring other
sources of development, there is no need for a 'pre-test' branch
[whatever the name] for a while.
Seth
On 18-11-2010 11:59, Emmanuel Anne wrote:
> You miss the point, it's not about being superior.
By the way, did you choose not to address the issue at hand? I'm having trouble figuring out whether you simply agree or haven't read it.
I have been using the testing build for over 36 hours now with the same activity that used to crash it. So far, no crashes!
BTW this is an 8-core Opteron @ 2 GHz, with a 6-way RAIDZ1 of 500 GB drives. It's no screamer, but I could certainly help test.
I'm ready to label it 0.7.0 if Dustin Ward reports ok on #108 too.