Future of KQStor ZFS?


robsee

Apr 30, 2011, 5:25:13 PM
to KQStor ZFS Discussion
Hi,

The github repositories haven't seen any significant updates in the
last two months, so I just went to check out the website and saw that
KQInfotech has been purchased by another company. Does anyone have any
information on what this means for this project ? Can we expect this
to remain a well maintained open source project, or should people
looking for a stable implementation of ZFS look somewhere else ?

Thanks,
-Rob

Hugues Talbot

May 1, 2011, 1:34:40 PM
to kqstor-zf...@googlegroups.com
Yes, this is pretty bad.

There is another open-source ZFS implementation:

http://zfsonlinux.org/

I haven't tested it, but the KQ implementation was based on this one.

Jason J. W. Williams

May 1, 2011, 3:11:00 PM
to kqstor-zf...@googlegroups.com
The zfsonlinux.org lists are pretty active. We got better insight over there when we were investigating moving off OpenSolaris ZFS. I think they're pretty close to a release, and it seems like they've got a less hacky replacement for the POSIX layer than the KQ stuff.

-H

Sent via iPhone


Gordan Bobic

May 2, 2011, 4:22:57 AM
to kqstor-zf...@googlegroups.com
Yeah, I'm planning to switch to the zfsonlinux.org implementation too, as
soon as 0.6 comes out of the RC stage. I've been fighting an interesting
issue on the KQ implementation that seems to result in the kernel
locking up hard as soon as the pool is imported if both deduplication
and compression are enabled on a volume. Since the kernel crash happens
pretty much immediately, it's not possible to get the data back, except
perhaps by installing OpenSolaris and using its implementation to get
the pool assembled, but I haven't tried that (thankfully I didn't have
to because I had backups).

I'm not sure whether it's related to the flags being changed during file
copying to ZFS, or whether merely having dedup and compression enabled
together causes it, but copying a few hundred GB of files while twiddling
the dedup and compression flags and then kicking off a scrub leads to the
crash described above. A rough sketch of the sequence follows below.
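
For anyone who wants to try to reproduce it, the sequence boils down to
roughly the following. This is only a rough sketch driving the standard
zfs/zpool command-line tools from Python; the pool and dataset names
("tank", "tank/data") and the source path are placeholders, not my actual
setup.

import subprocess

POOL = "tank"           # hypothetical pool name
DATASET = "tank/data"   # hypothetical dataset that gets dedup + compression

def run(*cmd):
    # Echo and execute a command, aborting on failure.
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Turn both features on for the dataset.
run("zfs", "set", "dedup=on", DATASET)
run("zfs", "set", "compression=on", DATASET)

# Copy a large tree in while flipping the flags between passes.
# /some/large/source is a placeholder; I was copying a few hundred GB.
for i in range(3):
    run("rsync", "-a", "/some/large/source/", "/" + DATASET + "/")
    state = "off" if i % 2 else "on"
    run("zfs", "set", "compression=" + state, DATASET)
    run("zfs", "set", "dedup=" + state, DATASET)

# Kicking off the scrub is what triggered the hard lockup for me;
# after that, re-importing the pool locks the kernel up again.
run("zpool", "scrub", POOL)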

I'm hoping the problem will disappear on the other implementation, but
for now I'm remaining paranoid about the backups.

Gordan
