zfs hangs my system


mgiammarco

Feb 10, 2011, 3:17:30 PM
to KQStor ZFS Discussion
Hello,
I have a test server where I was using zfs-fuse.

It was slow but it worked.

Now I have switched to kqzfs, but I have two problems:

- the first, obvious one: if I import zfs-fuse pools I get the same
problems as people who try to import BSD or Solaris pools (chmod
000...);

- the second, more serious one: if I run a bonnie++ benchmark (with
a new pool created with kqzfs) with size 4000 it completes the test,
but with size 30000 it hangs the system (kernel oops).

I have checked the system RAM and it is OK. With zfs-fuse all benchmarks
are fine.
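
For reference, the runs are along these lines (the mount point and user here
are only placeholders, the sizes in MB are the ones above):

bonnie++ -d /tank/test -s 4000 -u root    -- completes
bonnie++ -d /tank/test -s 30000 -u root   -- hangs the system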

What can I do?

Thanks,
Mario

Massimo Maggi

Feb 10, 2011, 3:23:30 PM
to kqstor-zf...@googlegroups.com
The first thing you can do is to post here all of the information
that you have about the error, exact text included!
At worst a photo of the screen is better than nothing, but only if you really
can't save the log with dmesg >logfiletoposthere.
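
For example, right after the problem shows up you could save the kernel
messages with something like this (the file names are just placeholders):

dmesg > /tmp/zfs-oops.txt
tail -n 200 /var/log/kern.log > /tmp/zfs-oops-kernlog.txt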

Massimo Maggi

Vikram jadhav

Feb 10, 2011, 3:35:01 PM
to kqstor-zf...@googlegroups.com
Hi,
The first problem (chmod 000...) might be caused by the different versions of the zfs-fuse and KQ ZFS file systems.

You can upgrade the zpool and the zfs file systems to the current version after importing the pool created by zfs-fuse,

e.g:
#zpool upgrade -V 28 <poolname>
#zfs upgrade -V 5 <fsname>
#zfs upgrade -V 5 <poolname>
#zfs get all <fsname> -- to check version

It might solve your first problem.
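
To confirm the versions before and after the upgrade, something along these
lines should work (the pool and file system names are placeholders):

#zpool get version <poolname>
#zfs get version <fsname>
#zpool upgrade    -- lists imported pools that are not at the latest version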

mgiammarco

Feb 21, 2011, 7:59:58 AM
to KQStor ZFS Discussion
I have upgraded the pool and the zfs file systems following your instructions.

Now the permissions of old files are maintained. New files are still created
with 000 permissions, but now I can change them, and even with
permission 000 I can read and write the files.
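
To reset the files that were already created with mode 000 I can do something
like this (the path is a placeholder):

find /tank/fs -type f -perm 000 -exec chmod 644 {} +
find /tank/fs -type d -perm 000 -exec chmod 755 {} +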

On 10 Feb, 21:35, Vikram jadhav <vik...@kqinfotech.com> wrote:
> Hi,
> The first problem (chmod 000...) might be caused by the different versions of
> the zfs-fuse and KQ ZFS file systems.
>
> You can upgrade the zpool and the zfs file systems to the current version
> after importing the pool created by zfs-fuse,
>
> e.g:
> #zpool upgrade -V 28 <poolname>
> #zfs upgrade -V 5 <fsname>
> #zfs upgrade -V 5 <poolname>
> #zfs get all <fsname> -- to check version
>
> It might solve your first problem.
>

mgiammarco

Feb 21, 2011, 7:58:18 AM
to KQStor ZFS Discussion
Well, I cannot give you the stack trace of a full system hang.

Anyway, I have done some tests, and it seems that to complete the tests you
should not have any zfs-fuse filesystem imported.
Even if you do not use it, it causes problems.
With only two pools created with kqzfs I was able to finish several
bonnie++ benchmarks.
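
To make sure no zfs-fuse pool was still imported before benchmarking, something
like this works (the pool name is a placeholder):

#zpool list
#zpool export <fusepoolname>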
But I still have this worrying log in syslog (where sda3 is a kqzfs
pool):

Feb 20 21:59:23 player1 kernel: [117413.565020] sda3: rw=0, want=36028797018658841, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565029] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565034] sda3: rw=0, want=36028797018830065, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565046] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565052] sda3: rw=0, want=36028797018847101, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565066] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565071] sda3: rw=0, want=36028797018851037, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565095] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565100] sda3: rw=0, want=36028797018853341, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565131] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565137] sda3: rw=0, want=36028797018853597, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565150] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565155] sda3: rw=0, want=36028797018853693, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565175] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565181] sda3: rw=0, want=36028797018869821, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565194] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565200] sda3: rw=0, want=36028797018869981, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565219] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565225] sda3: rw=0, want=36028797018874597, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565254] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565260] sda3: rw=0, want=36028797018874829, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565279] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565285] sda3: rw=0, want=36028797018875085, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565305] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565310] sda3: rw=0, want=36028797018875341, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565323] attempt to access beyond end of device
Feb 20 21:59:23 player1 kernel: [117413.565328] sda3: rw=0, want=36028797018875437, limit=70348635
Feb 20 21:59:23 player1 kernel: [117413.565337] attempt to access beyond end of device


and this log:

Feb 20 22:02:33 player1 kernel: [117603.600105] INFO: task txg_quiesce:2404 blocked for more than 120 seconds.
Feb 20 22:02:33 player1 kernel: [117603.600123] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 20 22:02:33 player1 kernel: [117603.600141] txg_quiesce D 00000000ffffffff 0 2404 2 0x00000000
Feb 20 22:02:33 player1 kernel: [117603.600147] ffff880016299d70 0000000000000046 ffff880016182dc0 0000000000015980
Feb 20 22:02:33 player1 kernel: [117603.600152] ffff880016299fd8 0000000000015980 ffff880016299fd8 ffff880016182dc0
Feb 20 22:02:33 player1 kernel: [117603.600156] 0000000000015980 0000000000015980 ffff880016299fd8 0000000000015980
Feb 20 22:02:33 player1 kernel: [117603.600161] Call Trace:
Feb 20 22:02:33 player1 kernel: [117603.600171] [<ffffffff8107f2a0>] ? prepare_to_wait_exclusive+0x60/0x90
Feb 20 22:02:33 player1 kernel: [117603.600193] [<ffffffffa0e0efa7>] cv_wait_common+0x97/0xd0 [spl]
Feb 20 22:02:33 player1 kernel: [117603.600197] [<ffffffff8107f080>] ? autoremove_wake_function+0x0/0x40
Feb 20 22:02:33 player1 kernel: [117603.600215] [<ffffffffa0e0f013>] __cv_wait+0x13/0x20 [spl]
Feb 20 22:02:33 player1 kernel: [117603.600281] [<ffffffffa0eda84d>] txg_thread_wait+0x1d/0x40 [zfs]
Feb 20 22:02:33 player1 kernel: [117603.600320] [<ffffffffa0edafcb>] txg_quiesce_thread+0x24b/0x250 [zfs]
Feb 20 22:02:33 player1 kernel: [117603.600360] [<ffffffffa0edad80>] ? txg_quiesce_thread+0x0/0x250 [zfs]
Feb 20 22:02:33 player1 kernel: [117603.600370] [<ffffffffa0e0b0b8>] thread_generic_wrapper+0x78/0x90 [spl]
Feb 20 22:02:33 player1 kernel: [117603.600380] [<ffffffffa0e0b040>] ? thread_generic_wrapper+0x0/0x90 [spl]
Feb 20 22:02:33 player1 kernel: [117603.600384] [<ffffffff8107eb26>] kthread+0x96/0xa0
Feb 20 22:02:33 player1 kernel: [117603.600388] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10
Feb 20 22:02:33 player1 kernel: [117603.600392] [<ffffffff8107ea90>] ? kthread+0x0/0xa0
Feb 20 22:02:33 player1 kernel: [117603.600396] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10
Feb 20 22:02:33 player1 kernel: [117603.600422] INFO: task bonnie++:8356 blocked for more than 120 seconds.
Feb 20 22:02:33 player1 kernel: [117603.600435] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 20 22:02:33 player1 kernel: [117603.600451] bonnie++ D 00000000ffffffff 0 8356 3404 0x00000000
Feb 20 22:02:33 player1 kernel: [117603.600456] ffff88002ac89a98 0000000000000086 0000000000000003 0000000000015980
Feb 20 22:02:33 player1 kernel: [117603.600461] ffff88002ac89fd8 0000000000015980 ffff88002ac89fd8 ffff8800205c5b80
Feb 20 22:02:33 player1 kernel: [117603.600465] 0000000000015980 0000000000015980 ffff88002ac89fd8 0000000000015980
Feb 20 22:02:33 player1 kernel: [117603.600469] Call Trace:
Feb 20 22:02:33 player1 kernel: [117603.600473] [<ffffffff8107f2a0>] ? prepare_to_wait_exclusive+0x60/0x90
Feb 20 22:02:33 player1 kernel: [117603.600484] [<ffffffffa0e0efa7>] cv_wait_common+0x97/0xd0 [spl]
Feb 20 22:02:33 player1 kernel: [117603.600488] [<ffffffff8107f080>] ? autoremove_wake_function+0x0/0x40
Feb 20 22:02:33 player1 kernel: [117603.600493] [<ffffffff8104d203>] ? __wake_up+0x53/0x70
Feb 20 22:02:33 player1 kernel: [117603.600503] [<ffffffffa0e0f013>] __cv_wait+0x13/0x20 [spl]
Feb 20 22:02:33 player1 kernel: [117603.600545] [<ffffffffa0eda5ab>] txg_wait_open+0x7b/0xa0 [zfs]
Feb 20 22:02:33 player1 kernel: [117603.600578] [<ffffffffa0ea0e4d>] dmu_tx_wait+0xed/0xf0 [zfs]
Feb 20 22:02:33 player1 kernel: [117603.600619] [<ffffffffa0f089cd>] zfs_create+0x28d/0x6d0 [zfs]
Feb 20 22:02:33 player1 kernel: [117603.600626] [<ffffffffa0c5a480>] lzfs_vnop_create+0x120/0x330 [lzfs]
Feb 20 22:02:33 player1 kernel: [117603.600631] [<ffffffff8115e84c>] vfs_create+0xac/0xd0
Feb 20 22:02:33 player1 kernel: [117603.600634] [<ffffffff8115e934>] __open_namei_create+0xc4/0x110
Feb 20 22:02:33 player1 kernel: [117603.600638] [<ffffffff8115f898>] do_last+0x3f8/0x460
Feb 20 22:02:33 player1 kernel: [117603.600642] [<ffffffff8116185b>] do_filp_open+0x21b/0x660
Feb 20 22:02:33 player1 kernel: [117603.600648] [<ffffffff8159e7ee>] ? _raw_spin_lock+0xe/0x20
Feb 20 22:02:33 player1 kernel: [117603.600665] [<ffffffff8113e518>] ? add_partial+0x58/0x90
Feb 20 22:02:33 player1 kernel: [117603.600702] [<ffffffff8159e7ee>] ? _raw_spin_lock+0xe/0x20
Feb 20 22:02:33 player1 kernel: [117603.600708] [<ffffffff812b619d>] ? _atomic_dec_and_lock+0x4d/0x80
Feb 20 22:02:33 player1 kernel: [117603.600712] [<ffffffff8116ce4a>] ? alloc_fd+0x10a/0x150
Feb 20 22:02:33 player1 kernel: [117603.600718] [<ffffffff81151009>] do_sys_open+0x69/0x170
Feb 20 22:02:33 player1 kernel: [117603.600723] [<ffffffff81151150>] sys_open+0x20/0x30
Feb 20 22:02:33 player1 kernel: [117603.600728] [<ffffffff8100a0f2>] system_call_fastpath+0x16/0x1b
Feb 20 22:04:35 player1 kernel: [117723.600067] INFO: task txg_quiesce:2404 blocked for more than 120 seconds.
Feb 20 22:04:35 player1 kernel: [117723.600086] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 20 22:04:35 player1 kernel: [117723.600102] txg_quiesce D 00000000ffffffff 0 2404 2 0x00000000
Feb 20 22:04:35 player1 kernel: [117723.600108] ffff880016299d70 0000000000000046 ffff880016182dc0 0000000000015980
Feb 20 22:04:35 player1 kernel: [117723.600114] ffff880016299fd8 0000000000015980 ffff880016299fd8 ffff880016182dc0
Feb 20 22:04:35 player1 kernel: [117723.600118] 0000000000015980 0000000000015980 ffff880016299fd8 0000000000015980
Feb 20 22:04:35 player1 kernel: [117723.600122] Call Trace:
Feb 20 22:04:35 player1 kernel: [117723.600132] [<ffffffff8107f2a0>] ? prepare_to_wait_exclusive+0x60/0x90
Feb 20 22:04:35 player1 kernel: [117723.600153] [<ffffffffa0e0efa7>] cv_wait_common+0x97/0xd0 [spl]
Feb 20 22:04:35 player1 kernel: [117723.600158] [<ffffffff8107f080>] ? autoremove_wake_function+0x0/0x40
Feb 20 22:04:35 player1 kernel: [117723.600169] [<ffffffffa0e0f013>] __cv_wait+0x13/0x20 [spl]
Feb 20 22:04:35 player1 kernel: [117723.600232] [<ffffffffa0eda84d>] txg_thread_wait+0x1d/0x40 [zfs]
Feb 20 22:04:35 player1 kernel: [117723.600271] [<ffffffffa0edafcb>] txg_quiesce_thread+0x24b/0x250 [zfs]
Feb 20 22:04:35 player1 kernel: [117723.600311] [<ffffffffa0edad80>] ? txg_quiesce_thread+0x0/0x250 [zfs]
Feb 20 22:04:35 player1 kernel: [117723.600321] [<ffffffffa0e0b0b8>] thread_generic_wrapper+0x78/0x90 [spl]
Feb 20 22:04:35 player1 kernel: [117723.600331] [<ffffffffa0e0b040>] ? thread_generic_wrapper+0x0/0x90 [spl]
Feb 20 22:04:35 player1 kernel: [117723.600335] [<ffffffff8107eb26>] kthread+0x96/0xa0
Feb 20 22:04:35 player1 kernel: [117723.600340] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10
Feb 20 22:04:35 player1 kernel: [117723.600343] [<ffffffff8107ea90>] ? kthread+0x0/0xa0
Feb 20 22:04:35 player1 kernel: [117723.600347] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10


...


Feb 20 23:09:00 player1 kernel: [121590.807580] SPL: Showing stack for process 22115
Feb 20 23:09:00 player1 kernel: [121590.807590] Pid: 22115, comm: zfs Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 20 23:09:00 player1 kernel: [121590.807595] Call Trace:
Feb 20 23:09:00 player1 kernel: [121590.807626] [<ffffffffa0e065c7>] spl_debug_dumpstack+0x27/0x40 [spl]
Feb 20 23:09:00 player1 kernel: [121590.807646] [<ffffffffa0e0a63d>] kmem_alloc_debug+0x11d/0x130 [spl]
Feb 20 23:09:00 player1 kernel: [121590.807743] [<ffffffffa0ef9a2e>] zfs_prop_set_special+0x23e/0x3c0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.807757] [<ffffffffa0e77183>] ? zprop_get_proptable+0x13/0x30 [zcommon]
Feb 20 23:09:00 player1 kernel: [121590.807830] [<ffffffffa0efcd72>] zfs_set_prop_nvlist+0x192/0x6d0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.807903] [<ffffffffa0efe68b>] zfs_ioc_set_prop+0x7b/0x140 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.807973] [<ffffffffa0ed15d3>] ? spa_open+0x13/0x20 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.808044] [<ffffffffa0efc22f>] zfsdev_ioctl+0xef/0x1c0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.808053] [<ffffffff81162e1d>] vfs_ioctl+0x3d/0xd0
Feb 20 23:09:00 player1 kernel: [121590.808060] [<ffffffff811635b1>] do_vfs_ioctl+0x81/0x3d0
Feb 20 23:09:00 player1 kernel: [121590.808068] [<ffffffff815a2569>] ? do_page_fault+0x159/0x350
Feb 20 23:09:00 player1 kernel: [121590.808074] [<ffffffff81163981>] sys_ioctl+0x81/0xa0
Feb 20 23:09:00 player1 kernel: [121590.808084] [<ffffffff8100a0f2>] system_call_fastpath+0x16/0x1b
Feb 20 23:09:00 player1 kernel: [121590.808935] SPL: Showing stack for process 22115
Feb 20 23:09:00 player1 kernel: [121590.808944] Pid: 22115, comm: zfs Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 20 23:09:00 player1 kernel: [121590.808948] Call Trace:
Feb 20 23:09:00 player1 kernel: [121590.808976] [<ffffffffa0e065c7>] spl_debug_dumpstack+0x27/0x40 [spl]
Feb 20 23:09:00 player1 kernel: [121590.808994] [<ffffffffa0e0a63d>] kmem_alloc_debug+0x11d/0x130 [spl]
Feb 20 23:09:00 player1 kernel: [121590.809090] [<ffffffffa0ef9a2e>] zfs_prop_set_special+0x23e/0x3c0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.809104] [<ffffffffa0e77183>] ? zprop_get_proptable+0x13/0x30 [zcommon]
Feb 20 23:09:00 player1 kernel: [121590.809177] [<ffffffffa0efcd72>] zfs_set_prop_nvlist+0x192/0x6d0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.809249] [<ffffffffa0efe68b>] zfs_ioc_set_prop+0x7b/0x140 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.809319] [<ffffffffa0ed15d3>] ? spa_open+0x13/0x20 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.809390] [<ffffffffa0efc22f>] zfsdev_ioctl+0xef/0x1c0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.809399] [<ffffffff81162e1d>] vfs_ioctl+0x3d/0xd0
Feb 20 23:09:00 player1 kernel: [121590.809406] [<ffffffff811635b1>] do_vfs_ioctl+0x81/0x3d0
Feb 20 23:09:00 player1 kernel: [121590.811525] [<ffffffff81163981>] sys_ioctl+0x81/0xa0
Feb 20 23:09:00 player1 kernel: [121590.811559] [<ffffffff8100a0f2>] system_call_fastpath+0x16/0x1b
Feb 20 23:09:00 player1 kernel: [121590.812357] SPL: Showing stack for process 22115
Feb 20 23:09:00 player1 kernel: [121590.812365] Pid: 22115, comm: zfs Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 20 23:09:00 player1 kernel: [121590.812369] Call Trace:
Feb 20 23:09:00 player1 kernel: [121590.812397] [<ffffffffa0e065c7>] spl_debug_dumpstack+0x27/0x40 [spl]
Feb 20 23:09:00 player1 kernel: [121590.812416] [<ffffffffa0e0a63d>] kmem_alloc_debug+0x11d/0x130 [spl]
Feb 20 23:09:00 player1 kernel: [121590.812513] [<ffffffffa0ef9a2e>] zfs_prop_set_special+0x23e/0x3c0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.812528] [<ffffffffa0e77183>] ? zprop_get_proptable+0x13/0x30 [zcommon]
Feb 20 23:09:00 player1 kernel: [121590.812600] [<ffffffffa0efcd72>] zfs_set_prop_nvlist+0x192/0x6d0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.812700] [<ffffffffa0efe68b>] zfs_ioc_set_prop+0x7b/0x140 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.812770] [<ffffffffa0ed15d3>] ? spa_open+0x13/0x20 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.812841] [<ffffffffa0efc22f>] zfsdev_ioctl+0xef/0x1c0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.812850] [<ffffffff81162e1d>] vfs_ioctl+0x3d/0xd0
Feb 20 23:09:00 player1 kernel: [121590.812857] [<ffffffff811635b1>] do_vfs_ioctl+0x81/0x3d0
Feb 20 23:09:00 player1 kernel: [121590.812864] [<ffffffff81163981>] sys_ioctl+0x81/0xa0
Feb 20 23:09:00 player1 kernel: [121590.812874] [<ffffffff8100a0f2>] system_call_fastpath+0x16/0x1b
Feb 20 23:09:00 player1 kernel: [121590.814397] SPL: Showing stack for process 22115
Feb 20 23:09:00 player1 kernel: [121590.814406] Pid: 22115, comm: zfs Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 20 23:09:00 player1 kernel: [121590.814411] Call Trace:
Feb 20 23:09:00 player1 kernel: [121590.814441] [<ffffffffa0e065c7>] spl_debug_dumpstack+0x27/0x40 [spl]
Feb 20 23:09:00 player1 kernel: [121590.814459] [<ffffffffa0e0a63d>] kmem_alloc_debug+0x11d/0x130 [spl]
Feb 20 23:09:00 player1 kernel: [121590.814558] [<ffffffffa0ef9a2e>] zfs_prop_set_special+0x23e/0x3c0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.814572] [<ffffffffa0e77183>] ? zprop_get_proptable+0x13/0x30 [zcommon]
Feb 20 23:09:00 player1 kernel: [121590.814644] [<ffffffffa0efcd72>] zfs_set_prop_nvlist+0x192/0x6d0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.814717] [<ffffffffa0efe68b>] zfs_ioc_set_prop+0x7b/0x140 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.814787] [<ffffffffa0ed15d3>] ? spa_open+0x13/0x20 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.814858] [<ffffffffa0efc22f>] zfsdev_ioctl+0xef/0x1c0 [zfs]
Feb 20 23:09:00 player1 kernel: [121590.814867] [<ffffffff81162e1d>] vfs_ioctl+0x3d/0xd0
Feb 20 23:09:00 player1 kernel: [121590.814874] [<ffffffff811635b1>] do_vfs_ioctl+0x81/0x3d0
Feb 20 23:09:00 player1 kernel: [121590.814880] [<ffffffff81163981>] sys_ioctl+0x81/0xa0
Feb 20 23:09:00 player1 kernel: [121590.814890] [<ffffffff8100a0f2>] system_call_fastpath+0x16/0x1b


....




Feb 21 00:57:23 player1 kernel: [ 6242.745030] INFO: task zio_write_issue:2310 blocked for more than 120 seconds.
Feb 21 00:57:23 player1 kernel: [ 6242.748551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 21 00:57:23 player1 kernel: [ 6242.752093] zio_write_iss D 00000000ffffffff 0 2310 2 0x00000000
Feb 21 00:57:23 player1 kernel: [ 6242.752100] ffff880017fad660 0000000000000046 0000000000000003 0000000000015980
Feb 21 00:57:23 player1 kernel: [ 6242.752108] ffff880017fadfd8 0000000000015980 ffff880017fadfd8 ffff880017fc0000
Feb 21 00:57:23 player1 kernel: [ 6242.752116] 0000000000015980 0000000000015980 ffff880017fadfd8 0000000000015980
Feb 21 00:57:23 player1 kernel: [ 6242.752123] Call Trace:
Feb 21 00:57:23 player1 kernel: [ 6242.752130] [<ffffffff8107f2a0>] ? prepare_to_wait_exclusive+0x60/0x90
Feb 21 00:57:23 player1 kernel: [ 6242.752151] [<ffffffffa0e25fa7>] cv_wait_common+0x97/0xd0 [spl]
Feb 21 00:57:23 player1 kernel: [ 6242.752158] [<ffffffff8107f080>] ? autoremove_wake_function+0x0/0x40
Feb 21 00:57:23 player1 kernel: [ 6242.752166] [<ffffffff8104d203>] ? __wake_up+0x53/0x70
Feb 21 00:57:23 player1 kernel: [ 6242.752184] [<ffffffffa0e26013>] __cv_wait+0x13/0x20 [spl]
Feb 21 00:57:23 player1 kernel: [ 6242.752256] [<ffffffffa0ee45ab>] txg_wait_open+0x7b/0xa0 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752314] [<ffffffffa0eaae4d>] dmu_tx_wait+0xed/0xf0 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752372] [<ffffffffa0eaaf0a>] dmu_tx_assign+0x6a/0x410 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752444] [<ffffffffa0f158af>] zfs_inactive+0xdf/0x210 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752453] [<ffffffffa0b77534>] lzfs_clear_vnode+0x104/0x160 [lzfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752462] [<ffffffff8116abc5>] clear_inode+0x75/0xf0
Feb 21 00:57:23 player1 kernel: [ 6242.752469] [<ffffffff8116ac80>] dispose_list+0x40/0x150
Feb 21 00:57:23 player1 kernel: [ 6242.752476] [<ffffffff8116af23>] prune_icache+0x193/0x2a0
Feb 21 00:57:23 player1 kernel: [ 6242.752484] [<ffffffff8116b081>] shrink_icache_memory+0x51/0x60
Feb 21 00:57:23 player1 kernel: [ 6242.752493] [<ffffffff8110e6d4>] shrink_slab+0x124/0x180
Feb 21 00:57:23 player1 kernel: [ 6242.752499] [<ffffffff8110ff0f>] do_try_to_free_pages+0x1cf/0x360
Feb 21 00:57:23 player1 kernel: [ 6242.752506] [<ffffffff8111024b>] try_to_free_pages+0x6b/0x70
Feb 21 00:57:23 player1 kernel: [ 6242.752514] [<ffffffff8110740a>] __alloc_pages_slowpath+0x27a/0x590
Feb 21 00:57:23 player1 kernel: [ 6242.752522] [<ffffffff81107884>] __alloc_pages_nodemask+0x164/0x1d0
Feb 21 00:57:23 player1 kernel: [ 6242.752530] [<ffffffff811397ba>] alloc_pages_current+0x9a/0x100
Feb 21 00:57:23 player1 kernel: [ 6242.752538] [<ffffffff8112c48e>] __vmalloc_area_node+0xee/0x190
Feb 21 00:57:23 player1 kernel: [ 6242.752605] [<ffffffffa0ecbf28>] ? z_compress_level+0x78/0x100 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752613] [<ffffffff8112c392>] __vmalloc_node+0xa2/0xb0
Feb 21 00:57:23 player1 kernel: [ 6242.752679] [<ffffffffa0ecbf28>] ? z_compress_level+0x78/0x100 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752687] [<ffffffff8112c6ec>] vmalloc+0x2c/0x30
Feb 21 00:57:23 player1 kernel: [ 6242.752753] [<ffffffffa0ecbf28>] z_compress_level+0x78/0x100 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752812] [<ffffffffa0eae30b>] ? dnode_diduse_space+0x6b/0xa0 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752879] [<ffffffffa0ecbfef>] gzip_compress+0x3f/0x80 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.752949] [<ffffffffa0f226df>] zio_compress_data+0x9f/0xe0 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.753020] [<ffffffffa0f1e735>] zio_write_bp_init+0x235/0x500 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.753090] [<ffffffffa0f20b7a>] zio_execute+0x7a/0xb0 [zfs]
Feb 21 00:57:23 player1 kernel: [ 6242.753109] [<ffffffffa0e22726>] taskq_thread+0x1a6/0x310 [spl]
Feb 21 00:57:23 player1 kernel: [ 6242.753116] [<ffffffff81056c10>] ? default_wake_function+0x0/0x20
Feb 21 00:57:23 player1 kernel: [ 6242.753133] [<ffffffffa0e22580>] ? taskq_thread+0x0/0x310 [spl]
Feb 21 00:57:23 player1 kernel: [ 6242.753140] [<ffffffff8107eb26>] kthread+0x96/0xa0
Feb 21 00:57:23 player1 kernel: [ 6242.753147] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10
Feb 21 00:57:23 player1 kernel: [ 6242.753153] [<ffffffff8107ea90>] ? kthread+0x0/0xa0
Feb 21 00:57:23 player1 kernel: [ 6242.753160] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10



I hope this is useful to you.

mgiammarco

Feb 27, 2011, 7:22:20 AM
to KQStor ZFS Discussion
Oops, look at my kern.log:


Feb 27 12:12:14 player1 kernel: [83518.500835] Pid: 2533, comm:
txg_sync Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 27 12:12:14 player1 kernel: [83518.500839] Call Trace:
Feb 27 12:12:14 player1 kernel: [83518.500849] [<ffffffff810521b6>]
__schedule_bug+0x66/0x70
Feb 27 12:12:14 player1 kernel: [83518.500857] [<ffffffff8159c86e>]
schedule+0x7ee/0x830
Feb 27 12:12:14 player1 kernel: [83518.500865] [<ffffffff8159cc15>]
schedule_timeout+0x195/0x310
Feb 27 12:12:14 player1 kernel: [83518.500875] [<ffffffff8106ffd0>] ?
process_timeout+0x0/0x10
Feb 27 12:12:14 player1 kernel: [83518.500897] [<ffffffffa0e10e70>]
__cv_timedwait_common+0xb0/0x110 [spl]
Feb 27 12:12:14 player1 kernel: [83518.500905] [<ffffffff8107f080>] ?
autoremove_wake_function+0x0/0x40
Feb 27 12:12:14 player1 kernel: [83518.500924] [<ffffffffa0e10f03>]
__cv_timedwait+0x13/0x20 [spl]
Feb 27 12:12:14 player1 kernel: [83518.501006] [<ffffffffa0edc864>]
txg_thread_wait+0x34/0x40 [zfs]
Feb 27 12:12:14 player1 kernel: [83518.501076] [<ffffffffa0edcaed>]
txg_sync_thread+0x10d/0x3a0 [zfs]
Feb 27 12:12:14 player1 kernel: [83518.501147] [<ffffffffa0edc9e0>] ?
txg_sync_thread+0x0/0x3a0 [zfs]
Feb 27 12:12:14 player1 kernel: [83518.501177] [<ffffffffa0e0d0b8>]
thread_generic_wrapper+0x78/0x90 [spl]
Feb 27 12:12:14 player1 kernel: [83518.501195] [<ffffffffa0e0d040>] ?
thread_generic_wrapper+0x0/0x90 [spl]
Feb 27 12:12:14 player1 kernel: [83518.501202] [<ffffffff8107eb26>]
kthread+0x96/0xa0
Feb 27 12:12:14 player1 kernel: [83518.501210] [<ffffffff8100aee4>]
kernel_thread_helper+0x4/0x10
Feb 27 12:12:14 player1 kernel: [83518.501217] [<ffffffff8107ea90>] ?
kthread+0x0/0xa0
Feb 27 12:12:14 player1 kernel: [83518.501224] [<ffffffff8100aee0>] ?
kernel_thread_helper+0x0/0x10
Feb 27 12:12:19 player1 kernel: [83523.500074] BUG: scheduling while
atomic: txg_sync/2533/0x00000001
Feb 27 12:12:19 player1 kernel: [83523.500081] Modules linked in:
ip6table_filter ip6_tables lzfs zfs(P) zcommon(P) znvpair(P) zavl(P)
zlib_deflate zunicode(P) spl vmnet vmblock vsock vmci vmmon
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
xt_state nf_$
Feb 27 12:12:19 player1 kernel: [83523.500187] Pid: 2533, comm:
txg_sync Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 27 12:12:19 player1 kernel: [83523.500192] Call Trace:
Feb 27 12:12:19 player1 kernel: [83523.500207] [<ffffffff810521b6>]
__schedule_bug+0x66/0x70
Feb 27 12:12:19 player1 kernel: [83523.500217] [<ffffffff8159c86e>]
schedule+0x7ee/0x830
Feb 27 12:12:19 player1 kernel: [83523.500226] [<ffffffff8107f2a0>] ?
prepare_to_wait_exclusive+0x60/0x90
Feb 27 12:12:19 player1 kernel: [83523.500253] [<ffffffffa0e10fa7>]
cv_wait_common+0x97/0xd0 [spl]
Feb 27 12:12:19 player1 kernel: [83523.500260] [<ffffffff8107f080>] ?
autoremove_wake_function+0x0/0x40
Feb 27 12:12:19 player1 kernel: [83523.500279] [<ffffffffa0e11013>]
__cv_wait+0x13/0x20 [spl]
Feb 27 12:12:19 player1 kernel: [83523.500369] [<ffffffffa0edc84d>]
txg_thread_wait+0x1d/0x40 [zfs]
Feb 27 12:12:19 player1 kernel: [83523.500438] [<ffffffffa0edcb90>]
txg_sync_thread+0x1b0/0x3a0 [zfs]
Feb 27 12:12:19 player1 kernel: [83523.500508] [<ffffffffa0edc9e0>] ?
txg_sync_thread+0x0/0x3a0 [zfs]
Feb 27 12:12:19 player1 kernel: [83523.500527] [<ffffffffa0e0d0b8>]
thread_generic_wrapper+0x78/0x90 [spl]
Feb 27 12:12:19 player1 kernel: [83523.500544] [<ffffffffa0e0d040>] ?
thread_generic_wrapper+0x0/0x90 [spl]
Feb 27 12:12:19 player1 kernel: [83523.500551] [<ffffffff8107eb26>]
kthread+0x96/0xa0
Feb 27 12:12:19 player1 kernel: [83523.500559] [<ffffffff8100aee4>]
kernel_thread_helper+0x4/0x10
Feb 27 12:12:19 player1 kernel: [83523.500566] [<ffffffff8107ea90>] ?
kthread+0x0/0xa0
Feb 27 12:12:19 player1 kernel: [83523.500572] [<ffffffff8100aee0>] ?
kernel_thread_helper+0x0/0x10
Feb 27 12:12:19 player1 kernel: [83523.500737] BUG: scheduling while
atomic: txg_sync/2533/0x00000001
Feb 27 12:12:19 player1 kernel: [83523.500742] Modules linked in:
ip6table_filter ip6_tables lzfs zfs(P) zcommon(P) znvpair(P) zavl(P)
zlib_deflate zunicode(P) spl vmnet vmblock vsock vmci vmmon
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
xt_state nf_$
Feb 27 12:12:19 player1 kernel: [83523.500858] Pid: 2533, comm:
txg_sync Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 27 12:12:19 player1 kernel: [83523.500863] Call Trace:
Feb 27 12:12:19 player1 kernel: [83523.500872] [<ffffffff810521b6>]
__schedule_bug+0x66/0x70
Feb 27 12:12:19 player1 kernel: [83523.500880] [<ffffffff8159c86e>]
schedule+0x7ee/0x830
Feb 27 12:12:19 player1 kernel: [83523.500888] [<ffffffff8159cc15>]
schedule_timeout+0x195/0x310
Feb 27 12:12:19 player1 kernel: [83523.500898] [<ffffffff8106ffd0>] ?
process_timeout+0x0/0x10
Feb 27 12:12:19 player1 kernel: [83523.500920] [<ffffffffa0e10e70>]
__cv_timedwait_common+0xb0/0x110 [spl]
Feb 27 12:12:19 player1 kernel: [83523.500928] [<ffffffff8107f080>] ?
autoremove_wake_function+0x0/0x40
Feb 27 12:12:19 player1 kernel: [83523.500947] [<ffffffffa0e10f03>]
__cv_timedwait+0x13/0x20 [spl]
Feb 27 12:12:19 player1 kernel: [83523.501029] [<ffffffffa0edc864>]
txg_thread_wait+0x34/0x40 [zfs]
Feb 27 12:12:19 player1 kernel: [83523.501099] [<ffffffffa0edcaed>]
txg_sync_thread+0x10d/0x3a0 [zfs]
Feb 27 12:12:19 player1 kernel: [83523.501170] [<ffffffffa0edc9e0>] ?
txg_sync_thread+0x0/0x3a0 [zfs]
Feb 27 12:12:19 player1 kernel: [83523.501188] [<ffffffffa0e0d0b8>]
thread_generic_wrapper+0x78/0x90 [spl]
Feb 27 12:12:19 player1 kernel: [83523.501206] [<ffffffffa0e0d040>] ?
thread_generic_wrapper+0x0/0x90 [spl]
Feb 27 12:12:19 player1 kernel: [83523.501214] [<ffffffff8107eb26>]
kthread+0x96/0xa0
Feb 27 12:12:19 player1 kernel: [83523.501221] [<ffffffff8100aee4>]
kernel_thread_helper+0x4/0x10
Feb 27 12:12:19 player1 kernel: [83523.501228] [<ffffffff8107ea90>] ?
kthread+0x0/0xa0
Feb 27 12:12:19 player1 kernel: [83523.501235] [<ffffffff8100aee0>] ?
kernel_thread_helper+0x0/0x10
Feb 27 12:12:24 player1 kernel: [83528.500043] BUG: scheduling while
atomic: txg_sync/2533/0x00000001
Feb 27 12:12:24 player1 kernel: [83528.500049] Modules linked in:
ip6table_filter ip6_tables lzfs zfs(P) zcommon(P) znvpair(P) zavl(P)
zlib_deflate zunicode(P) spl vmnet vmblock vsock vmci vmmon
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
xt_state nf_$
Feb 27 12:12:24 player1 kernel: [83528.500157] Pid: 2533, comm:
txg_sync Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 27 12:12:24 player1 kernel: [83528.500162] Call Trace:
Feb 27 12:12:24 player1 kernel: [83528.500177] [<ffffffff810521b6>]
__schedule_bug+0x66/0x70
Feb 27 12:12:24 player1 kernel: [83528.500186] [<ffffffff8159c86e>]
schedule+0x7ee/0x830
Feb 27 12:12:24 player1 kernel: [83528.500195] [<ffffffff8107f2a0>] ?
prepare_to_wait_exclusive+0x60/0x90
Feb 27 12:12:24 player1 kernel: [83528.500223] [<ffffffffa0e10fa7>]
cv_wait_common+0x97/0xd0 [spl]
Feb 27 12:12:24 player1 kernel: [83528.500230] [<ffffffff8107f080>] ?
autoremove_wake_function+0x0/0x40
Feb 27 12:12:24 player1 kernel: [83528.500249] [<ffffffffa0e11013>]
__cv_wait+0x13/0x20 [spl]
Feb 27 12:12:24 player1 kernel: [83528.500341] [<ffffffffa0edc84d>]
txg_thread_wait+0x1d/0x40 [zfs]
Feb 27 12:12:24 player1 kernel: [83528.500411] [<ffffffffa0edcb90>]
txg_sync_thread+0x1b0/0x3a0 [zfs]
Feb 27 12:12:24 player1 kernel: [83528.500481] [<ffffffffa0edc9e0>] ?
txg_sync_thread+0x0/0x3a0 [zfs]
Feb 27 12:12:24 player1 kernel: [83528.500499] [<ffffffffa0e0d0b8>]
thread_generic_wrapper+0x78/0x90 [spl]
Feb 27 12:12:24 player1 kernel: [83528.500517] [<ffffffffa0e0d040>] ?
thread_generic_wrapper+0x0/0x90 [spl]
Feb 27 12:12:24 player1 kernel: [83528.500524] [<ffffffff8107eb26>]
kthread+0x96/0xa0
Feb 27 12:12:24 player1 kernel: [83528.500531] [<ffffffff8100aee4>]
kernel_thread_helper+0x4/0x10
Feb 27 12:12:24 player1 kernel: [83528.500538] [<ffffffff8107ea90>] ?
kthread+0x0/0xa0
Feb 27 12:12:24 player1 kernel: [83528.500544] [<ffffffff8100aee0>] ?
kernel_thread_helper+0x0/0x10
Feb 27 12:12:24 player1 kernel: [83528.500709] BUG: scheduling while
atomic: txg_sync/2533/0x00000001
Feb 27 12:12:24 player1 kernel: [83528.500714] Modules linked in:
ip6table_filter ip6_tables lzfs zfs(P) zcommon(P) znvpair(P) zavl(P)
zlib_deflate zunicode(P) spl vmnet vmblock vsock vmci vmmon
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
xt_state nf_$
Feb 27 12:12:24 player1 kernel: [83528.500840] Pid: 2533, comm:
txg_sync Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 27 12:12:24 player1 kernel: [83528.500845] Call Trace:
Feb 27 12:12:24 player1 kernel: [83528.500854] [<ffffffff810521b6>]
__schedule_bug+0x66/0x70
Feb 27 12:12:24 player1 kernel: [83528.500863] [<ffffffff8159c86e>]
schedule+0x7ee/0x830
Feb 27 12:12:24 player1 kernel: [83528.500871] [<ffffffff8159cc15>]
schedule_timeout+0x195/0x310
Feb 27 12:12:24 player1 kernel: [83528.500882] [<ffffffff8106ffd0>] ?
process_timeout+0x0/0x10
Feb 27 12:12:24 player1 kernel: [83528.500905] [<ffffffffa0e10e70>]
__cv_timedwait_common+0xb0/0x110 [spl]
Feb 27 12:12:24 player1 kernel: [83528.500913] [<ffffffff8107f080>] ?
autoremove_wake_function+0x0/0x40
Feb 27 12:12:24 player1 kernel: [83528.500932] [<ffffffffa0e10f03>]
__cv_timedwait+0x13/0x20 [spl]
Feb 27 12:12:24 player1 kernel: [83528.501014] [<ffffffffa0edc864>]
txg_thread_wait+0x34/0x40 [zfs]
Feb 27 12:12:24 player1 kernel: [83528.501084] [<ffffffffa0edcaed>]
txg_sync_thread+0x10d/0x3a0 [zfs]
Feb 27 12:12:24 player1 kernel: [83528.501155] [<ffffffffa0edc9e0>] ?
txg_sync_thread+0x0/0x3a0 [zfs]
Feb 27 12:12:24 player1 kernel: [83528.501174] [<ffffffffa0e0d0b8>]
thread_generic_wrapper+0x78/0x90 [spl]
Feb 27 12:12:24 player1 kernel: [83528.501192] [<ffffffffa0e0d040>] ?
thread_generic_wrapper+0x0/0x90 [spl]
Feb 27 12:12:24 player1 kernel: [83528.501199] [<ffffffff8107eb26>]
kthread+0x96/0xa0
Feb 27 12:12:24 player1 kernel: [83528.501207] [<ffffffff8100aee4>]
kernel_thread_helper+0x4/0x10
Feb 27 12:12:24 player1 kernel: [83528.501214] [<ffffffff8107ea90>] ?
kthread+0x0/0xa0
Feb 27 12:12:24 player1 kernel: [83528.501221] [<ffffffff8100aee0>] ?
kernel_thread_helper+0x0/0x10
Feb 27 12:12:29 player1 kernel: [83533.500046] BUG: scheduling while
atomic: txg_sync/2533/0x00000001
Feb 27 12:12:29 player1 kernel: [83533.500053] Modules linked in:
ip6table_filter ip6_tables lzfs zfs(P) zcommon(P) znvpair(P) zavl(P)
zlib_deflate zunicode(P) spl vmnet vmblock vsock vmci vmmon
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
xt_state nf_$
Feb 27 12:12:29 player1 kernel: [83533.500161] Pid: 2533, comm:
txg_sync Tainted: P 2.6.35-22-server #35-Ubuntu
Feb 27 12:12:29 player1 kernel: [83533.500166] Call Trace:
Feb 27 12:12:29 player1 kernel: [83533.500181] [<ffffffff810521b6>]
__schedule_bug+0x66/0x70
Feb 27 12:12:29 player1 kernel: [83533.500190] [<ffffffff8159c86e>]
schedule+0x7ee/0x830
........


Clemens Fruhwirth

Mar 6, 2011, 8:30:45 AM
to KQStor ZFS Discussion
I can second that. I am seeing this error too on my system with bonnie++.

mgiammarco

Mar 10, 2011, 5:17:33 AM
to KQStor ZFS Discussion
OK, I made a mistake. I upgraded the pool to version 28, but it has not
helped. So now I have uninstalled kqzfs and installed zfs-fuse again. But
zfs-fuse 0.7.0 supports only pool version 26. Is there any chance to go
back to version 26?

Thanks,
Mario

On 10 Feb, 21:35, Vikram jadhav <vik...@kqinfotech.com> wrote:
> Hi,
> The first problem (chmod 000...) might be caused by the different versions of
> the zfs-fuse and KQ ZFS file systems.
>
> You can upgrade the zpool and the zfs file systems to the current version
> after importing the pool created by zfs-fuse,
>
> e.g:
> #zpool upgrade -V 28 <poolname>
> #zfs upgrade -V 5 <fsname>
> #zfs upgrade -V 5 <poolname>
> #zfs get all <fsname> -- to check version
>
> It might solve your first problem.
>

th...@thana.ath.cx

Mar 10, 2011, 5:25:53 AM
to kqstor-zf...@googlegroups.com
On 10.03.2011 11:17, mgiammarco wrote:
> OK, I made a mistake. I upgraded the pool to version 28, but it has not
> helped. So now I have uninstalled kqzfs and installed zfs-fuse again. But
> zfs-fuse 0.7.0 supports only pool version 26. Is there any chance to go
> back to version 26?

No. Destroy the pool and restore your backup onto a pool created at the older version.
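
A rough sketch of that, assuming you have room to copy the data off at the
file level first (the pool name and devices are placeholders):

#rsync -a /tank/ /backup/tank/
#zpool destroy tank
#zpool create -o version=26 tank <devices>
#rsync -a /backup/tank/ /tank/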

mgiammarco

Mar 15, 2011, 4:56:38 AM
to KQStor ZFS Discussion
Since I have read in the forum that a ZFS server needs 4 GB of RAM, and
because my test server had only 2 GB, I exported the HDD containing the
ZFS pools as an iSCSI target to another server with 4 GB of RAM.
The only result after the bonnie++ and rsync tests is that with more RAM it
takes more time to hang, but the outcome is the same.

Can you try to run a bonnie++ or rsync stress test on your
filesystems?
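
For the rsync test I just copy a large directory tree onto the pool, e.g.
(the source and target paths are placeholders):

rsync -a /usr /tank/test/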

mgiammarco

Mar 15, 2011, 5:03:49 AM
to KQStor ZFS Discussion
Which OS supports pool version 28?
I have tried to import the pools with FreeBSD 8.2 and with NexentaStor
(based on Solaris) and they say that the pool version is too new for them.
It seems strange to me!
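
On any candidate system you can list the pool versions the installed ZFS
software supports with:

#zpool upgrade -v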

Prasad Joshi

Mar 15, 2011, 5:55:28 AM
to mgiammarco, kqstor-zf...@googlegroups.com
On Tue, Mar 15, 2011 at 8:56 AM, mgiammarco <mgiam...@gmail.com> wrote:
Since I have read in the forum that a ZFS server needs 4 GB of RAM, and
because my test server had only 2 GB, I exported the HDD containing the
ZFS pools as an iSCSI target to another server with 4 GB of RAM.
The only result after the bonnie++ and rsync tests is that with more RAM it
takes more time to hang, but the outcome is the same.


Yes, you are correct. We have observed these problems. One of the reasons why this happens is that in the write code path (zfs_write), the code allocates memory using __vmalloc(). One of the parameters to this function is the allocation flags (__GFP_NOFS). It is expected that when memory is allocated with GFP_NOFS, the Linux kernel will not call back into filesystem functions in case of memory pressure. Unfortunately there is a bug in the Linux kernel; see https://bugzilla.kernel.org/show_bug.cgi?id=30702 for more details.

Anyway, the Linux kernel fix will only be available in a new kernel release. We are also working on a workaround for current kernels. I have a patch which solves this problem, but the change is taking more CPU resources than expected. I will send you a mail as soon as the changes are committed to the repository.
 
Thanks and Regards,
Prasad