spl/zfs-0.6.0-rc11

spl/zfs-0.6.0-rc11 Brian Behlendorf 9/18/12 2:50 PM

The spl/zfs-0.6.0-rc11 release candidate is available.

  http://github.com/downloads/zfsonlinux/spl/spl-0.6.0-rc11.tar.gz
  http://github.com/downloads/zfsonlinux/zfs/zfs-0.6.0-rc11.tar.gz

This release includes several ZFS improvements backported from Illumos
and a variety of bug fixes.  Full details and proper attribution for
this work are below.  Highlights include:

  * Support for ZVOL based swap devices (see the sketch below)
  * Support for preemptible kernels
  * Vastly improved msync() performance
  * Improved behavior under low memory conditions
  * Improved 'zpool import' search behavior
  * Added 'zstreamdump' command from Illumos
  * Added 'zfs get -t <datatype>' support from Illumos
  * Fixed 'ZFS replay transaction error 5'
  * Fixed SA based xattr coherency issue
  * Fixed various NFS issues
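
As a quick illustration of the ZVOL swap support above, here is a minimal
sketch of how a swap volume might be created.  The pool name 'rpool', the
4G size, and the property choices are assumptions for the example, not
requirements of this release:

  # zfs create -V 4G -b 4096 -o compression=off -o sync=always \
        -o primarycache=metadata rpool/swap
  # mkswap -f /dev/zvol/rpool/swap
  # swapon /dev/zvol/rpool/swap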

 
----------------------- SPL Change Log ---------------------------
Brian Behlendorf (19):
      Add copy-builtin to EXTRA_DIST
      Remove autotools products
      Emergency slab objects
      Revert "Add TASKQ_NORECLAIM flag"
      Revert "Detect kernels that honor gfp flags passed to vmalloc()"
      Revert "Fix NULL deref in balance_pgdat()"
      Revert "Disable vmalloc() direct reclaim"
      Add PF_NOFS debugging flag
      Mutex ASSERT on self deadlock
      Switch KM_SLEEP to KM_PUSHPAGE
      Enhance SPLAT kmem:slab_overcommit test
      Suppress task_hash_table_init() large allocation warning
      Add KMC_NOEMERGENCY slab flag
      Set KMC_NOEMERGENCY for zlib workspaces
      Debug cv_destroy() with mutex held
      Revert "Switch KM_SLEEP to KM_PUSHPAGE"
      Remove TQ_SLEEP -> KM_SLEEP mapping
      Switch KM_SLEEP to KM_PUSHPAGE
      SPL 0.6.0-rc11

Chris Dunlop (1):
      Remove autotools products

Etienne Dechamps (1):
      Add DKIOCTRIM for TRIM support.

Prakash Surya (5):
      Wrap trace_set_debug_header in trace_[get|put]_tcd
      Avoid calling smp_processor_id in spl_magazine_age
      Add kpreempt_[dis|en]able macros in <sys/disp.h>
      Revert "Make CONFIG_PREEMPT Fatal"
      Remove SPL_LINUX_CONFIG autoconf macro

Richard Yao (1):
      Remove Makefile from non-toplevel .gitignore files


----------------------- ZFS Change Log ---------------------------
Alexander Eremin (2):
      Illumos #1977: zfs allow arguments not parsed correctly
      Illumos #1726: Removal of pyzfs broke delegation for volumes

Andrew Stormont (1):
      Illumos #1936: add support for "-t <datatype>" argument to zfs get

Brian Behlendorf (23):
      Add copy-builtin to EXTRA_DIST
      Revert "Use SA_HDL_PRIVATE for SA xattrs"
      rmdir(2) should return ENOTEMPTY
      Remove autotools products
      Pre-allocate vdev I/O buffers
      Annotate KM_PUSHPAGE call paths with PF_NOFS
      mzap_upgrade() must use kmem_alloc()
      Clear PG_writeback after zil_commit() for sync I/O
      Switch KM_SLEEP to KM_PUSHPAGE
      Switch KM_SLEEP to KM_PUSHPAGE
      Improve AF hard disk detection
      Switch KM_SLEEP to KM_PUSHPAGE
      Switch KM_SLEEP to KM_PUSHPAGE
      Disable page allocation warnings for ARC buffers
      Add zstreamdump .gitignore
      Remove zvol device node
      Revert "Improve AF hard disk detection"
      Move iput() after zfs_inode_update()
      Clear PG_writeback for sync I/O error case
      Switch KM_SLEEP to KM_PUSHPAGE
      Improve `zpool import` search behavior
      zfs-0.6.0-rc11
      Seg fault 'zpool import -d /dev/disk/by-id -a'

Chris Dunlop (2):
      Switch KM_SLEEP to KM_PUSHPAGE
      Remove autotools products

Christopher Siden (2):
      Illumos #1796, #2871, #2903, #2957
      Illumos #3085: zfs diff panics, then panics in a loop on booting

Cyril Plisko (5):
      Make ZFS filesystem id persistent across different machines
      Illumos #3064: cmd/zpool/zpool_main.c misspells "successful"
      Avoid running exportfs on each zfs/zpool command invocation
      Fix zdb printf format string for ZIL data blocks
      ZFS replay transaction error 5

Eric Schrock (1):
      Illumos #2635: 'zfs rename -f' to perform force unmount

Etienne Dechamps (4):
      Fix mount_zfs dependency on libzpool.
      Add libnvpair to mount_zfs dependencies
      Increase the stack space in userspace.
      Silence "setting dataset to sync always" message in ztest.

Garrett D'Amore (1):
      Illumos #2803: zfs get guid pretty-prints the output

Javen Wu (1):
      Drop spill buffer reference

Martin Matuska (2):
      Properly initialize and free destroydata
      Add zstreamdump(8) command to examine ZFS send streams.

Massimo Maggi (1):
      Fix snapshot automounting with GrSecurity constify plugin.

Michael Martin (1):
      Fix missing vdev names in zpool status output

Prakash Surya (2):
      Wrap smp_processor_id in kpreempt_[dis|en]able
      Remove autoconf check for CONFIG_PREEMPT

Richard Lowe (1):
      Illumos #2088 zdb could use a reasonable manual page

Richard Yao (6):
      Check kernel source directory for SPL
      Consistent menuconfig name

--
Thanks,
Brian

Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Turbo Fredriksson 9/18/12 3:03 PM
On Sep 18, 2012, at 11:50 PM, Brian Behlendorf wrote:

> The spl/zfs-0.6.0-rc11 release candidate is available.
Dang, that was quick! I haven't even had time to schedule downtime to
test rc10! :)

>  * Support for ZVOL based swap devices
Nice, finally! :). Much needed. I think.

>  * Vastly improved msync() performance
Any numbers?

>      Avoid running exportfs on each zfs/zpool command invocation
Anything I need to know about this regarding my iSCSI/SMB
patch?


I haven't been paying much attention the last couple of months,
but there is/was a problem with replacing a failed drive. Is
this problem still there, or has it been fixed?
--
System administrators motto:
You're either invisible or in trouble.
- Unknown

Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Etienne Dechamps 9/18/12 3:27 PM
On 2012-09-19 00:03, Turbo Fredriksson wrote:
>>   * Vastly improved msync() performance
> Any numbers?

1000x speedup. I'm not kidding.

See https://github.com/zfsonlinux/zfs/issues/907

--
Etienne Dechamps / e-t172 - AKE Group
Phone: +33 6 23 42 24 82
Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Turbo Fredriksson 9/18/12 3:32 PM
On Sep 19, 2012, at 12:27 AM, e-t172 wrote:

> 1000x speedup. I'm not kidding.
W00t indeed!! Great, I'll have a look as soon as possible, thanx!
--
Imagine you're an idiot and then imagine you're in the government.
Oh, sorry. Now I'm repeating myself
- Mark Twain

Re: spl/zfs-0.6.0-rc11 Bleo 9/18/12 5:30 PM
"I haven't been paying much attention the last couple of months, 
but there is/was a problem with replacing a failed drive. Is 
this problem still there, or has it been fixed?"

I'm very interested in this as well!
Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Turbo Fredriksson 9/19/12 5:21 AM
On Sep 19, 2012, at 2:30 AM, Bleo wrote:

"I haven't been paying much attention the last couple of months, 
but there is/was a problem with replacing a failed drive. Is 
this problem still there, or has it been fixed?"

I'm very interested in this as well!

It seems that 'zpool replace' DOES work, but I had to use the full
path to the devices (as told by Gregor Kopka earlier this year,
see attachment).


celia:~# zpool replace -f share scsi-SATA_ST31500341AS_9VS0DR98 scsi-SATA_ST31500341AS_9VS4R2MJ
cannot replace scsi-SATA_ST31500341AS_9VS0DR98 with scsi-SATA_ST31500341AS_9VS4R2MJ: no such device in pool
[... tried a bunch of other things, none worked ...]
celia:~# zpool replace -f share /dev/disk/by-id/scsi-SATA_ST31500341AS_9VS0DR98 /dev/disk/by-id/scsi-SATA_ST31500341AS_9VS4R2MJ
celia:~# zpool status
[... resilvering as we speak ...]


The '9VS0DR98' disk 'is no more' (serious amounts of SMART
errors, it just refuses to be acknowledged by either the BIOS or the OS)
and '9VS4R2MJ' is the new disk I got when I returned the old
one under warranty.

Do note that the 'old' disk device node did not exist. It MIGHT have
worked without specifying its path, but I did it anyway, just in case :)
-- 
Try not. Do. Or do not. There is no try!
- Yoda
Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Turbo Fredriksson 9/19/12 5:39 AM
On Sep 19, 2012, at 2:21 PM, Turbo Fredriksson wrote:

> celia:~# zpool status
> [... resilvering as we speak ...]

What's now starting to scare me a little is that there are FOUR drives resilvering:

celia:~# zpool status
  pool: share
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Wed Sep 19 16:13:55 2012
    1.32T scanned out of 18.4T at 952M/s, 5h13m to go
    60.2G resilvered, 7.17% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        share                                           DEGRADED     0     0     0
          raidz1-0                                      ONLINE       0     0     1
            scsi-SATA_ST31500341AS_9VS3S9YD             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS08THF             ONLINE       0     0     2
            scsi-SATA_ST31500341AS_9VS16S63             ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4XK4T             ONLINE       0     0     3
            scsi-SATA_ST31500341AS_9VS4Q3F4             ONLINE       0     0     0
            scsi-SATA_ST1500DL003-9VT_5YD1F2KF          ONLINE       0     0     0
          raidz1-2                                      DEGRADED     0     0     7
            scsi-SATA_ST31500341AS_9VS3SAWS             ONLINE       0     0     0  (resilvering)
            replacing-1                                 UNAVAIL      0     0     0
              scsi-SATA_ST31500341AS_9VS0DR98           UNAVAIL      0     0     0
              scsi-SATA_ST31500341AS_9VS4R2MJ           ONLINE       0     0     0  (resilvering)
            scsi-SATA_ST31500341AS_9VS13W11             ONLINE       0     0     0  (resilvering)
          raidz1-3                                      ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4VT5R             ONLINE       0     0     1  (resilvering)
            scsi-SATA_ST31500341AS_9VS4Q38C             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4WM30             ONLINE       0     0     0
          raidz1-4                                      ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4VT5X             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4WWPA             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS0H3A9             ONLINE       0     0     0
        cache
          ata-Corsair_Force_3_SSD_11486508000008952122  ONLINE       0     0     0

errors: No known data errors

It's resilvering a disk in raidz1-3, which should have been unaffected,
and also the first disk in raidz1-2 (where the failed disk was).

At least it's quite fast...
--
Realizing your own significance is like getting a mite to grasp that it can only be seen under a microscope
- Arne Anka

Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Brian Behlendorf 9/19/12 9:59 AM
On Wed, 2012-09-19 at 05:21 -0700, Turbo Fredriksson wrote:
> On Sep 19, 2012, at 2:30 AM, Bleo wrote:
>
> "I haven't been paying much attention the last couple of months,
> but there is/was a problem with replacing a failed drive. Is
> this problem still there, or has it been fixed?"
>
> I'm very interested in this as well!
>
> It seems that 'zpool replace' DOES work, but I had to use the full
> path to the devices (as told by Gregor Kopka earlier this year,
> see attachment).

For some reason I didn't see an issue tracking this problem.  I opened a
new one and briefly looked at the code.  On first glance this one looks
pretty straightforward to debug if someone has the time.  See:

  https://github.com/zfsonlinux/zfs/issues/976

> Anything I need to know about this regarding my iSCSI/SMB
> patch?

Refreshing the patches against -rc11 would be good, and should be easy.
I don't think there's anything critical which needs to change.

--
Thanks,
Brian

Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Reinis Rozitis 9/19/12 12:14 PM
> For some reason I didn't see an issue tracking this problem.  I opened a
> new one and briefly looked at the code.  On first glance this one looks
> pretty straightforward to debug if someone has the time.

There is also the one about replacing UNAVAIL devices:
https://github.com/zfsonlinux/zfs/issues/544
While the workaround of dumping the missing device's id with 'zdb' and
using that in the replace works, it doesn't feel like the right/natural way.
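
Concretely, the workaround looks something like this (the pool name and the
device names are placeholders; the guid for the missing child is taken from
zdb's config dump):

# zdb -C share
[... vdev tree, with a 'guid' and a 'path' entry for each child ...]
# zpool replace share <guid-of-missing-device> /dev/disk/by-id/<new-device>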

Maybe both issues can be merged.

rr

Replace+Resilver ok (mostly), but pool still DEGRADED (Was: [zfs-discuss] spl/zfs-0.6.0-rc11) Turbo Fredriksson 9/19/12 12:12 PM
On Sep 19, 2012, at 2:39 PM, Turbo Fredriksson wrote:

> On Sep 19, 2012, at 2:21 PM, Turbo Fredriksson wrote:
>
>> celia:~# zpool status
>> [... resilvering as we speak ...]
>
> What's now starting to scare me a little is that there are FOUR drives resilvering:

OK, so now it's complete. It must have taken a lot less than the 5h20m it was
supposed to take. Which is good! :)

But there seems to be a problem:

celia:~# zpool status   
  pool: share
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
 scan: resilvered 1.12T in 5h14m with 7 errors on Wed Sep 19 21:28:50 2012
config:

        NAME                                            STATE     READ WRITE CKSUM
        share                                           DEGRADED     0     0     7
          raidz1-0                                      ONLINE       0     0     1
            scsi-SATA_ST31500341AS_9VS3S9YD             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS08THF             ONLINE       0     0     2
            scsi-SATA_ST31500341AS_9VS16S63             ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4XK4T             ONLINE       0     0     3
            scsi-SATA_ST31500341AS_9VS4Q3F4             ONLINE       0     0     0
            scsi-SATA_ST1500DL003-9VT_5YD1F2KF          ONLINE       0     0     0
          raidz1-2                                      DEGRADED     0     0    21
            scsi-SATA_ST31500341AS_9VS3SAWS             ONLINE       0     0     0
            replacing-1                                 UNAVAIL      0     0     0
              scsi-SATA_ST31500341AS_9VS0DR98           UNAVAIL      0     0     0
              scsi-SATA_ST31500341AS_9VS4R2MJ           ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS13W11             ONLINE       0     0     0
          raidz1-3                                      ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4VT5R             ONLINE       0     0    19
            scsi-SATA_ST31500341AS_9VS4Q38C             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4WM30             ONLINE       0     0     0
          raidz1-4                                      ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4VT5X             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4WWPA             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS0H3A9             ONLINE       0     0     0
        cache
          ata-Corsair_Force_3_SSD_11486508000008952122  ONLINE       0     0     0

errors: 4 data errors, use '-v' for a list

The problem isn't so much the 4 data errors (I don't mind the lost files -
I can get them again if I want/need to), but rather the 'raidz1-2'
vdev (?). The old disk is still listed there (as UNAVAIL), with the new
one OK (as ONLINE). But I had expected the 'replacing-1' "dev" to be
removed together with the old disk when the resilver was done, with the
new one taking its place. Something like this:

          raidz1-2                                      ONLINE       0     0    21
            scsi-SATA_ST31500341AS_9VS3SAWS             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS4R2MJ             ONLINE       0     0     0
            scsi-SATA_ST31500341AS_9VS13W11             ONLINE       0     0     0

So what have I done wrong? Or what have I missed?
Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Cyril Plisko 9/19/12 12:29 PM

In our internal ZFS builds we've added a -g flag to the "zpool status" command that reports device GUIDs together with their names.
Although not the cleanest way, it does the trick for us. I can share the changeset if there is interest, but I would hesitate to consider it for upstream integration.
 
> Maybe both issues can be merged.
>
> rr

Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Reinis Rozitis 9/19/12 5:39 PM
> From: Cyril Plisko
>
> In our internal ZFS builds we've added a -g flag to the "zpool status" command
> that reports device GUIDs together with their names.


If it isn't too much work I would be grateful if you could share the
patch.
I have made a few maintenance scripts which just "grep" the ids out, but this
would make life simpler (before the spares start to work ;) ).


rr

Re: Replace+Resilver ok (mostly), but pool still DEGRADED (Was: [zfs-discuss] spl/zfs-0.6.0-rc11) Fajar A. Nugraha 9/19/12 11:03 PM
Strange, I'm getting a different result. My replace works fine even
without the full path, and it completes just fine.
This is with Ubuntu, zfs/spl 0.6.0.78 (should be pretty close to
-rc11); the disks are virtual disks created using scst_local.

# zpool create tpool raidz scsi-23866313233343231 scsi-23963623061633536 scsi-26262663539636238
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/scsi-23866313233343231 does not contain an EFI label but it may contain partition information in the MBR.
# zpool create -f tpool raidz scsi-23866313233343231 scsi-23963623061633536 scsi-26262663539636238
# dd_rescue -m 100M /dev/urandom /tpool/100M
dd_rescue: (info): ipos:    102400.0k, opos:    102400.0k, xferd:    102400.0k
                   errs:      0, errxfer:         0.0k, succxfer:    102400.0k
             +curr.rate:    12556kB/s, avg.rate:    11066kB/s, avg.load: 98.7%
             >-----------------------------------------< 100%  ETA:  0:00:00
dd_rescue: (info): Summary for /dev/urandom -> /tpool/100M:
dd_rescue: (info): ipos:    102400.0k, opos:    102400.0k, xferd:    102400.0k
                   errs:      0, errxfer:         0.0k, succxfer:    102400.0k
             +curr.rate:        0kB/s, avg.rate:    10952kB/s, avg.load: 97.6%
             >-----------------------------------------< 100%  ETA:  0:00:00
# sync
# df -h /tpool/
Filesystem      Size  Used Avail Use% Mounted on
tpool           2.0T  100M  2.0T   1% /tpool
# zpool status -v tpool
  pool: tpool
 state: ONLINE
 scan: none requested
config:

        NAME                        STATE     READ WRITE CKSUM
        tpool                       ONLINE       0     0     0
          raidz1-0                  ONLINE       0     0     0
            scsi-23866313233343231  ONLINE       0     0     0
            scsi-23963623061633536  ONLINE       0     0     0
            scsi-26262663539636238  ONLINE       0     0     0

errors: No known data errors
# zpool replace tpool scsi-23963623061633536 scsi-26536646436353133
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/scsi-26536646436353133 does not contain an EFI label but it may contain partition information in the MBR.
# zpool replace -f tpool scsi-23963623061633536 scsi-26536646436353133
# zpool status -v tpool
  pool: tpool
 state: ONLINE
 scan: resilvered 50.1M in 0h0m with 0 errors on Thu Sep 20 12:57:53 2012
config:

        NAME                        STATE     READ WRITE CKSUM
        tpool                       ONLINE       0     0     0
          raidz1-0                  ONLINE       0     0     0
            scsi-23866313233343231  ONLINE       0     0     0
            scsi-26536646436353133  ONLINE       0     0     0
            scsi-26262663539636238  ONLINE       0     0     0

errors: No known data errors


Note that the second disk is now replaced correctly.

--
Fajar
Re: Replace+Resilver ok (mostly), but pool still DEGRADED (Was: [zfs-discuss] spl/zfs-0.6.0-rc11) Fajar A. Nugraha 9/19/12 11:23 PM
On Thu, Sep 20, 2012 at 1:03 PM, Fajar A. Nugraha <li...@fajar.net> wrote:
> On Thu, Sep 20, 2012 at 2:12 AM, Turbo Fredriksson <tu...@bayour.com> wrote:

>> The problem isn't so much the 4 data errors (I don't mind the lost files -
>> I can get them again if I want/need to), but rather the 'raidz1-2'
>> vdev (?). The old disk is still listed there (as UNAVAIL), with the new
>> one OK (as ONLINE).


> Strange, I'm getting a different result. My replace works fine even
> without the full path, and it completes just fine.

> Note that the second disk is now replaced correctly.

... and the same thing also happens when replacing an UNAVAIL disk (in
my previous test the old disk was still ONLINE).

# zpool status -v tpool
  pool: tpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
 scan: resilvered 50.1M in 0h0m with 0 errors on Thu Sep 20 12:57:53 2012
config:

        NAME                        STATE     READ WRITE CKSUM
        tpool                       DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            scsi-23866313233343231  ONLINE       0     0     0
            12426161893666030544    UNAVAIL      0     0     0  was /dev/disk/by-id/scsi-26536646436353133-part1
            scsi-26262663539636238  ONLINE       0     0     0

errors: No known data errors
# zpool replace tpool 12426161893666030544 scsi-23963623061633536
# zpool status -v tpool
  pool: tpool
 state: ONLINE
 scan: resilvered 50.1M in 0h0m with 0 errors on Thu Sep 20 13:21:42 2012
config:

        NAME                        STATE     READ WRITE CKSUM
        tpool                       ONLINE       0     0     0
          raidz1-0                  ONLINE       0     0     0
            scsi-23866313233343231  ONLINE       0     0     0
            scsi-23963623061633536  ONLINE       0     0     0
            scsi-26262663539636238  ONLINE       0     0     0

errors: No known data errors

--
Fajar
Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Cyril Plisko 9/20/12 10:03 AM
Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Bill McGonigle 9/20/12 11:34 AM
On 09/19/2012 08:21 AM, Turbo Fredriksson wrote:
>
> It seems that 'zpool replace' DOES work, but I had to use the full
> path to the devices (as told by Gregor Kopka earlier this year,
> see attachment).

Ah!  This even works for long-gone disks with their ghosts showing as
UNAVAIL in zpool status.

e.g.:

         cache
           scsi-SATA_TS64GSSD25S-M_20110211405406066105-part1  UNAVAIL      0     0     0

# zpool remove storage /dev/disk/by-id/scsi-SATA_TS64GSSD25S-M_20110211405406066105-part1

yay!

-Bill

--
Bill McGonigle, Owner
BFC Computing, LLC
http://bfccomputing.com/
Telephone: +1.855.SW.LIBRE
Email, IM, VOIP: bi...@bfccomputing.com
VCard: http://bfccomputing.com/vcard/bill.vcf
Social networks: bill_mcgonigle/bill.mcgonigle
Re: [zfs-discuss] spl/zfs-0.6.0-rc11 Gordan Bobic 9/21/12 4:10 AM
On 09/18/2012 10:50 PM, Brian Behlendorf wrote:

>    * Support for ZVOL based swap devices

Has anyone tried ZVOL swap yet? Any issues or special requirements (e.g.
4KB block size)?

Gordan