Hi Mark,
good to hear that my explanation helped you get something working :) more
will come in the coming months.
As for your question: right now we don't have ZFS support in ESOS,
because we are focused on SAN environments, and exporting filesystems is
not really our goal. ZFS can be used to create a software RAID which can
act as a container for SCST (file_io mode), but I'm personally against
file_io (even if in some situations it can boost performance) for
several reasons (VFS translation and page-caching latencies) which do
not occur with block_io or pass-through SCSI.
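To make the difference concrete, here is a minimal sketch of an SCST config fragment exporting the same storage both ways (the device names, paths, and backing files are made up for illustration):

```
# Hypothetical /etc/scst.conf fragment -- names and paths are examples.

# file_io: the backing store is a regular file on a mounted filesystem,
# so every I/O passes through the VFS layer and the page cache.
HANDLER vdisk_fileio {
    DEVICE lun_file0 {
        filename /mnt/pool0/luns/lun0.img
    }
}

# block_io: the backing store is a raw block device, so I/O bypasses
# the VFS and page cache entirely.
HANDLER vdisk_blockio {
    DEVICE lun_blk0 {
        filename /dev/sdb
    }
}
```

The extra VFS/page-cache hop in the first handler is exactly where the added latency comes from.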
I would say that the best option is to get a hardware RAID controller
(with backplane signaling support, SGPIO/I2C) to light up the red LEDs
in case of disk failure, and a battery backup unit for the cache.
If you're building a cheap array you can get used controllers on eBay
starting at $50; in my experience Adaptec/LSI work best (e.g. a Dell
PERC H700 512MB [LSI], or an Adaptec ASR-52445 28-port for around $300
with battery and cabling).
I've used both ZFS and Linux RAID (both MD and LVM2) and my experience
with them has been somewhat negative (occasional high CPU usage in
I/O-intensive scenarios, higher latency, soft locks lasting several
milliseconds), but the upside is that you can avoid expensive RAID
cards for high-density, low-performance, cheap arrays.
ESOS should have support for Linux soft RAID (both MD and LVM2). I've
never tested the soft RAID at levels other than 1, so I'll schedule a
couple of tests for the weekend and let you know what's wrong here.
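For reference, this is roughly what creating an MD RAID6 array from the shell would look like (device names are placeholders, and it assumes the raid456 module is actually built into the ESOS kernel, which may be exactly what was missing in your test):

```
# Hypothetical example -- /dev/sd[b-i] stand in for your 8 disks.
# Create an 8-disk RAID6 array (double parity, comparable to RAID-Z2):
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# Watch the initial resync progress:
cat /proc/mdstat

# Record the array layout so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm.conf
```

The resulting /dev/md0 could then back an SCST block_io device directly.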
But in the end I think that ZFS could be a good addition to ESOS, as
RAID-Z2 and Z3 have better performance compared to MD RAID6, and there
is no MD equivalent of Z3 right now (apart from building RAID 60).
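For comparison, a triple-parity pool under ZFS would be a one-liner (pool, dataset, and disk names below are made up):

```
# Hypothetical example -- disk names are placeholders.
# 8-disk triple-parity pool (survives 3 simultaneous disk failures):
zpool create tank raidz3 sdb sdc sdd sde sdf sdg sdh sdi

# A zvol carved from the pool could then back an SCST device:
zfs create -V 10T tank/lun0
```

There is simply no way to get that third parity disk out of MD short of nesting levels.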
- Marc, what do you think about having ZFS in ESOS?
= Marcin
> First off, let me congratulate and thank each and every one of the
> contributors on this project. You have built an amazing system. Your
> documentation is also top notch. I have never worked with Fiber
> Channel prior to last night (with the exception of plugging in the
> fiber and onlining/initializing/formatting the LUNs presented to my
> servers from our storage engineers) Marcin's explanation in the
> thread here:
>
> https://groups.google.com/forum/m/#!topic/esos-users/NDwc_2AlNzs was
> an eye opener... Through that I was able to configure my HP C7000
> chassis Brocade 4/24 switch, configure the zoning on the switch, add
> the hosts in ESOS, and then follow the wiki for the install and
> configure and by the end of the night I had a 14.5TB Fibre Channel
> SAN.... incredible!
>
>
> On to my question... I was previously using FreeNAS with a ZFS RAIDZ2
> array for my storage that was shared via iSCSI... With my move to
> Fibre Channel I decided to explore other options which led me to this
> fantastic project. I was wondering if there's any compelling reason
> you haven't yet included ZFS or if there is similar functionality here
> that I'm missing... I was able to create a striped lvm volume across
> my 8 disks, but using mdadm I was unable to use any of the --type
> raidn flags as I got a warning about something not being included in
> the kernel and dependencies....
>
> What sort of software-based options do I have here to offer some fault
> tolerance and resiliency to my data array?
> --
> You received this message because you are subscribed to the Google
> Groups "esos-users" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to esos-users+...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.