Experience with VMs on OneFS


Jerry Uanino

Dec 29, 2014, 1:35:54 PM
to isilon-u...@googlegroups.com
We're considering moving some fileservers from physical and virtual hardware today over to Isilon.
We've come up with 2 basic options, and I'm curious what others think or if they have experience.

1. Use CIFS on the Isilon to serve the data, robocopy the files over, and do a cutover (a rough robocopy sketch follows below).
2. Use NFS on the Isilon to present storage to VMware and run Windows fileservers as VMs.
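
For illustration, here is a minimal sketch of the option 1 cutover copy driven from Python. The UNC paths, log location, and flag choices are assumptions for the example, not details from this thread:

import subprocess

SRC = r"\\oldfiler\share"                 # hypothetical source share
DST = r"\\isilon-smartconnect\share"      # hypothetical Isilon SMB share

# /MIR mirrors the tree, /COPYALL preserves ACLs/owner/auditing info,
# /R and /W bound retries so a locked file doesn't stall the whole job.
result = subprocess.run(
    ["robocopy", SRC, DST, "/MIR", "/COPYALL", "/R:2", "/W:5",
     r"/LOG:C:\logs\cutover.log"],
    check=False,  # robocopy exit codes 0-7 indicate success or partial success
)
if result.returncode >= 8:
    raise RuntimeError(f"robocopy failed with exit code {result.returncode}")

A typical cutover would run a mirror pass like this repeatedly while users are still live, then once more after taking the old share offline so the final pass is small.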

#2 would seem unnatural and unnecessary until you consider:
* DFS-R would be possible, as well as any future features supported by the Windows fileserver in general that Isilon might not support
* Migration would be easier for existing VMs; I suppose we could Storage vMotion them and be done with it (no robocopy needed)
* Windows admins maintain full control of the working environment, without much interaction from the storage guys.

But the downsides:
* Can't really use file-based snapshots in OneFS
* Can't really use quotas in OneFS (see the quota API sketch after this list)
* You'd think there would be more overhead (although our initial testing doesn't show it)
* It's more complicated because you involve VMware and a guest OS.
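
To make the quota trade-off concrete, here is a hedged sketch of creating a directory quota through the OneFS Platform API (PAPI), which is the kind of feature you give up if OneFS only ever sees opaque vmdk files. The cluster address, credentials, path, and threshold are hypothetical, and the endpoint and payload shape should be verified against the PAPI documentation for your OneFS release:

import requests

CLUSTER = "https://isilon.example.com:8080"   # hypothetical cluster address
AUTH = ("admin", "password")                  # hypothetical credentials

quota = {
    "type": "directory",
    "path": "/ifs/data/client1",              # hypothetical directory
    "enforced": True,
    "thresholds": {"hard": 2 * 1024**4},      # 2 TiB hard limit, in bytes
}

# verify=False only because self-signed cluster certs are common on
# storage appliances; don't do this in production without pinning the cert.
resp = requests.post(f"{CLUSTER}/platform/1/quota/quotas",
                     json=quota, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json())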

Anyway, I'm more curious whether anyone is actually running a fileserver as a VM using Isilon as the backing for the VMware storage.

Jerry

Eugene Lipsky

Dec 29, 2014, 2:08:12 PM
to isilon-u...@googlegroups.com
We purchased our Isilon initially to go with option 1, and that's what we did. As we support multiple clients, we were waiting for the 7.0 release to allow multiple Active Directories before putting another client on. Once the time came to start planning this, we were initially asked to go with option 2 for the second client, as they use DFS and some other third-party replication tools and wanted to keep them. I argued against this for a couple of reasons. I don't see the point in purchasing expensive Isilon storage and basically using it as backend storage without the OneFS features you mentioned (snapshots, quotas, etc.). There's also the extra layer of complexity this would create for both the fileserver environment and our VMware farm.

We ended up keeping this client off of the Isilon and sticking with Windows Server VMs on our existing VMware farm/SAN.


Neproshennie

Feb 20, 2015, 1:47:43 PM
to isilon-u...@googlegroups.com
The biggest caveat to consider is the size of the guest OS's virtual disk that will be housed on OneFS and the I/O going to it. As long as you don't have the files stored on the guest OS's virtual disk, you will reduce the I/O to the vmdk and the resulting protection overhead on OneFS. Aside from calculating parity information on the blocks that changed within the vmdk, the entire file has protection calculated as well. I've seen plenty of people run into problems with huge vmdks and heavy I/O to them. Running fileserver VMs on top of OneFS, you would also lose out on some of the advanced features and optimizations that allow for load balancing, caching, individual file protection, etc.

--Jamie Ivanov

Jerry Uanino

Feb 21, 2015, 9:08:25 AM
to isilon-u...@googlegroups.com
We indeed chose to just go with CIFS; we're still working through an upgrade. Separately, we started using our nice new NL cluster for backups. We did find that moving Veeam backup files to it runs into a 4TB file-size limit. This is a real bummer. Now I'm stuck putting those files on my ZFS appliance until we figure out if there is an *easy* way to deal with this. The Veeam guys don't want a million jobs to separate things.
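
As a quick illustration of coping with that limit ahead of a migration, a sketch like the following could flag oversized files so they can be handled separately. The repository path is hypothetical, and the exact limit (and whether it is decimal TB or binary TiB) is worth confirming for your OneFS version:

import os

LIMIT = 4 * 10**12  # ~4 TB in decimal bytes; confirm against your OneFS release

def oversized(root):
    """Yield paths under root whose size exceeds LIMIT."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > LIMIT:
                    yield path
            except OSError:
                pass  # skip files that vanish or can't be stat'd

for path in oversized("/mnt/veeam-repo"):   # hypothetical repository mount
    print(path)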

Neproshennie

Feb 23, 2015, 1:24:34 PM
to isilon-u...@googlegroups.com
The main problem with the 4TB file-size limit is going to be performance-related, as checksumming and hashing are done on the entire file (in addition to the block level); the job engine will also struggle significantly with large files for similar reasons, which may result in a butterfly effect of unhappiness. There are other reasons, but that's a story for another time -- I don't know of any type of storage solution that really appreciates 4TB files unless it's strictly doing block-level protection and checksumming. Personally, I would never have a 4TB vmdk, simply because that is putting all of my eggs in one basket -- if that file becomes corrupt, I'm going to be up a creek trying to get the data back. Also, if I have multiple VMs sharing the same set of data (firing up a standby/clone if the VM host ever goes down), then I don't have to worry about replicating the entire VM: the worst case is that I archive the VM's configuration and update the standby's configuration before it's fired up for failover, while the data being served stays in a centralized spot.
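
Purely as an illustration of staying under a per-file size cap (Veeam itself would not read split files, so this is a generic sketch with hypothetical paths, not a supported backup workflow), a large archive could be broken into fixed-size parts and reassembled later by concatenating the parts in order:

CHUNK = 1024**4      # 1 TiB per part, comfortably under a 4 TB cap
BUF = 64 * 1024**2   # 64 MiB read buffer

def split_file(src, dst_prefix):
    """Split src into numbered parts of at most CHUNK bytes each."""
    part = 0
    with open(src, "rb") as f:
        while True:
            written = 0
            out = None
            while written < CHUNK:
                data = f.read(min(BUF, CHUNK - written))
                if not data:
                    break
                if out is None:
                    # open lazily so we never leave an empty trailing part
                    out = open(f"{dst_prefix}.{part:04d}", "wb")
                out.write(data)
                written += len(data)
            if out is None:
                return  # hit EOF exactly at a part boundary
            out.close()
            part += 1

split_file("/mnt/archive/big.tar", "/ifs/backups/big.tar")  # hypothetical paths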

Please pardon my thoughts, as I'm not intimate with your infrastructure or needs; take what I say with a grain (or two) of salt. :)

PS, I love my ZFS array.

--Jamie Ivanov

John Beranek - PA

Feb 23, 2015, 2:27:45 PM
to isilon-u...@googlegroups.com
Veeam files aren't VMDKs; they are a Veeam-specific file type for storing de-duplicated, compressed backup data. Because of that, the recommendation (as I remember it) is that a particular backup job should include a set of VMs with a similar architecture (Linux, Windows, whatever) in order to increase the potential for de-dupe savings.

John

Neproshennie

Feb 23, 2015, 2:55:40 PM
to isilon-u...@googlegroups.com
I was using VMDKs as an example because of how often I run into situations like that.

When it comes to Veeam or other large archives that won't see much I/O, I can see the problem becoming more frustrating. On the other hand, OneFS does have deduplication support in more recent releases, which may be beneficial compared with transferring a 4TB+ archive over the network. Again, I'm only throwing my two cents out there without a sound understanding of your infrastructure and/or needs.

OneFS is far from a perfect product, but I've seen what it can do first hand, and when it works it's wicked cool. Like my tool chest, though, there is a right tool for each job, because every tool has its strengths and weaknesses. Well, the exceptions would be a hammer and duct tape. What can't you fix with both of those in hand?

--Jamie Ivanov

Jerry Uanino

Feb 23, 2015, 6:59:25 PM
to isilon-u...@googlegroups.com
It's about not changing the user workflow as well. This works great on ZFS but doesn't work on OneFS. As for dedupe on the Isilon, we don't want to spend the time testing that yet. The resources needed to experiment include time, which is money.

For now we've decided to just ignore this while we move our other apps off ZFS. Once we are done, we will decide how to tackle it.

Neproshennie

Feb 24, 2015, 9:22:52 AM
to isilon-u...@googlegroups.com
I understand that the user's workflow shouldn't change, but the thought is that maybe it could be improved. There have been a number of times when I've personally helped shape the overall workflow of an environment, including the transition, to help streamline things. I'm not saying anything is wrong, and I'm by no means qualified to offer that type of assessment without knowing the workflow; it's merely food for thought. Sometimes you simply need to use what works, and in this case that's ZFS. I applaud the love of ZFS, as I am also a fan of it on my own storage array.

Out of curiosity, what does your ZFS array (or arrays) look like? Have you considered running a distributed filesystem such as Lustre or GlusterFS on top of ZFS, or pairing it with an HA/storage stack like RSF-1, Pacemaker, or Nexenta?

--Jamie Ivanov

Dan Pritts

Feb 24, 2015, 2:31:06 PM
to isilon-u...@googlegroups.com
I'm surprised that Veeam can't shard the backup files. It seems like this would be a problem for lots of storage systems.

danno


--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
+1 (734)615-7362