Re: EnhanceIO, missing a step to activate it?


Marc Smith

Feb 2, 2014, 8:44:51 PM
to esos-...@googlegroups.com
Hi Mike,

Can you send your configuration output so we can take a look?


--Marc


On Sun, Feb 2, 2014 at 1:13 PM, Mike Blanchard <dark...@gmail.com> wrote:
> I was trying to see the difference between Btier and EnhanceIO performance. I started by running the commands in the wiki to activate it and then rebooted, but I see little to no change in the benchmarks using a 250 GB SSD as a cache. Is this normal? I've checked the EnhanceIO monitors and it looks like there is caching activity going on. Did I miss a step somewhere? The only thing I can think of is that I'm running the cache on /dev/sda and I'm using LVM on that disk to slice the storage into individual LUNs; could that be the problem?


Mike Blanchard

Feb 2, 2014, 10:41:31 PM
to esos-...@googlegroups.com
I thought I had deleted this; I figured out my issue after I posted it.

Todd Hunter

Feb 4, 2014, 10:38:49 AM
to esos-...@googlegroups.com

Hi Marc,

I was wondering how your performance testing with EnhanceIO and Btier is going. What kind of performance improvements are you seeing with them turned on?

 Todd


Marc Smith

Feb 4, 2014, 10:42:04 AM
to esos-...@googlegroups.com
Hi Todd,

I actually haven't done any testing myself... I believe Mike was doing some of this, so it would be nice if he'd share with the group. =)

Until the last day or two, I hadn't really been that interested in these caching options; however, I am now interested in the write-back cache mode of these projects (bcache, EnhanceIO, dm-cache). I'll be looking at some (or all) of these over the next couple of weeks and I'll post my findings.


--Marc


Todd Hunter

Feb 4, 2014, 11:22:51 AM
to esos-...@googlegroups.com
Ha, I was looking at the last reply and saw you instead of Mike.

So Mike,   ;)

Are you seeing anything good with EnhanceIO or Btier on ESOS?

I have been using the LSI cards with CacheCade. It's a great solution, but I am always looking for new ways to do things. I also don't like some of the limitations, such as the 512 GB maximum SSD cache size and the lack of reporting features.

Todd

Marc Smith

Feb 4, 2014, 11:27:12 AM
to esos-...@googlegroups.com
Yeah... same here (looking at alternatives to CacheCade)... but we're more interested in write-back caching using SSDs; CacheCade is only for reads.

I'll let you know what I find out.


--Marc

Todd Hunter

Feb 4, 2014, 12:25:16 PM
to esos-...@googlegroups.com
Actually, if I am understanding you correctly, the current version of CacheCade does do write caching. The downside is that the write data is stored on the SSD, so if the SSD fails you lose the data, which is why they recommend running mirrored SSDs. I also don't like that there is no RAID5 for the SSD drives; I'm not sure whether EnhanceIO can overcome this, though.

I like that EnhanceIO will migrate the hot data to the HDDs in the background.

LSI needs to add some tools to CacheCade; I'm uncomfortable with the lack of information about its methods and what it's doing. There seems to be no good reporting. If I put in a 240 GB SSD, is it being utilized? Would I benefit from more SSD, or is 240 GB already overkill?

Marc Smith

Feb 4, 2014, 1:09:25 PM
to esos-...@googlegroups.com
On Tue, Feb 4, 2014 at 12:25 PM, Todd Hunter <todd.d...@gmail.com> wrote:
> Actually, if I am understanding you correctly, the current version of CacheCade does do write caching. The downside is that the write data is stored on the SSD, so if the SSD fails you lose the data, which is why they recommend running mirrored SSDs. I also don't like that there is no RAID5 for the SSD drives; I'm not sure whether EnhanceIO can overcome this, though.

Let me clarify that... I believe the version of CacheCade included with the Syncro CS controllers we're using will only do reads, which is why it doesn't matter if it's volatile -- reads will just have to go to the real disks if the CacheCade volume fails.

I did a quick Google search and don't have any definitive hits that CacheCade provides write-back caching, but it sounds like it does, or at least "CacheCade Pro 2.0" does. So maybe the original CacheCade does not, but CacheCade Pro 2.0 does? I'd have to search some more to find out for sure.

As for your comment about RAID5: in our experience, using RAID5 with SSDs puts quite a load on the controller(s). It's much harder for the controller to keep up with writes doing RAID5 vs. RAID10. We run a lot of RAID5 SSD volumes, and reads are great all day long, but we can get into latency trouble during storms of writes. We believe this is due to the write penalty paid with RAID5. I expect LSI deliberately disallows RAID5 for CacheCade (I think that's what you're saying) for this reason.
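The RAID5 write penalty mentioned above can be put in rough numbers. A small random write on RAID5 costs four backend I/Os (read old data, read old parity, write new data, write new parity) versus two on RAID10 (data write plus mirror write). A back-of-the-envelope sketch, where the drive count and per-drive IOPS figures are hypothetical:

```python
# RAID write penalty: backend I/Os generated by one small random write.
# RAID5:  read old data + read old parity + write data + write parity = 4
# RAID10: write data + write mirror copy                              = 2
WRITE_PENALTY = {"raid5": 4, "raid10": 2}

def effective_write_iops(raw_iops_per_drive, n_drives, level):
    """Rough ceiling on random-write IOPS the array can deliver."""
    total = raw_iops_per_drive * n_drives
    return total // WRITE_PENALTY[level]

# Hypothetical 8-drive array of SSDs rated at 20,000 random-write IOPS each:
print(effective_write_iops(20_000, 8, "raid5"))   # 40000
print(effective_write_iops(20_000, 8, "raid10"))  # 80000
```

This ignores controller caching and full-stripe writes, but it illustrates why write storms hurt RAID5 SSD volumes roughly twice as much as RAID10.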



> I like that EnhanceIO will migrate the hot data to the HDDs in the background.

> LSI needs to add some tools to CacheCade; I'm uncomfortable with the lack of information about its methods and what it's doing. There seems to be no good reporting. If I put in a 240 GB SSD, is it being utilized? Would I benefit from more SSD, or is 240 GB already overkill?

Agreed. It'll be interesting to see whether the software solutions (bcache, EnhanceIO, etc.) do a better job with statistics.

Todd Hunter

Feb 4, 2014, 1:48:55 PM
to esos-...@googlegroups.com
Yep, it's CacheCade v2 that does write caching.

Mike Blanchard

Feb 4, 2014, 9:22:13 PM
to esos-...@googlegroups.com
EnhanceIO was working pretty well; the issue I was having is that once the cache fills up, performance drops like a rock, especially on large file copies to enhanced storage. Otherwise, performance on my RAID50 array was within 10-20% of an SSD. Right now I'm working on Btier; I'll post benchmarks once I'm done. At the moment Btier is fighting me.

Morgan Robertson

Feb 5, 2014, 6:33:48 AM
to esos-...@googlegroups.com
From my testing with EIO (write-through only, sorry) and ESOS, I did initially get great/better performance using 2x RAID0 Samsung 840 Pros via MD RAID, but over time performance dropped as one of the SSDs slowed down (wear leveling?). It can be hard to test whether the caching is working, as most benchmarks write data and then read that same data straight back, which often doesn't give the caching mechanism time to cache the data the way a real-world workload would (in the case of a WT cache, that is).

If you trust your SSDs with regard to power failure (the only reason I didn't get more SSDs and use write-back), you could use MD RAID to RAID 1/10 your SSDs.

I also had trouble with EIO reading my '/etc/eio.conf' file, so I just had an eio_cli command run at every boot. I'd say I was probably doing something wrong.
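For anyone hitting the same eio.conf issue, the boot-time workaround might look something like this rc.local fragment. The device paths, cache name, and mode are assumptions for illustration; check the eio_cli usage output on your build before copying anything:

```shell
# /etc/rc.local excerpt: recreate the EnhanceIO cache at every boot
# instead of relying on /etc/eio.conf being parsed.
#   -d  backing (source) device      -s  SSD cache device
#   -m  cache mode (wt = write-through)   -c  cache name
eio_cli create -d /dev/sda -s /dev/sdb -m wt -c eio_cache0
```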

If anyone gets a chance to test any of the other caching/tiering packages, let me know how it goes. I just haven't had time to test them myself.

Regards,

Morgan R 

Mike Blanchard

Feb 5, 2014, 8:39:07 AM
to esos-...@googlegroups.com
That matches what I was seeing: I'd get decent performance, and moving files around would be great until I had moved more than the size of the SSD, at which point performance would drop as the cache dumped data to the drive to clear more space. For Btier I've got 3x 250 GB SSDs; I partitioned them down to 210 GB each to leave some space for wear leveling, and then RAID5'd them. I'll try Btier again after Marc updates it, and hopefully my current issue will go away. It will be interesting to see, since most of the files are 20-100 GB .vhdx files for the clustered VMs, so I don't know how well Btier will do with only 420 GB of space, but I like the idea of software automatically tiering the data instead of me having to migrate VMs to different storage manually.
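The automatic tiering Mike describes boils down to promote/demote decisions driven by access frequency. A toy sketch of that logic (this is not Btier's actual algorithm; the threshold and block IDs are invented for illustration):

```python
# Toy hot/cold tiering decision: promote blocks whose recent access count
# crosses a threshold to the fast (SSD) tier, demote idle ones to HDD.
from collections import Counter

PROMOTE_THRESHOLD = 4  # accesses within the sampling window (arbitrary)

def plan_migrations(access_counts, on_ssd):
    """Return (promote, demote) sets of block IDs for one sampling window."""
    promote = {b for b, n in access_counts.items()
               if n >= PROMOTE_THRESHOLD and b not in on_ssd}
    demote = {b for b in on_ssd if access_counts.get(b, 0) == 0}
    return promote, demote

counts = Counter({"blk1": 9, "blk2": 1, "blk3": 5})
promote, demote = plan_migrations(counts, on_ssd={"blk2", "blk4"})
print(sorted(promote))  # ['blk1', 'blk3']  -> move to SSD tier
print(sorted(demote))   # ['blk4']          -> move back to HDD tier
```

The appeal over manual VM migration is exactly this: the block layer makes these decisions continuously, per block, instead of an admin moving whole multi-GB .vhdx files between datastores.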