Can you describe the problem you see? I am cc'ing the whole group
in case Stephen or Douglas has an answer.
- Ming
On Sun, Sep 4, 2011 at 1:56 PM, Dulcardo Arteaga Clavijo
<dulc...@gmail.com> wrote:
> Hello,
> I got the module compiled for kernel version 2.6.39.1, but dom0 is using
> 2.6.32.x, and that version cannot be patched with dm-cache. So I have been
> trying to patch a 2.6.39 kernel with the dom0 patches to get dm-cache
> working. I am having some problems, but I hope I can get this working
> today. I will let you know.
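>
> In case it helps, a minimal sketch of building an out-of-tree module
> against a specific kernel tree (the source directory and paths below
> are placeholders, not the project's actual layout):
>
>   # Build dm-cache against the patched 2.6.39 tree rather than
>   # the headers of the running kernel.
>   cd ~/dm-cache-src
>   make -C /usr/src/linux-2.6.39 M=$(pwd) modules
>
>   # After booting dom0 into the patched kernel, load the module.
>   insmod ./dm-cache.ko
>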
> -Dulcardo
>
> On Sun, Sep 4, 2011 at 12:46 AM, Ming Zhao <mi...@cs.fiu.edu> wrote:
>>
>> Dulcardo,
>>
>> How's the progress with setting up dm-cache? Below is the problem
>> description from XLS Hosting. Read it carefully and ask me if you
>> have any questions. Come up with a plan (with Daniel) and discuss it
>> with me next week.
>>
>> "The nature of our workload (with a small number of players, but not a
>> predictable set, brining most of the pain) would make it the sanest
>> approach to pool virtual servers into a common cache. With dm-cache
>> the way it is set up right now,-r every individual Logical Volume on
>> the SAN at the moment a guest is started (and tear it down when a
>> guest disappears). This will waste precious cache space on unworthy
>> players, but guarantee that guests will not deal with possibly stale
>> data if they are moved around different hardware nodes.
>>
>> 2. Create a dm-cache per LVM Volume Group. This would allow
>> most of the cache to be used by whichever guest produces the biggest
>> i/o load. We would have to take special care to flush the cache
>> whenever we see a guest machine attached to it move around the
>> cluster.
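>>
>> As an illustration of the two options (a sketch only: the device
>> paths are hypothetical, and CACHE_PARAMS stands in for the dm-cache
>> target's version-specific fields such as block size, cache size,
>> and policy):
>>
>>   # Option 1: per-LV cache, created when the guest starts
>>   SRC=/dev/vg0/guest1
>>   echo "0 $(blockdev --getsz $SRC) cache $SRC /dev/sdb $CACHE_PARAMS" \
>>       | dmsetup create guest1-cache
>>   # ...and torn down again when the guest stops or migrates away
>>   dmsetup remove guest1-cache
>>
>>   # Option 2: one cache per Volume Group; on migration the whole
>>   # cache has to be flushed. With no finer-grained interface, the
>>   # blunt way (assuming a non-persistent cache) is to quiesce,
>>   # drop, and recreate the mapping:
>>   dmsetup suspend vg0-cache
>>   dmsetup table vg0-cache > /tmp/vg0-cache.table
>>   dmsetup remove vg0-cache
>>   dmsetup create vg0-cache /tmp/vg0-cache.table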
>>
>> I think what we're trying to do in approach 2 (but not necessarily
>> executed that way) is the way forward, but invalidating the entire
>> cache whenever one virtual machine moves around is sub-optimal. We'd
>> be looking for a way to work around that, for example by allowing us
>> to invalidate the cache for a specific range of blocks.
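>>
>> Device-mapper targets can expose runtime commands through dmsetup
>> message, so a range invalidation, if dm-cache grew one, might be
>> driven like this (the "invalidate" message is hypothetical, not an
>> existing dm-cache command):
>>
>>   # ask the cache map to drop any blocks covering 2048 sectors
>>   # starting at sector offset 2048 of the cached device
>>   dmsetup message vg0-cache 0 invalidate 2048 2048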
>>
>> Another (perhaps even cooler) approach would be to let dm-cache
>> use one _shared_ cache device for multiple cache maps. Then we could
>> execute approach 1 but get the performance of approach 2."
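>>
>> That shared-device end state would presumably look like several maps
>> naming one cache device (hypothetical: this is exactly the capability
>> being asked for above; CACHE_PARAMS as in the earlier sketch):
>>
>>   # two per-guest maps sharing the cache device /dev/sdb
>>   for g in guest1 guest2; do
>>       echo "0 $(blockdev --getsz /dev/vg0/$g) cache /dev/vg0/$g /dev/sdb $CACHE_PARAMS" \
>>           | dmsetup create $g-cache
>>   done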
>>
>>
>> - Ming
>
>
I am having this problem... Any solutions? Thanks.