Boris Partensky wrote:
> I am trying to understand how key redistribution algorithm works in
> case of ArrayModNodeLocator hashing.
> Looks like getSequence just returns same node over and over again?
I believe this is correct. Classic modulus hashing always hashes a given
key to the same node, given the same server list. That means it really
doesn't do anything interesting when redistributing, unless you've
changed the node list, which you can't really do.
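As a rough illustration (class and method names here are made up for the sketch, not spymemcached's actual implementation), classic modulus hashing reduces to something like:

```java
import java.util.Arrays;
import java.util.List;

// Rough sketch of classic modulus hashing. Because the same key always
// maps to the same index while the server list is unchanged, a
// "sequence" of fallback nodes degenerates to the same single node.
public class ModHashSketch {
    static int primaryIndex(String key, int serverCount) {
        // Mask the sign bit so the modulus is never negative.
        return (key.hashCode() & 0x7fffffff) % serverCount;
    }

    public static void main(String[] args) {
        List<String> servers = Arrays.asList("10.0.0.1:11211",
                                             "10.0.0.2:11211",
                                             "10.0.0.3:11211");
        // Same key, same list: always the same index.
        System.out.println(primaryIndex("user:42", servers.size()));
    }
}
```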
I think this method is here mainly because it's part of the interface,
and someone implementing their own NodeLocator could determine
primary/backup nodes another way. The only thing that makes sense for
the ArrayModNodeLocator is to return the same node.
Perhaps Dustin will chime in if there's more to this and I need
correcting.
- Matt
--
You received this message because you are subscribed to the Google Groups "spymemcached" group.
To post to this group, send email to spymem...@googlegroups.com.
To unsubscribe from this group, send email to spymemcached...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/spymemcached?hl=en.
So, it would rehash to the subset of "active" nodes? This could be
implemented with a Whalin-compatible NodeLocator, but that behavior may
differ from that of other clients.
I just checked with dormando on this, and he says the behavior with
modulus hashing differs across a number of clients. The defaults for
what to do when a node is down have even changed in some clients over
time. According to dormando, clients didn't start getting more
consistent until consistent hashing came along.
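To see why modulus hashing behaves so badly here: dropping one node and rehashing over the remaining ones remaps most keys, whereas consistent hashing moves only roughly 1/n of them. A small made-up demo (not spymemcached code):

```java
// Demo of why shrinking the server list under modulus hashing remaps
// most keys: key % 4 and key % 3 only agree for keys that are
// 0, 1, or 2 mod 12, i.e. 25% of them.
public class RemapDemo {
    // Fraction of keys whose server index changes when the server
    // count shrinks from 'from' to 'to'.
    static double remappedFraction(int from, int to, int totalKeys) {
        int moved = 0;
        for (int key = 0; key < totalKeys; key++) {
            if (key % from != key % to) moved++;
        }
        return (double) moved / totalKeys;
    }

    public static void main(String[] args) {
        System.out.printf("%.1f%% of keys remapped%n",
                100 * remappedFraction(4, 3, 120_000));
        // prints "75.0% of keys remapped"
    }
}
```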
It seems to me that if you're not using consistent hashing, it may be
safer to just cancel operations and back off with retries until the node
is brought back.
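A hedged sketch of that back-off-and-retry idea (a hypothetical helper, not part of the spymemcached API):

```java
import java.util.concurrent.Callable;

// Hypothetical retry helper: rather than rehashing the key to another
// node, keep retrying the same node with exponential backoff until it
// comes back or we give up.
public class BackoffRetry {
    static <T> T retryWithBackoff(Callable<T> op, int maxAttempts) throws Exception {
        long delayMs = 100;
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) throw e;    // node still down: give up
                Thread.sleep(delayMs);
                delayMs = Math.min(delayMs * 2, 5_000); // exponential backoff, capped
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] attempts = {0};
        String result = retryWithBackoff(() -> {
            attempts[0]++;
            if (attempts[0] < 3) throw new RuntimeException("node still down");
            return "stored";
        }, 5);
        System.out.println(result + " after " + attempts[0] + " attempts");
        // prints "stored after 3 attempts"
    }
}
```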
I believe Ash has a patch which keeps delete ops around for retry. Let
me ping him to see if that can be published. It may help you, and it
seems like a 'safer' thing to do than cancelling delete operations,
though an argument could be made that the Whalin client's behavior in
this case is similar to what you'd have with consistent hashing.
Note that 2.5rc3 includes a fix for consistent-hashing compatibility
with libketama-based clients. From what I saw on GitHub, it's probably
in your tree as well.
- Matt
Eric and I talked about cancellation a bit recently, and we may be
able to help with that some, too. I think we may be too conservative
about cancellations and could likely reduce the number of operations
that get cancelled on such a failure.
Except that in this case I'm thinking of the situation where the server
is down for a period of time.
But maybe you're talking about moving read/write operations that are
deletes back to the front of the inputQueue?
- Matt