> Hi guys, is this the right group to talk about some implementations
> and features regarding libketama? If not, stop here ;)
>
> Today, when I read the source code of libketama [1] to understand how
> it works and how consistent hashing works, I of course found some
> "hot" characteristics that are probably errors. Well, maybe I'm
> mistaken ;)
I think you're right. Because points-per-server is proportional to
(server weight / number of servers), when a server is added or
removed, the other servers may receive more or fewer points. Probably
ketama should have used points-per-server = server-weight * K.
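To make the difference concrete, here's a rough sketch (Python, with
made-up constants TOTAL_POINTS and K, not ketama's actual code)
contrasting the two schemes when a server is added:

    TOTAL_POINTS = 1024   # fixed continuum size, e.g. bounded by a shared memory region
    K = 160               # fixed number of points per unit of weight

    def points_fixed_total(weights):
        # points-per-server proportional to (weight / total weight)
        total_weight = sum(weights.values())
        return {s: int(TOTAL_POINTS * w / total_weight) for s, w in weights.items()}

    def points_weight_times_k(weights):
        # points-per-server = server-weight * K
        return {s: w * K for s, w in weights.items()}

    before = {"A": 1, "B": 1}
    after = {"A": 1, "B": 1, "C": 1}

    print(points_fixed_total(before))      # {'A': 512, 'B': 512}
    print(points_fixed_total(after))       # {'A': 341, 'B': 341, 'C': 341} -- A and B shrink
    print(points_weight_times_k(before))   # {'A': 160, 'B': 160}
    print(points_weight_times_k(after))    # {'A': 160, 'B': 160, 'C': 160} -- A and B unchanged

With the first scheme every server's points move on add/remove, so
unrelated keys get reshuffled; with the second, only the added or
removed server's points change.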
robey
On Jun 24, 4:18 am, Robey Pointer <ro...@twitter.com> wrote:
> I think you're right. Because points-per-server is proportional to
> (server weight / number of servers), when a server is added or
> removed, the other servers may receive more or fewer points. Probably
> ketama should have used points-per-server = server-weight * K.

IIRC the original libketama implementation uses a _fixed total_ number
of points derived from the size of the shared memory region they reside
in, hence the change per server on server add/remove.
Though many memcached clients implement consistent hashing, most
provide their own implementation of the algorithm rather than link
with libketama directly. For instance, in the Perl client
Cache::Memcached::Fast we use fixed points-per-server =
server-weight * K; likely other clients do the same.
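For illustration, a minimal sketch of a continuum built with fixed
points-per-server = server-weight * K (Python; K = 160, the hashing
details, and the server addresses are assumptions for the example, not
Cache::Memcached::Fast's actual code):

    import hashlib
    from bisect import bisect_left

    K = 160  # assumed points per unit of weight

    def build_continuum(servers):
        """servers: dict of 'host:port' -> weight; returns sorted (point, server) pairs."""
        continuum = []
        for server, weight in servers.items():
            for i in range(weight * K):
                digest = hashlib.md5(f"{server}-{i}".encode()).digest()
                point = int.from_bytes(digest[:4], "big")
                continuum.append((point, server))
        continuum.sort()
        return continuum

    def lookup(continuum, key):
        # find the first point on the ring at or after the key's hash
        point = int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")
        idx = bisect_left([p for p, _ in continuum], point) % len(continuum)
        return continuum[idx][1]

    servers = {"10.0.0.1:11211": 1, "10.0.0.2:11211": 2}
    continuum = build_continuum(servers)
    print(lookup(continuum, "some_key"))

Adding or removing a server here only inserts or removes that server's
own points; everyone else's points stay where they were.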
You are right that the algorithm is somewhat broken :), but you are
missing a point here that makes things even worse:
1) Imagine you have cache servers A, B and C
2) C goes down
3) keys are remapped to A and B
4) C comes up
and here comes the mess :)
* Since you have reassigned some keys (k1, k2) from C to A and B, A and
B might now have newer versions of k1 and k2. Imagine the cache on C
is still there (it was probably a network or firewall issue): now C is
serving the old values of k1 and k2 ;)
* I have not read the source you are referring to, but if it does
stuff like you describe, namely remaps keys from A to B and vice versa,
the mess is bigger. When C went down, you remapped k3 from A to B, so
B might now have a newer version of k3. When C comes back up, you remap
k3 back to A, so A will still serve the old version of k3.
Messy messy. Most of the clients are broken in this way: they remap
keys when a server goes down and don't care that you might serve stale
data after the server comes back up and the keys are remapped again.
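To see the A<->B shuffle in a fixed-total-points continuum, here is a
toy simulation (Python; the server names, TOTAL_POINTS, and the key set
are hypothetical, not libketama's code):

    import hashlib

    TOTAL_POINTS = 1024  # assumed fixed continuum size

    def build_continuum(servers):
        # each server gets an equal share of a fixed number of points
        per_server = TOTAL_POINTS // len(servers)
        points = []
        for server in servers:
            for i in range(per_server):
                h = hashlib.md5(f"{server}-{i}".encode()).digest()
                points.append((int.from_bytes(h[:4], "big"), server))
        return sorted(points)

    def lookup(continuum, key):
        h = int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")
        for point, server in continuum:
            if point >= h:
                return server
        return continuum[0][1]  # wrap around the ring

    full = build_continuum(["A", "B", "C"])
    degraded = build_continuum(["A", "B"])  # C is down

    shuffled = 0
    for n in range(10000):
        key = f"key-{n}"
        before, after = lookup(full, key), lookup(degraded, key)
        if before != "C" and before != after:
            shuffled += 1
    print(shuffled)  # typically non-zero: keys move between A and B even though only C left

So on top of the staleness on C itself, keys that never touched C can
come back stale after the down/up cycle.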