Stats in a clustered environment - Getting keys


Jonathan Minond

May 15, 2014, 4:39:06 PM
to memc...@googlegroups.com
I am seeing something that struck me as a little odd.

From my reading, as I understand it, each memcached node in a clustered environment holds a portion of the objects in the cluster.

So I would expect that if I have 27 keys and 3 nodes, each node is holding roughly 9 keys/objects... is that correct to assume?

So, to test this out, here is my endpoint configuration:
<add key="MemCached.Endpoint" value="server1:11211,server2:11211,server3:11211" />

As a client, I am using the BeIT Memcached Client for .NET (code.google.com/p/beitmemcached/)

To get the keys, I am using telnet to list the slabs and then the items in each slab, as described by Boris here: groups.google.com/forum/#!topic/memcached/YyzonP9HUi0

1) I loop through my collection of hosts.
2) Run the telnet process against that host (a sample of the exchange is shown below).
3) Collect all the info.
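
For reference, the exchange against each node looks roughly like this (the slab ID, sizes, and keys here are made up, and the real "stats items" output has many more stat lines per slab):

stats items
STAT items:1:number 2
STAT items:1:age 517
END
stats cachedump 1 100
ITEM somekey [12 b; 1400179146 s]
ITEM otherkey [9 b; 1400179150 s]
END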

It seems that I am getting the same keys listed on all 3 servers...?
I did not expect this, and I am hoping someone can explain.

To clarify:
This is how I do a GET:
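
(The actual snippet didn't make it into this post; for context, a typical read through the BeIT client looks roughly like the following, assuming the Setup/GetInstance pattern from the project's documentation and a made-up key name.)

// Not the original code -- just the usual BeIT usage pattern, for reference.
string endpoints = Config.GetValueWithDefault("MemCached.Endpoint", "localhost:11211");
MemcachedClient.Setup("MyCache", endpoints.Split(','));
MemcachedClient cache = MemcachedClient.GetInstance("MyCache");

object value = cache.Get("some-key");   // hypothetical key name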


And this is how I am trying to get the list of keys... there is a bit of debug code buried in there, but it should still be clear (TelNetConn is a simple telnet helper):

List<string> ret = new List<string>();

string memCacheEndPointAddress = Config.GetValueWithDefault("MemCached.Endpoint", "localhost:11211");

string[] points = memCacheEndPointAddress.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

foreach (string h in points)
{
    string[] hParts = h.Split(new[] { ':' }, StringSplitOptions.RemoveEmptyEntries);

    string cacheHost = hParts[0];
    TelNetConn tc = new TelNetConn(cacheHost, Convert.ToInt32(hParts[1]));

    if (tc.IsConnected)
    {
        ret.Add("HOST: " + cacheHost);

        // Ask this node which slabs currently contain items.
        tc.WriteLine("stats items");
        string s = tc.Read();
        string[] sLines = s.Split(
            new string[] { Environment.NewLine },
            StringSplitOptions.RemoveEmptyEntries);

        foreach (string sl in sLines)
        {
            if (sl == "END") continue;

            // Lines look like "STAT items:<slabID>:<stat name> <value>".
            string[] slParts = sl.Split(new[] { ':' }, StringSplitOptions.RemoveEmptyEntries);

            int slabID = Convert.ToInt32(slParts[1]);
            string slabStat = slParts[2];

            // Each slab shows up once per stat name; only react to the
            // "number" stat so each slab is dumped a single time.
            if (slabStat.StartsWith("number"))
            {
                // Dump up to 100 items from this slab.
                tc.WriteLine("stats cachedump " + slabID + " 100");
                s = tc.Read();

                if (String.IsNullOrEmpty(s)) continue;

                // Debug: uncomment to capture the raw cachedump response.
                // ret.Add("FULL: " + s);

                // The response is one "ITEM <key> [<size> b; <expiry> s]"
                // line per item, terminated by "END".
                foreach (string itemLine in s.Split(
                    new string[] { Environment.NewLine },
                    StringSplitOptions.RemoveEmptyEntries))
                {
                    if (!itemLine.StartsWith("ITEM ")) continue;

                    string[] itemParts = itemLine.Split(new[] { ' ' }, StringSplitOptions.None);
                    ret.Add("ITEM: " + itemParts[1]);
                }
            }
        }
    }
    else
    {
        ret.Add("HOST: " + cacheHost + " NOT CONNECTED");
    }

    tc.Dispose();
}

Ryan McElroy

May 17, 2014, 9:19:44 PM
to memc...@googlegroups.com
memcached itself knows nothing about other nodes in the system. How the keys are distributed is entirely up to your client implementation. I'm not familiar with the BeIT client, but from reading through its wiki page, I would expect it to split the keys approximately equally among your memcached servers. I say approximately because hashing functions are probabilistic; with only 27 keys, I wouldn't be surprised to see significant deviation. At large numbers of keys, I would expect a pretty even distribution, though.
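
To illustrate the general idea (this is not BeIT's actual algorithm, just the shape of client-side sharding): the client hashes each key, and the hash alone decides which single server the key goes to, for both sets and gets.

// Toy illustration of client-side sharding -- not BeIT's real code.
// memcached itself never forwards or replicates anything.
static string ServerForKey(string key, string[] servers)
{
    uint hash = 0;
    foreach (char c in key)
        hash = hash * 31 + c;                     // deliberately simple hash
    return servers[(int)(hash % (uint)servers.Length)];
}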

I think more important than how you are fetching keys from each server is how you're using the BeIT client -- which you don't show here. Do you set it up to do replication or sharding? If replication, what you're seeing is expected. If sharding, I'd say it's unexpected.

You can figure out what it is doing by using a packet sniffer (e.g., ngrep or Wireshark) and seeing which box the client sends each key's set to.

~Ryan



Jonathan Minond

May 19, 2014, 12:53:53 PM
to memc...@googlegroups.com
Hi Ryan, 

Thanks for getting back to me.
I have the code for the BeIT client, so I can look... can you give me a hint about what I should be looking for?
It's an open-source implementation ( https://code.google.com/p/beitmemcached/source/browse/ ).
If not, is there a .NET client you would recommend, or one that is more widely used/supported perhaps?
All of my interaction with BeIT/memcached is wrapped in one library, so I can swap the back-end client fairly easily if that would be better.





Ryan McElroy

May 19, 2014, 1:25:06 PM
to memc...@googlegroups.com
I took a quick look at the code, specifically here: https://code.google.com/p/beitmemcached/source/browse/trunk/ClientLibrary/MemcachedClient.cs#354

As far as I can tell, the client is probably doing the right thing (i.e., sharding across memcached instances as expected). I didn't see options to even attempt replication.

I think this comes down to a bug in your code, an error in your methodology, or a bug in the library. I can't tell which from the information you provided.

I do most of my coding in PHP, C, and Python, so I can't help much with .NET stuff, sorry.

~Ryan

Jonathan Minond

May 19, 2014, 1:35:48 PM
to memc...@googlegroups.com
Great, thanks.
I will have a closer look at what I am doing and see how the commands are moving around.
Thanks for your thoughts.

Henrik Schröder

May 19, 2014, 11:09:14 PM
to memc...@googlegroups.com
I can guarantee you that the client only stores each key on a single server. :-)

I would guess that you are seeing these results because you haven't cleared your cache servers between tests, or because your slab method of getting the data isn't affected by flushing each server, or something similar.

Try a simpler test: set three keys through the client, then manually telnet to each server, try to get each key, and see what happens.
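
Something along these lines (assuming the usual MemcachedClient.Setup/GetInstance calls from the BeIT docs; the key names are just examples):

// Sketch of the test, not production code.
// Flush each node first (telnet in and run "flush_all") so leftovers
// from earlier runs don't muddy the result.
MemcachedClient.Setup("TestCache",
    new string[] { "server1:11211", "server2:11211", "server3:11211" });
MemcachedClient cache = MemcachedClient.GetInstance("TestCache");

cache.Set("key1", "value1");
cache.Set("key2", "value2");
cache.Set("key3", "value3");

// Then telnet to each server and run:
//   get key1
//   get key2
//   get key3
// With sharding, each key should return a VALUE line on exactly one node
// and just "END" (a miss) on the other two.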


/Henrik





Jonathan

May 20, 2014, 6:53:09 AM
to memc...@googlegroups.com
I believe you are right about not clearing the keys between tests.
Spot on.

