Aliaksandr Valialkin <val...@gmail.com> writes:
> Currently the CGET and CSET commands are supported only by the Server class
> - https://github.com/valyala/ybc/blob/master/libs/go/memcache/server.go
> and the memcached app built on top of this class
> - https://github.com/valyala/ybc/tree/master/apps/go/memcached .
So, this is a CAS-based GET, it seems.
(note, I wrote the CAS support in memcached and countless other client
and server implementations)
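Roughly, I'd expect the exchange to have a shape like the sketch below
(the Go type and field names are made up here, not the actual ybc CGET
wire format): the client sends the key plus the token it already holds,
and the server only ships the value bytes back when that token no longer
matches.

    package main

    import "fmt"

    // cgetRequest: the key plus the CAS/version token the client already
    // holds for its local copy (hypothetical names, not ybc's format).
    type cgetRequest struct {
        Key string
        Cas uint64
    }

    // cgetResponse: either "not modified" (no value bytes on the wire)
    // or the new value together with its new token.
    type cgetResponse struct {
        NotModified bool
        Value       []byte
        Cas         uint64
    }

    func main() {
        serverValue, serverCas := []byte("big page fragment"), uint64(42)

        // The client revalidates a copy it cached under token 42.
        req := cgetRequest{Key: "page:1", Cas: 42}

        var resp cgetResponse
        if req.Cas == serverCas {
            resp = cgetResponse{NotModified: true}
        } else {
            resp = cgetResponse{Value: serverValue, Cas: serverCas}
        }
        fmt.Printf("not modified: %v, value bytes sent: %d\n",
            resp.NotModified, len(resp.Value))
    }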
> CachingClient can significantly reduce latency by avoiding a CGET
> round-trip to the server and returning the locally cached item if the
> item has a non-zero ValidateTtl value set via a
> CachingClient.SetWithValidateTtl() call.
Well, sure, you're trading coherence for lower latency. If that works
for your application, then it's a tradeoff you're making that's really
independent of memcached.
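To put that tradeoff in code: here is a small sketch (my own
hypothetical shape, not the actual CachingClient API) of a local cache
with a validate TTL. While the validate deadline is in the future, reads
are served from the local copy with no round trip at all, which is
exactly where staleness can creep in.

    package main

    import (
        "fmt"
        "time"
    )

    // localEntry is the client-side copy of an item (hypothetical).
    type localEntry struct {
        value            []byte
        cas              uint64
        validateDeadline time.Time
    }

    type cachingClient struct {
        local map[string]localEntry
        // revalidate stands in for the CGET round trip; it returns the
        // current value, its token, and whether the item changed.
        revalidate func(key string, cas uint64) ([]byte, uint64, bool)
    }

    func (c *cachingClient) Get(key string, validateTtl time.Duration) []byte {
        e, ok := c.local[key]
        if ok && time.Now().Before(e.validateDeadline) {
            // Fast path: no round trip, but this copy may already be stale.
            return e.value
        }
        // Slow path: revalidate (or fetch) from the server.
        value, cas, changed := c.revalidate(key, e.cas)
        if !changed {
            value = e.value // "not modified": keep the local bytes
        }
        c.local[key] = localEntry{value, cas, time.Now().Add(validateTtl)}
        return value
    }

    func main() {
        c := &cachingClient{
            local: map[string]localEntry{},
            revalidate: func(key string, cas uint64) ([]byte, uint64, bool) {
                return []byte("v1"), 1, cas != 1 // server holds v1 under token 1
            },
        }
        fmt.Println(string(c.Get("k", time.Second))) // misses locally, fetches
        fmt.Println(string(c.Get("k", time.Second))) // served from the local copy
    }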
> I have no measurements at the moment. But simple math suggests
> that 'conditional get' requests will start saving network bandwidth
> when the average size of frequently requested items exceeds 10-100
> bytes, depending on the number of responses sent per TCP packet.
> Bandwidth savings may be significant for big items such as images or
> heavy parts of web pages.
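For what it's worth, the break-even claim above can be sketched in a few
lines of Go; the per-response overhead here is an assumed number, not a
measurement of anything.

    package main

    import "fmt"

    func main() {
        // Assumed per-response overhead for a "not modified" reply
        // (framing, headers); picked from the 10-100 byte range
        // mentioned above, purely as an assumption.
        const notModifiedOverhead = 50 // bytes

        for _, itemSize := range []int{10, 100, 1000, 100000} {
            saved := itemSize - notModifiedOverhead
            if saved < 0 {
                saved = 0
            }
            fmt.Printf("item %6d B -> bytes saved per revalidated hit: %6d\n",
                itemSize, saved)
        }
    }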
This is why I'm thinking measurements would be good before suggesting
usage of this sort of thing in general (perhaps discussion over on the
memcached list?). I would posit that in practice, the latency wouldn't
be worth the bandwidth savings. For a second-level cache, a tap-based
evented invalidation protocol would be the lowest-latency way to stay
coherent, I'd think.
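By "evented invalidation" I mean something with roughly this shape (a
generic sketch, not TAP itself): the server pushes invalidation events
and the client drops its local copy as soon as one arrives, so reads
stay local and coherent without a per-read round trip.

    package main

    import (
        "fmt"
        "sync"
    )

    // invalidatingCache: a local cache kept coherent by pushed
    // invalidation events rather than per-read revalidation.
    type invalidatingCache struct {
        mu    sync.Mutex
        local map[string][]byte
    }

    func (c *invalidatingCache) Get(key string) ([]byte, bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        v, ok := c.local[key]
        return v, ok
    }

    func (c *invalidatingCache) Put(key string, value []byte) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.local[key] = value
    }

    // consume drops each invalidated key so the next read for it falls
    // through to the server.
    func (c *invalidatingCache) consume(invalidations <-chan string) {
        for key := range invalidations {
            c.mu.Lock()
            delete(c.local, key)
            c.mu.Unlock()
        }
    }

    func main() {
        c := &invalidatingCache{local: map[string][]byte{}}
        events := make(chan string)
        done := make(chan struct{})
        go func() { c.consume(events); close(done) }()

        c.Put("k", []byte("v1"))
        events <- "k" // the server pushes an invalidation for "k"
        close(events) // end of the event stream for this demo
        <-done        // wait for the consumer to drain it

        _, ok := c.Get("k")
        fmt.Println("still cached locally:", ok) // false: the copy was dropped
    }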
I do something similar for larger item caches served with a more
stream-friendly protocol (HTTP). The larger an item gets, the less
suitable memcached is for it, as the things we do to keep latency low
on object retrieval start to become a liability for first-byte latency
on larger objects. I don't have good figures for where that starts to
take effect on various networks, though. Intuitively, an object
retrieval on the order of a few bytes that completes in a single packet
is a vastly different problem from a 10GB object being pulled into an
HTTP client.
You still have to figure out *what* to cache. If you have needs in
this area, I'd be interested in hearing more about your workloads and
requirements (probably more so than the rest of the list).
--
dustin