Binary vs text protocol efficiency


Manish Jain

Jul 17, 2013, 10:56:31 PM
to memc...@googlegroups.com
Have there been any benchmarks of packet sizes for text vs. binary communication with memcached? And what sort of general performance gain can one expect from switching to the binary protocol?

Thanks!
-Manish

Yiftach Shoolman

Jul 17, 2013, 11:58:33 PM
to memc...@googlegroups.com
You can use memtier_benchmark to test the performance difference between the two protocols.
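
Something along these lines should give a comparison (flag names as best I recall from memtier_benchmark's help, so double-check them and adjust host, port, and load parameters for your setup):

  # text protocol
  memtier_benchmark -s 127.0.0.1 -p 11211 -P memcache_text -t 4 -c 50 -n 100000 --ratio=1:10

  # binary protocol
  memtier_benchmark -s 127.0.0.1 -p 11211 -P memcache_binary -t 4 -c 50 -n 100000 --ratio=1:10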


--

Yiftach Shoolman
+972-54-7634621

Travis Crowder

Jul 18, 2013, 12:35:37 AM
to memc...@googlegroups.com
You could also use memslap, which in my tests showed 25-33% better performance with the binary protocol.
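
A comparison run would look something like this (memslap ships with libmemcached; --binary is the relevant switch as far as I recall, but check --help for your version since options vary):

  memslap --servers=127.0.0.1:11211 --concurrency=100 --execute-number=10000
  memslap --servers=127.0.0.1:11211 --concurrency=100 --execute-number=10000 --binary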

Sent from Mailbox for iPhone

David Terei

Jul 18, 2013, 3:06:53 AM
to memc...@googlegroups.com
My 2 cents on tooling: I use a tool called mutilate, which is very nice to use, gives accurate results (e.g., 95th-percentile latency), and supports using multiple client machines to generate load.
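
A multi-machine run would look roughly like the following; I'm recalling the flag names (--agentmode, -a, -T, -c, -t) from mutilate's README, so verify them against your copy:

  # on each load-generating machine
  mutilate --agentmode &

  # on the coordinating machine
  mutilate -s memcached-host -a loadgen1 -a loadgen2 -T 8 -c 4 -t 30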

In my experience benchmarking a lot of different memcached configurations, the answer isn't simple; it depends on your setup. With a low number of clients/connections to the server, the binary protocol outperforms ASCII significantly. With more clients/connections and more load, they become equal. Binary wins in the sense that it is never worse than ASCII and is sometimes significantly better, just not always.

Here are some numbers I just ran again. This is with memcached running on a 12-core machine with HT (so 24 vCPUs), 2.7GHz Xeon, with about 30 client machines hitting it over a 10G network:

* Generating load from one machine using 8 connections:
  Binary: 933k req/s with 578us 99th percentile latency
  Text: 767k req/s with 631us 99th percentile latency

* Generating load from 30 machines using 4 connections per machine:
  Binary: 2.7M req/s with 700us 99th percentile latency
  Text: 2.7M req/s with 692us 99th percentile latency

I haven't investigated further, but I imagine other bottlenecks in event handling and the kernel become the limiting factor under load, rather than the protocol.

Cheers,
David

Brian Moon

Jul 18, 2013, 9:52:57 AM
to memc...@googlegroups.com, Manish Jain
While tools like memslap will show very large gains, your application may
never see them. In my testing of my application, using the binary protocol
showed no noticeable difference in performance. That is simply because
memcached is not the bottleneck in my application, and I highly doubt it is
in yours either.

There are other benefits to using the binary protocol, like cas. I would
not switch for speed alone.


Brian.
--------
http://brian.moonspot.net/

dormando

Jul 27, 2013, 10:12:34 PM
to memc...@googlegroups.com
Benchmarking binprot is nearly useless: there's nothing inherent in the
protocol which makes it any faster for raw requests/responses. It does
give your application more flexibility in how it does its gets/sets, e.g.
noreply commands for issuing sets without waiting for the response. You can
also pack sets/gets together, etc.
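
As a rough sketch of what "packing gets together" looks like on the wire (my own illustration, not anything from memcached's code): the binary protocol's quiet commands let you send a whole batch of GETQ requests in one write, terminated by a NOOP, and read the hits back in one pass. The 24-byte header layout and opcodes (GETQ = 0x09, NOOP = 0x0a) come from the binary protocol spec; the helper names and the 127.0.0.1:11211 address are just assumptions for the example.

import socket
import struct

HEADER = struct.Struct('>BBHBBHIIQ')   # 24-byte binary protocol header
REQ_MAGIC, RES_MAGIC = 0x80, 0x81
OP_GETQ, OP_NOOP = 0x09, 0x0a

def recv_exact(sock, n):
    # Read exactly n bytes from the socket.
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError('connection closed')
        data += chunk
    return data

def multiget(sock, keys):
    # Pack one GETQ per key (opaque = key index), then a NOOP to flush.
    buf = b''
    for i, key in enumerate(keys):
        k = key.encode()
        buf += HEADER.pack(REQ_MAGIC, OP_GETQ, len(k), 0, 0, 0, len(k), i, 0) + k
    buf += HEADER.pack(REQ_MAGIC, OP_NOOP, 0, 0, 0, 0, 0, 0, 0)
    sock.sendall(buf)                   # one write for the whole batch

    results = {}
    while True:
        hdr = recv_exact(sock, 24)
        magic, opcode, klen, elen, _, status, blen, opaque, cas = HEADER.unpack(hdr)
        body = recv_exact(sock, blen)
        if opcode == OP_NOOP:           # end of batch; unseen keys were misses
            return results
        if status == 0:
            # Strip the flags extras (and key, if any) to get the value.
            results[keys[opaque]] = body[elen + klen:]

sock = socket.create_connection(('127.0.0.1', 11211))
print(multiget(sock, ['foo', 'bar', 'baz']))

A text-protocol multi-get (get key1 key2 ...) covers the read side of this, but the quiet variants (getq/setq/deleteq) let you batch mixed operations the same way.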