Nginx + Memcached and large JSON items

Nadav Har Tzvi

Dec 3, 2012, 4:43:39 AM
to memc...@googlegroups.com
Hello there,

Let me just start this topic by stating that I do know of the 1MB item size limitation in memcached and the reasons why it is so.

However, I am faced with a dilemma. As part of a web service, I have to return a fairly large JSON object that includes base64-encoded images (hence the large size).
The average JSON object size should be somewhere between 1.2MB and 2MB max.
To speed the whole thing up, I decided to cache those items (the server has more than enough memory) and serve them straight from Nginx, to reduce the load on the service and provide quicker responses.
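
Roughly what I have in mind on the Nginx side (just a sketch - the URI layout and the backend address are made up):

location /api/objects/ {
    # Key the cache on the request URI; the service stores the JSON
    # under the same key.
    set $memcached_key $uri;
    memcached_pass 127.0.0.1:11211;
    default_type application/json;
    # On a cache miss (or memcached trouble), fall back to the service.
    error_page 404 502 504 = @service;
}

location @service {
    proxy_pass http://127.0.0.1:8080;
}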

So my question is this: should I increase the memcached item size limit, or is there another way around this problem? Searching Google didn't turn up anything useful - maybe you have an idea of how to deal with this?

Thanks.

Yiftach Shoolman

Dec 3, 2012, 6:51:18 AM
to memc...@googlegroups.com
AFAIK, since version 1.4.14, the max size of a Memcached object is 500MB.
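
The limit is raised at memcached startup with the -I flag (the default max item size is 1MB). For ~2MB JSON objects, something like this should do (the values are only illustrative):

$ memcached -m 1024 -I 4m    # 1GB of cache memory, 4MB max item size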

See more details here:



--

Yiftach Shoolman
+972-54-7634621

Nadav Har Tzvi

Dec 3, 2012, 7:53:40 AM
to memc...@googlegroups.com
Oh! That's great. I also found a repo that has a 1.4.15 package available for Ubuntu 12.04 (since the regular 12.04 repos don't have that version yet).
If that is of any use:

If you are around here, Nathan, thanks :)

I am going to try bombing it with data and see how it rolls.
Thanks, Yiftach! You saved my day.

smallfish

Dec 3, 2012, 8:08:53 AM
to memc...@googlegroups.com
Great! I just found that the default item size is 64MB.
--

Aliaksandr Valialkin

Dec 3, 2012, 11:44:41 AM
to memc...@googlegroups.com
Hello, Nadav,

Try go-memcached - a fast memcached server written in Go. It can cache objects of up to 2GB in size, and it has no 250-byte limit on key sizes.

Currently it supports the following memcache commands: get, gets, set, add, cas, delete, flush_all. It also has the following features missing in the original memcached:
  * The cache size may exceed the available RAM by multiple orders of magnitude.
  * Cached objects may survive server crashes and restarts if the cache is backed by files.
  * It can shard objects into multiple backing files located on distinct physical storage devices (HDDs or SSDs). Such sharding may linearly increase qps for I/O-bound workloads where hot objects don't fit in RAM.
  * It supports two useful commands (extensions to the memcache protocol):
      * dogpile effect-aware get (getde). Clients with getde support may effectively combat negative consequences of the dogpile effect, such as periodic spikes in resource usage.
      * conditional get (cget). Clients with cget support may save network bandwidth and decrease latency between memcache servers and clients by caching objects in a local in-process cache. This may be especially useful when dealing with large objects.
     Currently only a single memcache client takes advantage of these commands - CachingClient for Go.

According to my performance tests on Ubuntu 12.04 x64, go-memcached's speed is comparable to the original memcached.

go-memcached can be built from source code (see the 'how to build and run it' section for details) or downloaded from https://github.com/downloads/valyala/ybc/go-memcached-1.tar.bz2 . The archive contains two programs - a memcache server (go-memcached) and a benchmark tool for memcache servers (go-memcached-bench). Both programs are configured with command-line flags; run them with --help to see the available options:

$ ./go-memcached --help
Usage of ./go-memcached:
  -cacheFilesPath="": Path to cache file. Leave empty for anonymous non-persistent cache.
Enumerate multiple files delimited by comma for creating a cluster of caches.
This can increase performance only if frequently accessed items don't fit RAM
and each cache file is located on a distinct physical storage.
  -cacheSize=100: Total cache capacity in Megabytes
  -deHashtableSize=16: Dogpile effect hashtable size
  -goMaxProcs=4: Maximum number of simultaneous Go threads
  -hotDataSize=0: Hot data size in bytes. 0 disables hot data optimization
  -hotItemsCount=0: The number of hot items. 0 disables hot items optimization
  -listenAddr=":11211": TCP address the server will listen to
  -maxItemsCount=1000000: Maximum number of items the server can cache
  -osReadBufferSize=229376: Buffer size in bytes for incoming requests in OS
  -osWriteBufferSize=229376: Buffer size in bytes for outgoing responses in OS
  -readBufferSize=4096: Buffer size in bytes for incoming requests
  -syncInterval=10s: Interval for data syncing. 0 disables data syncing
  -writeBufferSize=4096: Buffer size in bytes for outgoing responses
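
For example (illustrative paths and sizes), a persistent cache sharded over two disks could be started like this:

$ ./go-memcached -cacheFilesPath=/disk1/cache,/disk2/cache -cacheSize=4096 -listenAddr=:11211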

$ ./go-memcached-bench --help
Usage of ./go-memcached-bench:
  -connectionsCount=4: The number of TCP connections to memcache server
  -goMaxProcs=4: The maximum number of simultaneous worker threads in go
  -key="key": The key to query in memcache
  -maxPendingRequestsCount=1024: Maximum number of pending requests
  -osReadBufferSize=229376: The size of read buffer in bytes in OS
  -osWriteBufferSize=229376: The size of write buffer in bytes in OS
  -readBufferSize=4096: The size of read buffer in bytes
  -requestsCount=1000000: The number of requests to send to memcache
  -serverAddrs=":11211": Comma-delimited addresses of memcache servers to test
  -value="value": Value to store in memcache
  -workerMode="GetMiss": Worker mode. May be 'GetMiss', 'GetHit', 'Set', 'GetSetRand'
  -workersCount=512: The number of workers to send requests to memcache
  -writeBufferSize=4096: The size of write buffer in bytes

Dustin Sallings

Dec 3, 2012, 1:33:22 PM
to memc...@googlegroups.com
Aliaksandr Valialkin <val...@gmail.com> writes:

> Try go-memcached - fast memcached server written in Go. It can cache
> objects with up to 2Gb sizes. It also has no 250 byte limit on key
> sizes.

Your description sounds like you've written something very much unlike
memcached.

> According to my performance tests on Ubuntu 12.04 x64, go-memcached's
> speed is comparable to the original memcached.

Can you publish anything in more detail? Calling it "fast" with the
feature list you have seems quite misleading. It can't be both.

There are really good reasons to avoid caching items over 1MB or so
(depending on your network topology). It stops being a cache at some
point and becomes a file server with entirely different semantics. You
no longer get to measure object retrieval latency in microseconds, for
example.

--
dustin

Aliaksandr Valialkin

Dec 12, 2012, 12:06:13 PM
to memc...@googlegroups.com
Hi, Dustin,

On Mon, Dec 3, 2012 at 8:33 PM, Dustin Sallings <dsal...@gmail.com> wrote:
> Aliaksandr Valialkin <val...@gmail.com> writes:
>
> > Try go-memcached - fast memcached server written in Go. It can cache
> > objects with up to 2Gb sizes. It also has no 250 byte limit on key
> > sizes.
>
> Your description sounds like you've written something very much unlike
> memcached.

Yes - go-memcached is just a sample application written on top of YBC ( https://github.com/valyala/ybc ), a library implementing a fast in-process blob cache with persistence support. Initially I started working on a caching HTTP proxy for big ISPs on top of YBC. Unlike squid ( http://www.squid-cache.org/ ), this proxy should deal well with multi-TB caches containing big objects such as videos. But then I temporarily switched to the go-memcached implementation, since it covers more of the YBC API than the caching HTTP proxy does. So go-memcached automatically inherited YBC's features:
* support for large objects;
* support for persistence;
* support for cache sizes bigger than available RAM.


> According to my performance tests on Ubuntu 12.04 x64, go-memcached's
> speed is comparable to the original memcached.

> Can you publish anything in more detail? Calling it "fast" with the
> feature list you have seems quite misleading. It can't be both.

Here are more details on these perftests, with charts - https://github.com/valyala/ybc/wiki/Creating-scalable-memcache-client-in-Go . The charts show that go-memcached becomes faster than the original memcached in all tests with more than 32 concurrent workers. I suspect the reason is the 'smart' flushing of write buffers - go-memcached flushes them only if there are no incoming requests on the given TCP connection.
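
The idea in Go looks roughly like this (just a sketch, not the actual go-memcached code):

package main

import "bufio"

// writeResponses batches responses into a bufio.Writer and flushes only
// when no further response is immediately pending on this connection.
func writeResponses(w *bufio.Writer, responses <-chan []byte) {
	for resp := range responses {
		w.Write(resp)
	drain:
		for {
			select {
			case next, ok := <-responses:
				if !ok {
					break drain
				}
				w.Write(next)
			default:
				break drain
			}
		}
		// Nothing else is queued right now - flush so the client
		// isn't kept waiting for buffered data.
		w.Flush()
	}
	w.Flush() // final flush when the connection shuts down
}

func main() {} // empty main just so the sketch compiles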
These perftests also compare two memcache client implementations:
 * https://github.com/bradfitz/gomemcache - the 'traditional' client, which uses big connection pools and no request pipelining (basic usage sketched below).
 * https://github.com/valyala/ybc/tree/master/libs/go/memcache - the 'new' client, which uses small connection pools and request pipelining.
The conclusion is that the 'new' client scales much better with a big number of concurrent workers.
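
For reference, basic usage of the 'traditional' client looks like this (standard gomemcache API; the server address is illustrative):

package main

import (
	"fmt"

	"github.com/bradfitz/gomemcache/memcache"
)

func main() {
	// One client is shared by all goroutines. Each request checks a
	// connection out of the pool, so N concurrent requests need N
	// connections - there is no pipelining on a single connection.
	mc := memcache.New("127.0.0.1:11211")
	if err := mc.Set(&memcache.Item{Key: "key", Value: []byte("value")}); err != nil {
		fmt.Println("set error:", err)
		return
	}
	it, err := mc.Get("key")
	if err != nil {
		fmt.Println("get error:", err)
		return
	}
	fmt.Printf("%s = %s\n", it.Key, it.Value)
}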

> There are really good reasons to avoid caching items over 1MB or so
> (depending on your network topology). It stops being a cache at some
> point and becomes a file server with entirely different semantics. You
> no longer get to measure object retrieval latency in microseconds, for
> example.

I agree - memcached isn't well suited for caching large objects - we already discussed this on golang-nuts. But, as you already know from that discussion, there are CachingClient ( http://godoc.org/github.com/valyala/ybc/libs/go/memcache#CachingClient ) and a bare in-process out-of-GC cache ( http://godoc.org/github.com/valyala/ybc/bindings/go/ybc ) for such cases - these beasts may reduce latency to nanoseconds if properly used.

--
Best Regards,

Aliaksandr

Dustin Sallings

Dec 13, 2012, 3:11:00 AM
to memc...@googlegroups.com
Aliaksandr Valialkin <val...@gmail.com> writes:

>   Can you publish anything in more detail?  Calling it "fast" with
> the
> feature list you have seems quite misleading.  It can't be both.
>
> Here are more details on these perftests with charts
> - https://github.com/valyala/ybc/wiki/Creating-scalable-memcache-client-
> in-Go . These charts show that go-memcached becomes faster than the
> original memcached in all tests with more than 32 concurrent workers.

This is interesting. The numbers seem to be a little off from what we'd
expect given previous tests, but it looks like you've given enough
information here for people to understand it. dormando's got tests
that drive the server quite a bit harder, but I'm not sure how it scales
across different hardware.

A couple things that might be interesting to also look at would be the
latency differences as well as how the difference is with my client.
The model of execution for high throughput is a bit different, though.
It does a reasonable job of keeping latency low as well.

Since my client is binary only, I fully pipeline the client and
separate my reads and writes entirely. On a mostly
set/add/incr/decr/delete/etc. workload, I almost never have any
responses to read from the socket, which tends to make stuff pretty
quick. That said, the last person who wanted to do a lot with my client
made some changes to it that I haven't quite reviewed yet. You seem to have
some good ideas in there as well.

>  http://godoc.org/github.com/valyala/ybc/bindings/go/ybc ) for such
> cases - these beasts may reduce latency to nanoseconds if properly
> used.

Yep, though I still think sending those requests over the network is
unnecessary even in those cases. :)

--
dustin

dormando

Dec 13, 2012, 3:22:35 AM
to memc...@googlegroups.com
> > Here are more details on these perftests with charts
> > - https://github.com/valyala/ybc/wiki/Creating-scalable-memcache-client-
> > in-Go . These charts show that go-memcached becomes faster than the
> > original memcached in all tests with more than 32 concurrent workers.
>
> This is interesting. The numbers seem to be a little off from what we'd
> expect given previous tests, but it looks like you've given enough
> information here for people to understand it. dormando's got tests
> that drive the server quite a bit harder, but I'm not sure how it scales
> across different hardware.

Older versions can do a few million fetches/sec; the newest version was doing
11 million on some decent hardware and had much better thread scalability.
See the list archives and mc-crusher on my github page. Your numbers are
pretty good for a Go thing though? Maybe mc-crusher can push it harder,
too.

Aliaksandr Valialkin

Dec 14, 2012, 4:49:39 AM
to memc...@googlegroups.com
> Older versions can do a few million fetches/sec; the newest version was doing
> 11 million on some decent hardware and had much better thread scalability.
> See the list archives and mc-crusher on my github page. Your numbers are
> pretty good for a Go thing though? Maybe mc-crusher can push it harder,
> too.

I wanted to compare mc-crusher with go-memcached-bench, but couldn't build mc-crusher on Ubuntu 12.04. The linker shows the following errors:

$ ./compile 
/tmp/ccSoTLEv.o: In function `new_connection':
/home/valyala/work/mc-crusher/./mc-crusher.c:552: undefined reference to `event_set'
/home/valyala/work/mc-crusher/./mc-crusher.c:553: undefined reference to `event_base_set'
/home/valyala/work/mc-crusher/./mc-crusher.c:554: undefined reference to `event_add'
/tmp/ccSoTLEv.o: In function `update_conn_event':
/home/valyala/work/mc-crusher/./mc-crusher.c:108: undefined reference to `event_del'
/home/valyala/work/mc-crusher/./mc-crusher.c:111: undefined reference to `event_set'
/home/valyala/work/mc-crusher/./mc-crusher.c:112: undefined reference to `event_base_set'
/home/valyala/work/mc-crusher/./mc-crusher.c:114: undefined reference to `event_add'
/tmp/ccSoTLEv.o: In function `main':
/home/valyala/work/mc-crusher/./mc-crusher.c:863: undefined reference to `event_init'
/home/valyala/work/mc-crusher/./mc-crusher.c:875: undefined reference to `event_base_loop'
collect2: ld returned 1 exit status

I tried building mc-crusher with both the libevent-dev (based on libevent 2.0-5) and libevent1-dev (based on libevent 1.4-2) packages, without success.
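
My guess is a link-order problem: Ubuntu's gcc passes --as-needed to the linker, so -levent has to come after the source files. Something like this would be the usual fix (untested against mc-crusher's actual compile script):

$ gcc -O2 -o mc-crusher mc-crusher.c -levent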

--
Best Regards,

Aliaksandr

Aliaksandr Valialkin

Dec 14, 2012, 5:59:04 AM
to memc...@googlegroups.com
On Thu, Dec 13, 2012 at 10:11 AM, Dustin Sallings <dsal...@gmail.com> wrote:
> A couple things that might be interesting to also look at would be the
> latency differences as well as how the difference is with my client.
> The model of execution for high throughput is a bit different, though.
> It does a reasonable job of keeping latency low as well.

I added response time histograms to go-memcached-bench. Here are the results:

original memcached (running on port 11211), GetSetRand worker mode:

$ ./go-memcached-bench -maxResponseTime=10ms -workerMode=GetSetRand -serverAddrs=localhost:11211
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[10ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11211]
valueSize=[100]
workerMode=[GetSetRand]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 6.505 seconds, 153735 qps
======
Response time histogram
     0 -   1ms:    8.078% ####
   1ms -   2ms:   27.502% ################
   2ms -   3ms:   20.569% ############
   3ms -   4ms:   13.805% ########
   4ms -   5ms:   10.310% ######
   5ms -   6ms:    7.031% ####
   6ms -   7ms:    4.992% ##
   7ms -   8ms:    3.720% ##
   8ms -   9ms:    2.343% #
   9ms -1h0m0s:    1.650% 

go-memcached (running on port 11212), GetSetRand worker mode:

$ ./go-memcached-bench -maxResponseTime=10ms -workerMode=GetSetRand -serverAddrs=localhost:11212
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[10ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11212]
valueSize=[100]
workerMode=[GetSetRand]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 6.323 seconds, 158144 qps
======
Response time histogram
     0 -   1ms:    7.183% ####
   1ms -   2ms:   28.385% #################
   2ms -   3ms:   23.208% #############
   3ms -   4ms:   14.602% ########
   4ms -   5ms:    8.901% #####
   5ms -   6ms:    6.471% ###
   6ms -   7ms:    5.205% ###
   7ms -   8ms:    3.330% #
   8ms -   9ms:    1.620% 
   9ms -1h0m0s:    1.096% 

original memcached, Set worker mode:
$ ./go-memcached-bench -maxResponseTime=10ms -workerMode=Set -serverAddrs=localhost:11211
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[10ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11211]
valueSize=[100]
workerMode=[Set]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 4.751 seconds, 210478 qps
======
Response time histogram
     0 -   1ms:    4.865% ##
   1ms -   2ms:   41.690% #########################
   2ms -   3ms:   34.394% ####################
   3ms -   4ms:   10.457% ######
   4ms -   5ms:    3.631% ##
   5ms -   6ms:    1.628% 
   6ms -   7ms:    0.800% 
   7ms -   8ms:    0.541% 
   8ms -   9ms:    0.779% 
   9ms -1h0m0s:    1.215% 

go-memcached, Set worker mode:
$ ./go-memcached-bench -maxResponseTime=10ms -workerMode=Set -serverAddrs=localhost:11212
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[10ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11212]
valueSize=[100]
workerMode=[Set]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 3.653 seconds, 273750 qps
======
Response time histogram
     0 -   1ms:   14.373% ########
   1ms -   2ms:   52.608% ###############################
   2ms -   3ms:   24.008% ##############
   3ms -   4ms:    5.868% ###
   4ms -   5ms:    1.573% 
   5ms -   6ms:    0.380% 
   6ms -   7ms:    0.090% 
   7ms -   8ms:    0.198% 
   8ms -   9ms:    0.526% 
   9ms -1h0m0s:    0.376% 

original memcached, GetHit worker mode:
$ ./go-memcached-bench -maxResponseTime=10ms -workerMode=GetHit -serverAddrs=localhost:11211
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[10ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11211]
valueSize=[100]
workerMode=[GetHit]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 5.548 seconds, 180237 qps
======
Response time histogram
     0 -   1ms:   18.723% ###########
   1ms -   2ms:   28.316% ################
   2ms -   3ms:   16.917% ##########
   3ms -   4ms:   10.734% ######
   4ms -   5ms:    8.590% #####
   5ms -   6ms:    6.320% ###
   6ms -   7ms:    4.390% ##
   7ms -   8ms:    3.401% ##
   8ms -   9ms:    1.671% #
   9ms -1h0m0s:    0.938% 

go-memcached, GetHit worker mode:
$ ./go-memcached-bench -maxResponseTime=10ms -workerMode=GetHit -serverAddrs=localhost:11212
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[10ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11212]
valueSize=[100]
workerMode=[GetHit]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 5.170 seconds, 193434 qps
======
Response time histogram
     0 -   1ms:   15.962% #########
   1ms -   2ms:   32.185% ###################
   2ms -   3ms:   20.092% ############
   3ms -   4ms:   11.380% ######
   4ms -   5ms:    7.232% ####
   5ms -   6ms:    5.310% ###
   6ms -   7ms:    4.382% ##
   7ms -   8ms:    2.443% #
   8ms -   9ms:    0.788% 
   9ms -1h0m0s:    0.224% 

original memcached, GetMiss worker mode:
$ ./go-memcached-bench -maxResponseTime=3ms -workerMode=GetMiss -serverAddrs=localhost:11211
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[3ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11211]
valueSize=[100]
workerMode=[GetMiss]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 2.184 seconds, 457862 qps
======
Response time histogram
     0 - 300us:    2.086% #
 300us - 600us:    9.957% #####
 600us - 900us:   29.178% #################
 900us - 1.2ms:   29.746% #################
 1.2ms - 1.5ms:   13.432% ########
 1.5ms - 1.8ms:    6.574% ###
 1.8ms - 2.1ms:    3.510% ##
 2.1ms - 2.4ms:    1.781% #
 2.4ms - 2.7ms:    1.140% 
 2.7ms -1h0m0s:    2.594% #

go-memcached, GetMiss worker mode:
$ ./go-memcached-bench -maxResponseTime=3ms -workerMode=GetMiss -serverAddrs=localhost:11212
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[3ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11212]
valueSize=[100]
workerMode=[GetMiss]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 1.661 seconds, 601908 qps
======
Response time histogram
     0 - 300us:    1.282% 
 300us - 600us:   19.876% ###########
 600us - 900us:   45.056% ###########################
 900us - 1.2ms:   23.344% ##############
 1.2ms - 1.5ms:    6.728% ####
 1.5ms - 1.8ms:    1.735% #
 1.8ms - 2.1ms:    0.782% 
 2.1ms - 2.4ms:    0.543% 
 2.4ms - 2.7ms:    0.253% 
 2.7ms -1h0m0s:    0.402% 


The traditional memcache client ( https://github.com/bradfitz/gomemcache ) has a bad response time distribution starting from a certain number of workers:

8 workers, histogram is ok:
$ ./go-memcached-bench -clientType=original -requestsCount=100000 -workersCount=8 -maxResponseTime=5ms -serverAddrs=localhost:11211
Config:
clientType=[original]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[5ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[100000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11211]
valueSize=[100]
workerMode=[GetMiss]
workersCount=[8]
writeBufferSize=[4096]

Preparing...done
starting...done! 1.864 seconds, 53659 qps
======
Response time histogram
     0 - 500us:   96.205% #########################################################
 500us -   1ms:    3.303% #
   1ms - 1.5ms:    0.390% 
 1.5ms -   2ms:    0.090% 
   2ms - 2.5ms:    0.009% 
 2.5ms -   3ms:    0.002% 
   3ms - 3.5ms:    0.001% 
 3.5ms -   4ms:    0.000% 
   4ms - 4.5ms:    0.000% 
 4.5ms -1h0m0s:    0.000% 

32 workers, still good histogram:
$ ./go-memcached-bench -clientType=original -requestsCount=100000 -workersCount=32 -maxResponseTime=5ms -serverAddrs=localhost:11211
Config:
clientType=[original]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[5ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[100000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11211]
valueSize=[100]
workerMode=[GetMiss]
workersCount=[32]
writeBufferSize=[4096]

Preparing...done
starting...done! 2.145 seconds, 46627 qps
======
Response time histogram
     0 - 500us:   50.713% ##############################
 500us -   1ms:   25.730% ###############
   1ms - 1.5ms:   10.910% ######
 1.5ms -   2ms:    6.076% ###
   2ms - 2.5ms:    3.407% ##
 2.5ms -   3ms:    1.752% #
   3ms - 3.5ms:    0.798% 
 3.5ms -   4ms:    0.379% 
   4ms - 4.5ms:    0.139% 
 4.5ms -1h0m0s:    0.096% 

64 workers, not very good response times:
$ ./go-memcached-bench -clientType=original -requestsCount=100000 -workersCount=64 -maxResponseTime=5ms -serverAddrs=localhost:11211
Config:
clientType=[original]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[5ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[100000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11211]
valueSize=[100]
workerMode=[GetMiss]
workersCount=[64]
writeBufferSize=[4096]

Preparing...done
starting...done! 2.234 seconds, 44755 qps
======
Response time histogram
     0 - 500us:   46.608% ###########################
 500us -   1ms:    1.363% 
   1ms - 1.5ms:   12.748% #######
 1.5ms -   2ms:   13.391% ########
   2ms - 2.5ms:    4.616% ##
 2.5ms -   3ms:    6.118% ###
   3ms - 3.5ms:    4.462% ##
 3.5ms -   4ms:    2.845% #
   4ms - 4.5ms:    2.467% #
 4.5ms -1h0m0s:    5.382% ###

512 workers, awful response time distribution:
$ ./go-memcached-bench -clientType=original -requestsCount=100000 -workersCount=512 -maxResponseTime=50ms -serverAddrs=localhost:11211
Config:
clientType=[original]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
keySize=[16]
maxPendingRequestsCount=[1024]
maxResponseTime=[50ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[100000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11211]
valueSize=[100]
workerMode=[GetMiss]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done! 2.342 seconds, 42698 qps
======
Response time histogram
     0 -   5ms:   47.483% ############################
   5ms -  10ms:    0.107% 
  10ms -  15ms:   22.396% #############
  15ms -  20ms:   10.673% ######
  20ms -  25ms:    0.468% 
  25ms -  30ms:    8.028% ####
  30ms -  35ms:    4.378% ##
  35ms -  40ms:    0.466% 
  40ms -  45ms:    2.377% #
  45ms -1h0m0s:    3.624% ##

-- 
Best Regards,

Aliaksandr

Aliaksandr Valialkin

Dec 14, 2012, 8:56:37 AM
to memc...@googlegroups.com
go-memcached-bench now measures the maximum response time in addition to the response time distribution. Here are the maximum response times with workersCount=512:

workerMode=GetSetRand:
clientType=new, memcached: 223ms
clientType=new, go-memcached: 15ms
clientType=original, memcached: 245ms
clientType=original, go-memcached: 278ms

workerMode=GetHit:
clientType=new, memcached: 215ms
clientType=new, go-memcached: 12ms
clientType=original, memcached: 227ms
clientType=original, go-memcached: 289ms

workerMode=Set
clientType=new, memcached: 15ms
clientType=new, go-memcached: 15ms
clientType=original, memcached: 153ms
clientType=original, go-memcached: 184ms

workerMode=GetMiss
clientType=new, memcached: 10ms
clientType=new, go-memcached: 10ms
clientType=original, memcached: 129ms
clientType=original, go-memcached: 150ms

As you can see, go-memcached demonstrates much smaller maximum response times than memcached in the tests with the new client.


--
Best Regards,

Aliaksandr