Is memcached unable to handle large number of connections?


Ryan Chan

unread,
Jul 26, 2013, 2:01:20 AM7/26/13
to memc...@googlegroups.com
I've been using memcached for years without any problems, but I recently found a memcached proxy called "twemproxy" whose description says:

 - Maintains persistent server connections.
 - Keeps connection count on the backend caching servers low.

What is wrong with memcached on the above two points?
Does anyone have experience to share?

Thanks.

Matt Ingenthron

unread,
Jul 26, 2013, 5:12:27 PM7/26/13
to memc...@googlegroups.com

It is in response to a relatively rare, good problem to have.  Imagine you have so many processes, each with a connection to so many servers, that the total connection count is in the tens of thousands.  Then a proxy/mux makes sense over persistent connections.

Matt


dormando

unread,
Jul 27, 2013, 10:07:13 PM7/27/13
to memc...@googlegroups.com
It's fine. It does use some amount of memory per connection. If you have a
huge number of connections, you may want to spread the memory usage around
a bit (by putting proxies on client hosts, or similar).

There's nothing inherent that would stop working at a large number of
connections.

alex...@hti.com.br

unread,
Jul 27, 2013, 10:07:25 PM7/27/13
to memc...@googlegroups.com
This account is no longer in use. The message wasn't delivered. Please resend the message to alexandr...@hti.com.br


Rohit Karlupia

unread,
Jul 28, 2013, 5:17:53 AM7/28/13
to memc...@googlegroups.com
It might have something to do with MAX_SENDBUF_SIZE in memcached.h.

Memcached tries to set the send buffer to about 256 MB per socket, and if that succeeds it obviously limits the number of concurrent connections you can have. On the other hand, it helps when processing large multi-get requests. I am sure that if you decrease this value, memcached would not have much problem handling a large number of concurrent connections, except that multi-gets would become slightly slower.
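Roughly, the probe being discussed looks like this. This is a Python sketch of the idea, not memcached's actual C code; the constant mirrors MAX_SENDBUF_SIZE, and the binary search mirrors what memcached does when maximizing SO_SNDBUF. Note that on Linux setsockopt(SO_SNDBUF) does not fail for oversized requests, it silently clamps, which is why this is not a 256 MB allocation:

```python
import socket

MAX_SENDBUF_SIZE = 256 * 1024 * 1024  # the memcached.h constant under discussion

def maximize_sndbuf(sock):
    """Binary-search the largest SO_SNDBUF the kernel will accept.
    A sketch of memcached's approach, for illustration only."""
    old = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    lo, hi, last_good = old, MAX_SENDBUF_SIZE, old
    while lo <= hi:
        mid = (lo + hi) // 2
        try:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, mid)
            last_good = mid
            lo = mid + 1
        except OSError:
            hi = mid - 1
    return last_good

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
granted = maximize_sndbuf(s)
# On Linux the kernel silently caps the request at net.core.wmem_max,
# so the value getsockopt() reports is what the kernel actually granted,
# not committed memory, and not 256 MB.
print("kernel-visible sndbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()
```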

thanks,
rohitk




dormando

unread,
Jul 28, 2013, 1:34:57 PM7/28/13
to memc...@googlegroups.com
No.

That does not pre-allocate the full buffer. Otherwise memcached servers
wouldn't be able to hold more than 100-200 connections open at once. I've
seen servers run 80,000+ just fine.

Don't guess.

Rohit Karlupia

unread,
Jul 28, 2013, 2:41:15 PM7/28/13
to memc...@googlegroups.com
Yes.

The OS doesn't pre-allocate the full buffer.

Yes.
Memcached should easily handle a million concurrent idle connections (given enough memory).

Except: if the user has only allocated 512 MB/1 GB of RAM for TCP, it is "possible" that at runtime only four connections have eaten up all the memory, which could lead to connection refused or other errors.

Let's revisit the question.
Is memcached unable to handle a large number of concurrent connections?
Usually no. Under some circumstances, yes.

And yes, don't guess. Just try it. Open <server_tcp_mem/256MB> connections with large multi-get requests (response > 256MB), don't read them on the client side, and then try opening more connections.
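The "just try it" part, minus the multi-gets, can be sketched as below. A throwaway local listener stands in for memcached here so the script assumes no running server; bump N toward your fd limit (ulimit -n) for a real test:

```python
import socket
import threading

N = 200  # number of idle connections to hold open; keep under ulimit -n

# Throwaway local listener standing in for memcached.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # any free port
server.listen(1024)
port = server.getsockname()[1]

accepted = []
def accept_loop():
    for _ in range(N):
        conn, _ = server.accept()
        accepted.append(conn)

t = threading.Thread(target=accept_loop, daemon=True)
t.start()

# Open N connections and leave them idle.
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(N)]
t.join()
print("idle connections held open:", len(clients))

for c in clients + accepted:
    c.close()
server.close()
```

Each idle connection costs a file descriptor plus a few kilobytes of kernel and userspace state, which is why the count scales so far before anything breaks.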

thanks!
rohitk

Roberto Spadim

unread,
Jul 28, 2013, 2:51:32 PM7/28/13
to memc...@googlegroups.com
I think it's a normal problem in any TCP/IP Unix-socket daemon... don't have RAM? Add swap... it's a hardware limit, not a software limit...
Maybe instead of 'use swap' we could add a 'beyond x MB, use disk instead of RAM' option, but then you get into disk latency problems, etc... it's like any database writing temporary query results in memory and, after a limit, putting the result on disk instead of in memory

dormando

unread,
Jul 28, 2013, 2:57:00 PM7/28/13
to memc...@googlegroups.com
whaaaaaaaaaaaaaaaaaat the hell are you talking abouuuuuttttt???

It's not a problem! The twemcache people are inept and arrogant at
communicating what the fuck their thing does!

It's a *small* amount of memory. Don't go trying to solve problems because
of a fucking bulletpoint line item in a piece of software. Solve problems
you actually see and actually have. Like if you run out of fucking memory
due to having 100,000 active sockets on your host. If you don't have that,
don't worry about it. You'll be fine.

dormando

unread,
Jul 28, 2013, 3:00:40 PM7/28/13
to memc...@googlegroups.com
> Yes.
> OS doesn't pre-allocate the full buffer.
>
> Yes.
> Memcached should easily handle million concurrent idle connections (given enough memory).
>
> Except, if user has only allocated 512MB/1GB RAM for TCP, it is "possible" at runtime only four connections have eaten up all the memory, which
> could lead to connection refused or some other errors.
>
> Lets revisit the question.
> Is memcached unable to handle large no of concurrent connection?
> Usually no. Under some circumstances. Yes.
>
> And yes, don't guess. Just try it. Open <server_tcp_mem/256MB> connections with large multi-get request (response > 256MB) and don't read them at
> client side. And then try opening more connections.

Usually no?? Are you sure you mean usually it can't handle it? That is
insanely wrong.

Memcached runs on LANs almost all of the time. There are almost NO
buffers stuck in use because of the low latency. This isn't an
internet-facing tool, wherein you have to tune that more carefully and
leave a lot more free memory for TCP retransmits. Connections to memcached
use a handful of kilobytes.

So very few people are going to run into this problem; complaining about
it is nothing short of alarmist.

It's also never going to be 256MB: the actual memory used is limited (in
Linux) by the tcp_rmem and tcp_wmem sets of sysctls. Even when people
aggressively tune those, they set the maximums around 16 megabytes.
Usually it's much lower than that. SENDBUF can't use more than what's in
wmem.
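Those sysctls can be inspected directly. This is a small Linux-only sketch reading the standard procfs locations and comparing them with memcached's 256 MB constant; on other OSes the script just reports that they're absent:

```python
from pathlib import Path

MAX_SENDBUF_SIZE = 256 * 1024 * 1024  # memcached's compile-time ask

def read_sysctl(path):
    """Return the whitespace-split sysctl value, or None if the file is absent."""
    p = Path(path)
    return p.read_text().split() if p.exists() else None

wmem = read_sysctl("/proc/sys/net/ipv4/tcp_wmem")      # min, default, max (bytes)
wmem_max = read_sysctl("/proc/sys/net/core/wmem_max")  # ceiling for setsockopt(SO_SNDBUF)

if wmem and wmem_max:
    tcp_max = int(wmem[2])
    print("tcp_wmem max: %d bytes, wmem_max: %d bytes" % (tcp_max, int(wmem_max[0])))
    # On stock kernels both ceilings sit orders of magnitude below 256 MB,
    # which is why the SENDBUF request is effectively harmless.
    print("ceilings below MAX_SENDBUF_SIZE:",
          tcp_max < MAX_SENDBUF_SIZE and int(wmem_max[0]) < MAX_SENDBUF_SIZE)
else:
    print("not Linux (or procfs unavailable); sysctls not found")
```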

dormando

unread,
Jul 28, 2013, 3:11:38 PM7/28/13
to memc...@googlegroups.com


On Sun, 28 Jul 2013, dormando wrote:

> > Yes.
> > OS doesn't pre-allocate the full buffer.
> >
> > Yes.
> > Memcached should easily handle million concurrent idle connections (given enough memory).
> >
> > Except, if user has only allocated 512MB/1GB RAM for TCP, it is "possible" at runtime only four connections have eaten up all the memory, which
> > could lead to connection refused or some other errors.
> >
> > Lets revisit the question.
> > Is memcached unable to handle large no of concurrent connection?
> > Usually no. Under some circumstances. Yes.
> >
> > And yes, don't guess. Just try it. Open <server_tcp_mem/256MB> connections with large multi-get request (response > 256MB) and don't read them at
> > client side. And then try opening more connections.
>
> Usually no?? Are you sure you mean usually it can't handle it? That is
> insanely wrong.

"is memcached unable to handle" -> "usually no" -> so many negatives.
"memcached is usually able to handle a large number of connections" ->
true.

The rest of my e-mail is correct though. Unless you're on some weird OS
it's not going to use 256 megs of buffer, and unless you're on a very
slow link and never reading the data, it's not going to be an issue.

> Memcached runs on LAN's almost all of the time. There are almost NO
> buffers stuck in use because of the low latency. This isn't an internet
> facing tool, wherein you have to tune that more carefully and leave a lot
> more free memory for TCP retransmits. Connections to memcached use a
> handful of kilobytes.
>
> So very few people are going to run into this problem, complaining about
> it is nothing short of alarmist.
>
> It's also never going to be 256MB: The actual memory used is limited (in
> linux) by the tcp_rmem and tcp_wmem set of sysctl's. Even when people
> aggressively tune those, they set the maximums around 16 megabytes.
> Usually it's much lower than that. SENDBUF can't use more than what's in
> wmem.
>

Rohit Karlupia

unread,
Jul 28, 2013, 3:16:55 PM7/28/13
to memc...@googlegroups.com
No ;)
I said "Usually no" to the question "Is memcached UNABLE to handle large number of connections?".
It is the negative of a negative, and so actually a positive answer.

And yes, you are right about all the things you said about the context in which memcached is used. I didn't mean to say anything against memcached, and I think it is a wonderful piece of software solving real-world problems.

But I do think the 256MB limit is overkill. It might be useful for testing max throughput, but it can cause practical problems for users. A bug in one of the client machines can hold up memory on the server, causing problems for other clients. Thread death, busy CPU, memory pressure: anything that slows down a few of the clients can add memory pressure on memcached. The reason I mention it is this: I can think of this as the ONLY cause of memcached not being able to handle a large number of connections. I have read the code. Multiple times. Nothing else.

thanks!
rohitk








dormando

unread,
Jul 28, 2013, 3:21:41 PM7/28/13
to memc...@googlegroups.com
> No ;)
> I said "Usually No" to the question "Is memcached UNABLE to handle large number of connections?".
> It is negative of a negative and actually positive answer.
>
> And yes you are right about all the things you said about the context in which memcached is used. I didn't meant to say anything against memcached
> and I think it is a wonderful piece of software solving real world problems.
>
> But I do think 256MB limit is an overkill. It might be useful for testing max throughput, but it can cause practical problems to users. Bug in one
> the client machines can hold up memory in server, causing problems to other clients. Thread death, busy cpu, memory pressure, anything that slowes
> down few of the clients can add up memory pressure on memcached. The reason I mention is this: I can think of this as the ONLY cause of memcached
> not able to handle large number of connections. I have read the code. Multiple times. Nothing else.
>
> thanks!
> rohitk

I've said a few times now that the limit is actually much lower than what
SENDBUF would like you to allow (16 megabytes if tcp wmem is aggressively
tuned). I'm repeating again so people who read this thread understand :)

Roberto Spadim

unread,
Jul 28, 2013, 4:01:56 PM7/28/13
to memc...@googlegroups.com
I'm talking about this case:
a server with a total of 1 GB of RAM,
four clients reading a multiget, memory consumption near 1 GB, and no memory left for new TCP/IP connections.
In this case, add more RAM or swap space; there's no other solution.
If memory keeps being consumed, the (Linux) kernel will kill processes to reclaim memory, or follow whatever strategy it is written to use in this condition.

I'm not talking about a TCP/IP problem; I'm talking about an out-of-memory situation where new TCP/IP connections are rejected because the kernel can't allocate more memory.

Roberto Spadim

unread,
Jul 28, 2013, 4:06:11 PM7/28/13
to memc...@googlegroups.com
About the thread's question, "Is memcached unable to handle large number of connections?":
I'd respond that it can, without problems.

The difference with twemcache is how it uses memory, chooses what will be cached, and other command-line options (see the README.md on GitHub).
It's a fork of memcached with other objectives; it doesn't solve a problem of memcached, it solves the problem of a specific workload.

Roberto Spadim

unread,
Jul 28, 2013, 4:13:33 PM7/28/13
to memc...@googlegroups.com
About disk use: it's an idea (maybe some day a feature request or a fork) about 'storage engines'.
Instead of only memory, we could use memory and disk, with disk for key-cache values larger than x MB. But that's not what memcached does today; it's only a cache server, not a storage server, which is why it doesn't use disk.
If you have low memory, don't waste it on cache you can't afford.
