|Volume memcached can handle||Ed Hickey||9/21/12 7:48 AM|
The write up on the main page describes "small" chunks of data.
Does anyone have an idea of how much data can be practically stored in memcached? We are looking at upwards of several million rows of small data (store/product/price).
Are we better off with an in-memory database instead?
|Re: Volume memcached can handle||brianlmoon||9/21/12 8:31 AM|
> Does anyone have an idea of how much data can be practically stored in
You should not "store" anything in memcached. From Wikipedia:
"A cache is a component that improves performance by transparently
storing data such that future requests for that data can be served
faster. The data that is stored within a cache might be values that have
been computed earlier or duplicates of original values that are stored
elsewhere. If requested data is contained in the cache (cache hit), this
request can be served by simply reading the cache, which is comparably
faster. Otherwise (cache miss), the data has to be recomputed or fetched
from its original storage location, which is comparably slower."
The key terms in there are "duplicates of original values that are
stored elsewhere" and "(cache miss), the data has to be recomputed or
fetched from its original storage location".
As for how much data can be cached, as much as you have free memory for.
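The hit/miss flow from that Wikipedia quote is usually called the cache-aside pattern. A minimal sketch, using a plain dict in place of a memcached client and a hypothetical `fetch_from_db` standing in for the original storage location:

```python
cache = {}  # stands in for a memcached client; get/set work the same way

def fetch_from_db(key):
    # Hypothetical stand-in for the "original storage location",
    # e.g. a SQL query against the product/price table.
    return "row-for-%s" % key

def get(key):
    value = cache.get(key)
    if value is None:                 # cache miss
        value = fetch_from_db(key)    # fetch/recompute from origin
        cache[key] = value            # keep a duplicate in the cache
    return value                      # subsequent calls hit the cache

print(get("product:42"))
print(get("product:42"))  # second call is served from the cache
```

The point of the pattern is that the cache holds only duplicates: if an entry is evicted, the next `get` simply falls through to the origin again.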
|Re: Volume memcached can handle||Ed Hickey||9/21/12 9:42 AM|
Thanks very much. You will have to forgive the word "store".
|Re: Volume memcached can handle||Howard Chu||9/21/12 10:55 AM|
memcacheDB exists for people who want to persistently store things through the memcache protocol. An "in-memory database" would have the same limits as memcached - both are limited to the size of physical RAM. memcacheDB uses an actual disk-backed database, so it has no such size limit. memcacheDB with OpenLDAP's MDB backend uses a memory-mapped database - it is as fast as an in-memory database, but is not limited to the size of physical memory.
Some useful reading:
|Re: Volume memcached can handle||LesMikesell||9/21/12 11:05 AM|
On Fri, Sep 21, 2012 at 12:55 PM, Howard Chu <highl...@gmail.com> wrote:
But the main point of memcache is that it is designed to be spread
over multiple servers. So you are only limited by the number of
servers you want to throw into the pool.
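Spreading keys over the pool is done client-side: the client hashes each key to pick one server, so no server needs to hold the whole dataset. A minimal sketch of that mapping (the server names are made up; real clients do this internally, typically with consistent hashing rather than plain modulo):

```python
import hashlib

# Hypothetical pool of memcached servers.
servers = ["cache1:11211", "cache2:11211", "cache3:11211"]

def server_for(key):
    # Hash the key and map it onto one server in the pool.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]
```

Note that plain modulo remaps most keys whenever a server is added or removed; production clients use consistent hashing (e.g. ketama) so that only ~1/N of keys move.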
|Re: Volume memcached can handle||Aliaksandr Valialkin||12/14/12 8:38 AM|
Use memcached if all your data fits RAM. Otherwise two options exist:
* shard your data across a cluster of memcached instances, so each instance keeps the part of the data that fits the available RAM on its host.
* use go-memcached ( https://github.com/valyala/ybc/tree/master/apps/go/memcached ). It is optimized for very large caches exceeding available RAM size by multiple orders of magnitude.
And don't use a cache as primary storage. Keep in mind: a cache may evict arbitrary objects at any time, for arbitrary reasons.