That's correct. There is the overhead of the TCP connections. On a
local machine, something like BerkeleyDB or mmap'ed files will beat
memcached.
- Perrin
A file-based cache makes sense per node if the majority of each
node's cached data is not redundant with what other nodes cache.
In general, file-based makes sense if:
* memory is at a premium
* latency to other nodes is high
* shared access to specific keys is easily partitioned to nodes
* disk bandwidth dwarfs cache bandwidth
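When those conditions hold, the local file-based option can be as simple as the OS's dbm-style store. A minimal sketch in Python, using the stdlib `dbm` module as a stand-in for BerkeleyDB (the key and value here are illustrative, not from any real system):

```python
import dbm
import os
import tempfile

# Open (or create) a local file-backed key/value store.
# dbm is a simple stand-in for BerkeleyDB in this sketch;
# the OS page cache keeps hot pages of the file in RAM.
path = os.path.join(tempfile.mkdtemp(), "cache")
with dbm.open(path, "c") as db:
    db[b"user:42"] = b"serialized-profile-bytes"   # write through to disk
    value = db[b"user:42"]                         # read, likely from page cache

print(value.decode())
```

Every get/set is an in-process library call, so there is no socket round-trip at all.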
...
> Anybody care to comment? In a high concurrency situation, does
> memcached perform comparatively better? Are there any other factors we
> should be considering?
If you have a lot of writes, disk is going to bottleneck before memory/network.
Well, OK, if you want to go that far, you could also use mogilefs or
HDFS or many other not-really-files approaches. If you're using
tmpfs, I guess that shows memory is not short; you still have to
serialize the data anyway, so why not run a single local memcached node?
Or, use an in-process memory caching system. PHP, for example, has
several available. Not sure what platform you are using.
--
Brian Moon
Senior Web Engineer
------------------------------
When you care enough to spend the very least.
http://dealnews.com/
No need to get into tmpfs. The OS will already cache as much of the
files in RAM as it can, and things like BerkeleyDB manage their own
shared RAM cache.
Running a single memcached node locally is significantly slower than
BerkeleyDB or mmap'ed files. The communication overhead just kills it
compared to in-process calls with an efficient system. The advantage
of memcached is in sharing between machines.
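To make the in-process point concrete, here is a sketch of reading cached bytes out of an mmap'ed file, which involves no socket round-trip at all (the file contents and layout are illustrative; a real cache would keep an index mapping keys to offsets in the file):

```python
import mmap
import os
import tempfile

# Write some cached bytes to a file, then map it into memory and read.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"cached-value-bytes")

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    data = mm[:]   # slicing copies the bytes out of the mapping
    mm.close()

os.unlink(path)
print(data.decode())
```

A memcached get, by contrast, costs at least a syscall, a protocol round-trip, and a context switch even on localhost, which is where the in-process approach wins for a single machine.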
- Perrin