I think some of your assumptions may be incorrect. If you are using
the same file in every request, your OS is probably serving a cached
copy from memory; it won't hit the disk every time. While memcached
does store everything in memory, which is fast, it transmits data over
a socket, which is slow. You are really comparing using local memory
vs. using memory on a different machine.
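To make that comparison concrete, here is a rough Python sketch: it times repeated reads of an OS-cached file against round trips to a trivial localhost TCP server. The server is only a stand-in for memcached (it does not speak the memcached protocol); the point is just to show the cost of going through a socket versus reading locally cached data.

```python
import os
import socket
import tempfile
import threading
import time

# Write a small payload to a temp file; after the first read the OS
# keeps it in the page cache, so repeated reads never touch the disk.
payload = b"x" * 4096
fd, path = tempfile.mkstemp()
os.write(fd, payload)
os.close(fd)

def read_file():
    with open(path, "rb") as f:
        return f.read()

# Stand-in for memcached: a trivial localhost TCP server that returns
# the same payload. This is NOT the memcached protocol, just a way to
# measure the overhead of a socket round trip.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    while True:
        conn, _ = srv.accept()
        conn.recv(64)          # read the "request"
        conn.sendall(payload)  # send the "response"
        conn.close()

threading.Thread(target=serve, daemon=True).start()

def read_socket():
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(b"get key\r\n")
    data = b""
    while len(data) < len(payload):
        data += s.recv(65536)
    s.close()
    return data

read_file()  # warm the page cache
N = 200
t0 = time.perf_counter()
for _ in range(N):
    read_file()
file_t = time.perf_counter() - t0
t0 = time.perf_counter()
for _ in range(N):
    read_socket()
sock_t = time.perf_counter() - t0
print(f"cached file reads: {file_t:.4f}s, socket round trips: {sock_t:.4f}s")
```

On a typical machine the cached file reads finish well ahead of the socket round trips, which is the effect being described here.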
I use memcached to reduce hits to my database which is much slower
than memcached. Sometimes I also use it to store objects that are
expensive to create. This is just a trade-off between CPU time and
network access.
--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
That doesn't matter - you still go through the same client/server
motions to access it through a socket as if you have distributed storage.
> 2. I am not just reading a single file everytime. I have 3 data files
> that I access in the script (system may be caching files in memory)
> 3. Even if the files are not changing does it mean that disk cache is
> faster than memcached? that too an extent of disk cache being almost
> 50% faster than memcached.
Yes, if you have sufficient RAM, all recently accessed file data will be
cached at the OS level for fast repeated access.
> I am sorry, but I don't think your explanation addresses my concerns.
The part that didn't make sense was that you mentioned memcache having
many failures. Unless you have insufficient RAM, you should only fail
on the first access to new or expired data. Perhaps with memcached
running you don't have enough memory to hold your active data set in
either the memcache cache or the now-reduced filesystem buffers, and
you end up making them both thrash.
--
Les Mikesell
lesmi...@gmail.com
Brian.
--------
http://brian.moonspot.net/
Network is network. Localhost and local-LAN requests carry the same
overhead for requests as small as memcached's.
> 2. I am not just reading a single file everytime. I have 3 data files
> that I access in the script (system may be caching files in memory)
Yeah, try hundreds of files, being read hundreds of times per second.
> 3. Even if the files are not changing does it mean that disk cache is
> faster than memcached? that too an extent of disk cache being almost
> 50% faster than memcached.
Oh hell yes. This is the entire basis behind Varnish, the caching proxy
server. The kernel and/or the filesystem manages the file cache. You
don't get more low-level than that.
> I am sorry, but I don't think your explanation addresses my concerns.
If your file based approach works, why are you looking at memcached?
As for failures, I assume you mean failures reported by Apache Bench.
My guess is that you are not using threaded mode, or are not tuning
your thread count appropriately for memcached, and hitting the same 3
keys with, say, 1000 concurrent requests could cause some contention.
Brian.
Syed
PS: I know I'm late to the party; if someone has already pointed in
that direction, I apologize.
--
Best,
- Ali
> There is 8 GB RAM on the server, and on first access apache bench
> doesn't show any failed requests, but if I execute the program again,
> I see that number of failed request show up, this is something that I
> couldn't understand as why didn't the number of failed request show on
> the first run and then appears every subsequent run (Total request for
> each run : 10,000 and 200 cc).
Are these Apache errors or errors in memcache? If you are running
Apache Bench at a high concurrency, you may be running the server out
of sockets or some other resource.
> "Perhaps with memcached running you don't have enough memory to ...".
> this is not the case in any which ways, have checked the memory and
> had that been an issue even file IO would have more or less given me
> the same performance.
I think you are missing the point: if you had just enough room for
the data in the file buffers without memcache, and then allocated a
portion of that memory to memcache, neither one would have enough
space, and both would have to continuously reload the data as it is
evicted.
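That eviction effect can be illustrated with a toy LRU simulation (a sketch only, not real memcached or page-cache behavior): one cache sized for the whole working set only takes cold misses, while two half-size caches that each hold a copy of the same data evict on every access.

```python
from collections import OrderedDict

class LRU:
    """Minimal LRU cache to illustrate eviction behavior."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.misses = 0

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # hit: mark as recently used
            return self.data[key]
        self.misses += 1                # miss: caller must reload it
        self.data[key] = key            # pretend we reloaded the item
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
        return key

working_set = list(range(8))  # 8 items the app touches round-robin

# One cache big enough for the whole working set: only cold misses.
big = LRU(8)
for _ in range(10):
    for k in working_set:
        big.get(k)

# Split the same memory into two half-size caches, each holding a copy
# of the data (like file buffers plus memcached): every access evicts
# something that will be needed again, so both caches thrash.
fs, mc = LRU(4), LRU(4)
for _ in range(10):
    for k in working_set:
        fs.get(k)
        mc.get(k)

print(big.misses, fs.misses, mc.misses)  # 8 80 80
```

The single full-size cache misses only on the first pass; each half-size cache misses on every single access, which is the thrashing being described.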
--
Les Mikesell
lesmi...@gmail.com
It's not a question of whether or not memcached *could* be faster. The
question you should be asking when optimizing is "What is the
bottleneck in my application?" If it is the file-based cache, you need
to fix it. But if it's not, why focus on that?
As for your issues, if you only have one web server, memcached is not
the right tool here and you are wasting your time. Since you use PHP,
use APC or xcache if you want a memory-based cache for PHP on a single
web server. If you have more than one web server, you need to be
performing your tests using more than one web server.
Brian.