John,

Your response made me think that there's probably a difference between how you handle cache invalidation for updates versus inserts.

Is it fair to say that if you are receiving a heavy dose of inserts, there is no need to spam the cache service with invalidation requests, because the first read will miss and hit the server for the data anyway since it's new? The reason I say this is that at 200M inserts/day you don't really want or need to flood the cache service, right?

Your concern about the cache hit rate, I imagine, would apply more to writes that are updates rather than inserts?
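For what it's worth, a rough sketch of that write path (Python; db_save, the record shape, and the URL scheme are all hypothetical, just to show the insert-vs-update split). At 200M inserts/day, skipping invalidation on inserts avoids roughly 200,000,000 / 86,400 ≈ 2,300 purge requests per second:

    import requests

    VARNISH_HOST = "http://varnish.internal:6081"  # hypothetical cache address

    def save_record(record, is_new):
        # Persist first; db_save is an assumed persistence helper.
        db_save(record)

        if is_new:
            # Insert: nothing for this URL can be in the cache yet, so the
            # first read will miss and go to the origin for fresh data
            # anyway. No invalidation request needed.
            return

        # Update: any cached copy is now stale, so ask the cache to drop it.
        # Note Varnish only honours PURGE if your VCL is configured for it.
        requests.request("PURGE", f"{VARNISH_HOST}/records/{record.id}",
                         timeout=1)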
On Wed, Jul 18, 2012 at 8:08 PM, John Meagher <john.m...@gmail.com> wrote:
TTL-based caching is for intermediate servers that handle caching on your behalf, rather than a cache you manage yourself. I haven't used Varnish, but similar tools apply TTLs via the Expires headers. Supporting invalidation forces your local cache to grab a fresh version. Invalidation is a very cheap operation to call, since it just sets a flag on the cached URL telling the cache that the next time someone requests that URL it needs to get a fresh copy.

In write-heavy applications the need for the cache itself should be checked out. If every request to the cache is a miss, then the cache is only adding to the processing time and complexity. Some benchmarking on your specific application is needed to check whether the cache improves things or not.
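To make the TTL side concrete, here's a minimal sketch of an origin handler that sets the caching headers (Flask is used purely for illustration; load_record and the 60-second TTL are assumptions). Any intermediate cache that understands HTTP caching headers can then serve the response without touching the origin until the TTL expires:

    from datetime import datetime, timedelta, timezone
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/records/<record_id>")
    def get_record(record_id):
        resp = jsonify(load_record(record_id))  # assumed data-access helper
        # TTL-based caching: an intermediate cache (Varnish, a CDN, the
        # browser) may reuse this response until the TTL runs out, with
        # no invalidation traffic from the backend.
        resp.headers["Cache-Control"] = "public, max-age=60"
        resp.headers["Expires"] = (
            datetime.now(timezone.utc) + timedelta(seconds=60)
        ).strftime("%a, %d %b %Y %H:%M:%S GMT")
        return resp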
On Tuesday, July 17, 2012 11:08:29 PM UTC-4, Kelly Sommers wrote:

I saw a tweet from Camille Fournier this evening mentioning Varnish, which is a caching HTTP reverse proxy. I should mention I don't have much experience with caching yet.

I saw a feature which is the ability to invalidate cached content by URL, and this seems to fit really well with RESTful services (or perhaps any HTTP service), but I wonder what implications this has on scalability. I thought the behaviour of the web was to be more TTL-based because that is much easier to scale?

I'm thinking of a write-heavy system where updates are happening at a high rate. It seems to me that a push-based cache invalidation mechanism would become a new scalability pain point? The problem is that the backend may not be aware of what the cache has cached, so it would constantly be communicating with the cache service on every single update. Am I correct in this assumption?
You received this message because you are subscribed to the Google Groups "Distributed Systems" group.
To post to this group, send email to distsys-discuss@googlegroups.com.
To unsubscribe from this group, send email to distsys-discuss+unsubscribe@googlegroups.com.