[Benchmark elided: the original message included test commands and timings for a ~4952-byte value, run three ways -- the raw string (i.e. the test code without wrapping the value in the Entry class), the old implementation, and the new implementation.]
The Entry model does provide one other feature which I think also warrants keeping it around and which is even potentially more valuable than the race_condition_ttl. It allows the cache to store nil values.
That being said, there is certainly room for improving the efficiency of the Entry model, and since it is called frequently and in performance-critical sections of code, I think the effort is certainly warranted. I've coded up an optimized version of it here: https://github.com/bdurand/rails/commit/e78c30100f54ae4366fdb9352987bae9b8c1c1e7.
On Fri, Sep 21, 2012 at 8:48 PM, Brian Durand <br...@embellishedvisions.com> wrote:
> The Entry model does provide one other feature which I think also warrants keeping it around and which is even potentially more valuable than the race_condition_ttl. It allows the cache to store nil values.

Hi Brian,

You are right. That in itself justifies Entry, I believe. We could add a comment in the source code of the memcached store that documents these two pros for the next guy wondering why we use Entry :).
I disagree very strongly with this line of reasoning. If you're using memcached as a cache store, you should be relying on its semantics, which is that nil and 'no value' are identical.
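The distinction being argued here can be shown in a few lines. This is an illustrative sketch in plain Ruby (a Hash stands in for a raw memcached client, and `Entry` is a hypothetical minimal wrapper, not the real `ActiveSupport::Cache::Entry`):

```ruby
# With raw storage, a nil read cannot tell "cached nil" from "cache miss".
raw_store = {}                  # stand-in for a raw memcached client
raw_store["user:1"] = nil       # legitimately cache a nil result

raw_store["user:1"]             # nil -- a hit
raw_store["user:2"]             # nil -- a miss; indistinguishable from the hit

# Wrapping values in an Entry-style object restores the distinction,
# at the cost of per-key overhead. Entry is a hypothetical stand-in here.
Entry = Struct.new(:value)
wrapped_store = {}
wrapped_store["user:1"] = Entry.new(nil)

hit  = wrapped_store["user:1"]  # an Entry whose value is nil
miss = wrapped_store["user:2"]  # nil: a genuine miss
```

This is exactly the trade being debated: the wrapper buys nil-vs-miss discrimination (and expiry metadata for race_condition_ttl) but is paid for on every key.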
I agree with Michael here. While I appreciate the attempt to eliminate the dog-piling effect (I worked on a project where that was a problem), it seems counterproductive to cache the cache, and end up taking a lot more memory. If we want the cache in app memory we would use the :memory_store.
Brian Morearty
In the example you give, the cache would not expand from 1GB to 5GB unless you're storing very small values with very small keys. If your average key + value size is around 1K, then the cache would only expand by ~4%, not 500%. If you need to store lots of small values, you should probably store raw values directly with dalli, since any use of ActiveSupport::Cache will add the overhead of an extra layer. If you're storing non-Strings to memcache, the internal instance variable names are probably adding far more overhead than Entry is, since those names are stored as strings in the serialized object.

I think it makes the most sense to look at ActiveSupport::Cache as a generic, consistent interface to a variety of caching back ends and have the default implementations support all features. There's nothing to prevent someone from implementing an optimized memcache backend if they have special needs for performance.
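The overhead figures being traded in this thread are easy to measure directly. A minimal sketch (plain Ruby; `Entry` here is a hypothetical stand-in with far less state than the real `ActiveSupport::Cache::Entry`, which also carries expiry and compression metadata and so costs more per key):

```ruby
# Compare the Marshal size of a raw payload against the same payload
# wrapped in a minimal Entry-like struct. The difference is the fixed
# per-entry serialization cost of this particular wrapper.
Entry = Struct.new(:value)

payload       = "x" * 1024
raw_bytes     = Marshal.dump(payload).bytesize
wrapped_bytes = Marshal.dump(Entry.new(payload)).bytesize

overhead = wrapped_bytes - raw_bytes     # fixed per-entry cost in bytes
percent  = 100.0 * overhead / raw_bytes  # relative cost for a ~1K value
```

For a ~1K value the relative cost is small single digits of a percent, which is the shape of the argument above: the overhead only dominates when keys and values are tiny.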
On Saturday, September 22, 2012 1:55:54 PM UTC-7, Michael Koziarski wrote:
> On Sunday, 23 September 2012 at 12:32 AM, Xavier Noria wrote:
>> On Fri, Sep 21, 2012 at 8:48 PM, Brian Durand <br...@embellishedvisions.com> wrote:
>>> The Entry model does provide one other feature which I think also warrants keeping it around and which is even potentially more valuable than the race_condition_ttl. It allows the cache to store nil values.
>>
>> You are right. That in itself justifies Entry, I believe. We could add a comment in the source code of the memcached store that documents these two pros for the next guy wondering why we use Entry :).
>
> I disagree very strongly with this line of reasoning. If you're using memcached as a cache store, you should be relying on its semantics, which is that nil and 'no value' are identical.
>
> Even with the optimised version, you're talking about storing 58 bytes vs 12 bytes. For a site with 1GB of stuff to store, should they really need to allocate 5GB of cache storage just so people who want to store nil (without using their own wrapper) can? The particular site where I encountered this issue upgraded to a version of Rails which used AS::Entry and had their working set expand from 4GB to more than 10GB. Frankly, that's not even close to an acceptable trade-off.
--
You received this message because you are subscribed to the Google Groups "Ruby on Rails: Core" group.
To view this discussion on the web visit https://groups.google.com/d/msg/rubyonrails-core/-/OOiW4Vsq61oJ.
To post to this group, send email to rubyonra...@googlegroups.com.
To unsubscribe from this group, send email to rubyonrails-co...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/rubyonrails-core?hl=en.
Yes, but only on the basis that relying on this behavior is a truly bad idea if you're actually using memcached at a large scale. But clearly there are other views on this functionality.

Given that some people clearly want this, using Entry should be an optional extra which defaults to off. Or perhaps ship another memcached store subclass with the bells and whistles. That lets people use the race-condition-ttl stuff and the cache-nil stuff if they need it, but doesn't apply a non-trivial per-key overhead to everyone who uses memcached.
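The "optional extra which defaults to off" idea could look something like the following. This is a hypothetical sketch, not the Rails API: the class name, the `raw:` option, and the Hash backing are all made up for illustration (the real stores talk to memcached and their Entry carries expiry state):

```ruby
# Sketch of an opt-in wrapper: wrap values in an Entry only when the
# store was not configured for raw storage. All names are illustrative.
class SketchStore
  Entry = Struct.new(:value)

  def initialize(raw: false)
    @raw  = raw
    @data = {}        # stand-in for a memcached client
  end

  def write(key, value)
    @data[key] = @raw ? value : Entry.new(value)
  end

  def read(key)
    stored = @data[key]
    @raw ? stored : (stored && stored.value)
  end
end
```

With `raw: true` every key skips the per-entry overhead entirely; without it, the store keeps the Entry semantics (including cached nils being distinguishable from misses at the storage layer, via the presence of a wrapper object).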
I opened a pull request with my changes: https://github.com/rails/rails/pull/7800. I updated the code a bit and fixed the tests, so it is slightly different from the branch referenced above. It should now be backward compatible with older cache entries as well.
As for the Array idea, that would certainly reduce overhead a little, but I'm not sure it's worth it in terms of maintainability. For what it's worth, 27 of the 58 bytes of serialization overhead come just from the class name "ActiveSupport::Cache::Entry".
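That class-name cost can be demonstrated directly: Marshal embeds the fully qualified class name in every dump. The structs below are illustrative stand-ins (not real Rails classes), chosen only to contrast a short name with a long namespaced one:

```ruby
# Marshal pays for the full class path on every serialized object, so a
# long namespace is a per-key tax. Both classes here are made up.
E = Struct.new(:value)

module LongNamespaceForIllustration
  module Cache
    Entry = Struct.new(:value)
  end
end

short_dump = Marshal.dump(E.new("v")).bytesize
long_dump  = Marshal.dump(LongNamespaceForIllustration::Cache::Entry.new("v")).bytesize

long_dump - short_dump  # tracks the extra characters in the class name
```

This is why ideas like serializing a bare Array (no class name at all) come up: the name "ActiveSupport::Cache::Entry" alone is 27 characters, nearly half the quoted 58-byte overhead.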