>>> c = SmartRamCache(cache_size=3)
>>> c.get('foo', lambda: 'bar')
'bar'
>>> c.get('foo', lambda: 'bar', force=True)
'bar'
>>> c.get('foo', lambda: 'bar', force=True)
'bar'
>>> c.get('baz', lambda: 'bar')
'bar'
>>> c.get('baz', lambda: 'bar', force=True)
'bar'
>>> c.get('baz', lambda: 'bar', force=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 20, in get
KeyError: 'foo'
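For anyone who wants to play with the behaviour above, here is a minimal sketch of what a bounded get-or-compute cache with that interface might look like. The class name is mine; cache_size and force are taken from the transcript, the "drop the oldest entry once the cap is exceeded" eviction policy is my assumption, and this is not the actual SmartRamCache code (it does not reproduce the KeyError shown above):

import collections

class BoundedRamCache(object):
    # Hypothetical stand-in for SmartRamCache: get-or-compute with a size cap.
    def __init__(self, cache_size=100):
        self.cache_size = cache_size
        self.storage = collections.OrderedDict()

    def get(self, key, f, force=False):
        # force=True recomputes the value even if the key is already cached
        if force or key not in self.storage:
            self.storage[key] = f()
            # evict the oldest entries until we are back under the cap
            while len(self.storage) > self.cache_size:
                self.storage.popitem(last=False)
        return self.storage[key]

With this sketch, BoundedRamCache(cache_size=3).get('foo', lambda: 'bar') returns 'bar' and the cache never holds more than three entries.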
I think it's great, and I'm annoyed I didn't think of it myself. It makes a lot of sense, and it could gain web2py a few crucial milliseconds per request. If we can refactor Session in the process of implementing this, even better.
Isn't a proper LRU best suited for the job? And what about concurrency?
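To make the question concrete, a rough sketch of an LRU bound plus a single lock is below; the names are mine and this is only to illustrate the concurrency concern, not a proposal for gluon code:

import collections
import threading

class LRUCache(object):
    # Illustrative LRU get-or-compute cache guarded by one lock.
    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self.lock = threading.RLock()
        self.storage = collections.OrderedDict()

    def get(self, key, f):
        with self.lock:
            if key in self.storage:
                # pop and re-insert to mark the key as most recently used
                value = self.storage.pop(key)
                self.storage[key] = value
                return value
        # compute outside the lock so a slow f() does not block other keys
        value = f()
        with self.lock:
            self.storage[key] = value
            while len(self.storage) > self.maxsize:
                self.storage.popitem(last=False)  # drop the least recently used
        return value

Computing f() outside the lock keeps contention low but lets two threads compute the same key at the same time; per-key locks would avoid that, at the cost of extra bookkeeping.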
--
Couldn't we use memcache for this? Basically the sessions would still be saved to disk/db, but they would be cached as they are retrieved, and the cached copy would be dropped whenever the session is modified.
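Roughly like this, for example (a sketch only, assuming the python-memcached client; load_from_storage and save_to_storage stand in for the existing disk/db code and are not real web2py functions):

import memcache  # python-memcached; any client with get/set/delete would do

mc = memcache.Client(['127.0.0.1:11211'])

def load_session(session_id, load_from_storage):
    # read-through: try memcache first, fall back to disk/db and cache the result
    key = 'session:' + session_id
    session = mc.get(key)
    if session is None:
        session = load_from_storage(session_id)
        mc.set(key, session, 3600)  # keep the cached copy for up to an hour
    return session

def save_session(session_id, session, save_to_storage):
    # on modification: write to disk/db as before, then drop the cached copy
    save_to_storage(session_id, session)
    mc.delete('session:' + session_id)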
--
The problem with cache.ram and cache.disk is that they do not enforce a bounded size. They can leak memory unless the developer takes care of clearing the cache explicitly.
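To illustrate, inside a controller something like the snippet below grows memory without bound, because time_expire only controls when a value is recomputed, not when the entry goes away; the clear() call is what "cleaning explicitly" means (load_profile and the key scheme are made up, and the regex form of clear() should be double-checked against gluon/cache.py for your version):

# one cache.ram entry per user stays in memory until it is cleared explicitly
def profile():
    user_id = request.args(0)
    return cache.ram('profile:%s' % user_id,
                     lambda: load_profile(user_id),  # hypothetical expensive load
                     time_expire=300)

# explicit cleanup the developer has to remember to do somewhere:
cache.ram.clear(regex='.*profile.*')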
On Mar 20, 2015, at 9:06 AM, Michele Comitini <michele....@gmail.com> wrote:
Yes, as a *cache* memcache would be OK, but it is somewhat slower and needs some system configuration.
--