cache cache cache


Niphlod

Mar 4, 2013, 6:04:39 PM3/4/13
to web2py-d...@googlegroups.com
Hi all,
   I'd like to enable setting cache headers in web2py with a little more control. Let's face it: client-side caching support in web2py is lacking.
 
The current implementation of decorating a function and caching the result in RAM is good **enough**, although I found some problems with the "recommended" way of doing it (e.g. using request.env.path_info as a key). Note: there may be advanced cache uses not mentioned in the book that I'm unaware of.

With the new wiki functionality, the many blog-based sites around, and the rising presence of semi-magic cache CDNs, Varnish, etc. out there, manually setting cache headers in web2py is cumbersome and error prone. The feature should be easy to use, possibly leveraging the same syntax as the current @cache. "Gold" patterns are archive(d) pages, threads, non-mutable REST representations, and so on....

So, I'd like to introduce:
a) a key that is worth using for caching the rendered response, including some additional details beyond request.env.path_info
b) some facility to alter cache headers only (it's fine to cache the view inside web2py, but it would be nice to make the browser cache the response directly too)
c) when the @newcache decorator (again, I'm not good at naming things ^_^) is used, do both a) and b) accordingly

proposed implementation:
from gluon import current
import hashlib

def newcache(time_expire, cache_model=None, session=False, vars=True, lang=False, user_agent=False, public=True):
    """
    time_expire: same as @cache
    cache_model: same as @cache
    session: adds response.session_id to the key
    vars: adds request.env.query_string
    lang: adds T.accepted_language
    user_agent: adds user_agent() if session is not True
    public: if False forces the Cache-Control to be 'private'
    """
    def wrap(f):
        def wrapped_f(*args):
            if current.request.env.request_method == 'GET':
                if time_expire:
                    cache_control = 'max-age=%(time_expire)s, s-maxage=%(time_expire)s' % dict(time_expire=time_expire)
                    if not session and public:
                        cache_control += ', public'
                    else:
                        cache_control += ', private'
                    current.response.headers['Pragma'] = None
                    current.response.headers['Expires'] = None
                    current.response.headers['Cache-Control'] = cache_control
                if cache_model:
                    cache_key = [current.request.env.path_info, current.response.view]
                    if session:
                        cache_key.append(current.response.session_id)
                    elif user_agent:
                        cache_key.append(str(current.request.user_agent().items()))
                    if vars:
                        cache_key.append(current.request.env.query_string)
                    if lang:
                        cache_key.append(current.T.accepted_language)
                    cache_key = hashlib.md5('__'.join(cache_key)).hexdigest()
                    return cache_model(cache_key, lambda: f(), time_expire=time_expire)
            return f()
        return wrapped_f
    return wrap

As you can see, compared to a "normal" @cache decorator the differences are:
- only GET requests are cached (POST, PUT, DELETE should never be cached)
- the default key contains path_info, response.view and query_string, so there are no problems with e.g. "search" queries (?q='abcd') and/or pagination ones (?page=0), etc.
- user_agent can be added (ideally one should have different views for mobile devices, but maybe all the logic is done in the view, i.e. {{if response.user_agent().is_mobile:}})
- lang can be added (here too, ideally one should use language rewrites, so the language part gets included in the path_info, but maybe only "button" translations are used)
- max-age and s-maxage are tuned according to the time_expire parameter
- session can be added so the result gets cached per user (i.e. a page layout different for each logged-in user)
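For instance, the max-age/s-maxage tuning described above boils down to this (a standalone sketch of the header construction only; no web2py objects involved):

```python
# Standalone sketch of the Cache-Control value the decorator builds
# for a public, session-less response (header-construction logic only).
time_expire = 60 * 60  # one hour, as in @newcache(60*60, ...)
cache_control = 'max-age=%(time_expire)s, s-maxage=%(time_expire)s' % dict(time_expire=time_expire)
cache_control += ', public'  # no session and public=True
print(cache_control)  # max-age=3600, s-maxage=3600, public
```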

Can we discuss it further (both in "conformant-standard headers" and "features" topics) ?

Thanks.

Michele Comitini

Mar 5, 2013, 5:15:04 AM3/5/13
to web2py-developers
+1

we need this badly.

2013/3/5 Niphlod <nip...@gmail.com>:
> --
> -- mail from:GoogleGroups "web2py-developers" mailing list
> make speech: web2py-d...@googlegroups.com
> unsubscribe: web2py-develop...@googlegroups.com
> details : http://groups.google.com/group/web2py-developers
> the project: http://code.google.com/p/web2py/
> official : http://www.web2py.com/
> ---
> You received this message because you are subscribed to the Google Groups
> "web2py-developers" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to web2py-develop...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>

Anthony

Mar 5, 2013, 10:36:34 AM3/5/13
to web2py-d...@googlegroups.com
Maybe:

def newcache(time_expire, cache_model=None, base_key=DEFAULT, session=False, vars=True, lang=False, user_agent=False, public=True):
    ...
    cache_key = [current.request.env.path_info, current.response.view] if base_key == DEFAULT else base_key

in case a custom cache_key is desired.

Anthony

Niphlod

Mar 5, 2013, 11:06:21 AM3/5/13
to web2py-d...@googlegroups.com
good one.

PS1: maybe user_agent=True should take into account just is_tablet and is_mobile, or, if a dict is passed, only the values of the dict.... I don't think many people tune the response on the os, dist, flavour, etc.
For the "I make a different layout for every device" guys there's response.view embedded in the key.
For the "I make a long list of ifs for every device" ones the syntax wouldn't become that bad anyway:
@newcache(60*60, cache_model='...', user_agent=request.user_agent())

PS: I was thinking about a "speedy shortcut" to pass all the options around without having to write them down every time...., i.e. a quick=None parameter that takes the initials of the settings:
Session, Vars, Lang, User_Agent, Public.
So, if I go for "cache the result for any user" I could use
@newcache(60*60, cache_model=..., quick="VLP")
or, if I have a user-dependent view,
@newcache(60*60, cache_model=..., quick="UVL")
or, again, if I want to return the same page regardless of different vars, users, languages, etc.,
@newcache(60*60, cache_model=..., quick="")

The quick tests I did yesterday just before posting this led to scary results: when I put the logic just after the if current ... == 'GET' check (because ideally I'd like to execute that logic only when needed), all kinds of issues came up... I assume it's related to the scope of the variables, but I didn't manage to finish exploring all the problems. Would someone who is a master of decorators care to explain why, e.g., a

session = True if 'S' in quick else False

just after

if current.request.env.request_method == 'GET':

fails badly?
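For what it's worth, the failure can be reproduced outside web2py, assuming it is the classic closure-scoping pitfall (a sketch, not web2py code):

```python
# Minimal reproduction (an assumption about what is going on, not web2py
# code). Assigning to `session` anywhere inside wrapped_f makes it local to
# wrapped_f for the WHOLE body, shadowing the enclosing parameter; a read on
# a code path that skips the assignment raises UnboundLocalError.
def newcache(quick=None, session=False):
    def wrapped_f(method):
        if method == 'GET':
            session = 'S' in quick  # this makes `session` local everywhere
        return session              # non-GET path: local name never bound
    return wrapped_f

f = newcache(quick='S')
print(f('GET'))    # True: the assignment ran, so the local name is bound
try:
    f('POST')      # the assignment was skipped
except UnboundLocalError as e:
    print('UnboundLocalError:', e)
```

Binding to fresh names (session_, vars_, ...), as the later version in this thread does, avoids the shadowing; in Python 3 a nonlocal declaration would be another way out, but web2py at the time targeted Python 2, where only new names or a mutable container work.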

szimszon

Mar 5, 2013, 2:08:12 PM3/5/13
to web2py-d...@googlegroups.com
Hi!

What about ccache (client-cache) instead of newcache? :)

+1 for the idea!

Anthony

Mar 5, 2013, 2:37:05 PM3/5/13
to web2py-d...@googlegroups.com
It's not just for client caching -- also handles server-side caching. Maybe we could just alter the existing Cache class instead of creating a new decorator -- this decorator takes the same arguments as Cache.__call__ plus some new ones, so it would be backward compatible.

Anthony

Niphlod

Mar 5, 2013, 3:10:11 PM3/5/13
to web2py-d...@googlegroups.com
well, I'm not arguing about the name, but the points are kinda:
- if you use @newcache(time_expire=60*60) you just return cache headers according to the duration set
- if you use @newcache(time_expire=60*60, cache_model=cache.something) you return cache headers AND store the returned result in the cache.

Niphlod

Mar 5, 2013, 3:38:41 PM3/5/13
to web2py-d...@googlegroups.com
PS: ironing out some bugs; will post an updated version soon.

Niphlod

Mar 5, 2013, 4:29:16 PM3/5/13
to web2py-d...@googlegroups.com
this should play well with services too...
def newcache(time_expire, cache_model=None, base_key=None, session=False, vars=True, lang=True, user_agent=False, public=True, quick=None):
    """
    time_expire: same as @cache
    cache_model: same as @cache
    base_key: use this key and nothing else (there's cache.with_prefix for namespacing)

    session: adds response.session_id to the key
    vars: adds request.env.query_string
    lang: adds T.accepted_language
    user_agent: if True, adds is_mobile and is_tablet to the key.
        Pass a dict to use all the needed values (uses str(.items())) (e.g. user_agent=request.user_agent())
        used only if session is not True

    public: if False forces the Cache-Control to be 'private'
    quick: Session,Vars,Lang,User-agent,Public: fast overrides with initial strings, e.g. 'SVLP' or 'VLP'
    """
    def wrap(f):
        def wrapped_f():
            if current.request.env.request_method == 'GET':
                # resolve the quick overrides once, up front, so both the
                # header branch and the cache branch can use them
                if quick:
                    session_ = True if 'S' in quick else False
                    vars_ = True if 'V' in quick else False
                    lang_ = True if 'L' in quick else False
                    user_agent_ = True if 'U' in quick else False
                    public_ = True if 'P' in quick else False
                else:
                    session_, vars_, lang_, user_agent_, public_ = session, vars, lang, user_agent, public
                if time_expire:
                    cache_control = 'max-age=%(time_expire)s, s-maxage=%(time_expire)s' % dict(time_expire=time_expire)
                    if not session_ and public_:
                        cache_control += ', public'
                    else:
                        cache_control += ', private'
                    current.response.headers['Pragma'] = None
                    current.response.headers['Expires'] = None
                    current.response.headers['Cache-Control'] = cache_control
                if cache_model:
                    if base_key:
                        cache_key = base_key
                    else:
                        cache_key = [current.request.env.path_info, current.response.view]
                        if session_:
                            cache_key.append(current.response.session_id)
                        elif user_agent_:
                            if user_agent_ is True:
                                cache_key.append("%(is_mobile)s_%(is_tablet)s" % current.request.user_agent())
                            else:
                                cache_key.append(str(user_agent_.items()))
                        if vars_:
                            cache_key.append(current.request.env.query_string)
                        if lang_:
                            cache_key.append(current.T.accepted_language)
                        cache_key = hashlib.md5('__'.join(cache_key)).hexdigest()
                    return cache_model(cache_key, lambda: f(), time_expire=time_expire)
            return f()
        wrapped_f.__name__ = f.__name__
        wrapped_f.__doc__ = f.__doc__
        return wrapped_f
    return wrap

Anthony

Mar 5, 2013, 4:58:07 PM3/5/13
to web2py-d...@googlegroups.com
base_key was intended to replace only the request.env.path_info and response.view portion of the cache_key -- even when base_key is specified, it should still be possible to optionally include session, user agent, etc.

Anthony

Niphlod

Mar 5, 2013, 5:15:29 PM3/5/13
to
ok.

def newcache(time_expire, cache_model=None, base_key=None, session=False, vars=True, lang=True, user_agent=False, public=True, quick=None):
   
"""

    time_expire: same as @cache
    cache_model: same as @cache
    base_key: optionally replaces the default key (request.env.path_info, request.view)

                    cache_key
= [current.request.env.path_info, current.response.view] if not base_key else [base_key]
                   
if session_:

                        cache_key
.append(current.response.session_id)
                   
elif user_agent_:
                       
if user_agent_ is True:
                            cache_key
.append("%(is_mobile)s_%(is_tablet)s" % current.request.user_agent())
                       
else:
                            cache_key
.append(str(user_agent_.items()))
                   
if vars_:
                        cache_key
.append(current.request.env.query_string)
                   
if lang_:
                        cache_key
.append(current.T.accepted_language)
                    cache_key
= hashlib.md5('__'.join(cache_key)).hexdigest()
                   
return cache_model(cache_key, lambda: f(), time_expire=time_expire)
           
return f()
        wrapped_f
.__name__ = f.__name__
        wrapped_f
.__doc__ = f.__doc__
       
return wrapped_f
   
return wrap

Massimo DiPierro

Mar 5, 2013, 5:16:03 PM3/5/13
to web2py-d...@googlegroups.com
When done please send it as a patch.

On Mar 5, 2013, at 4:14 PM, Niphlod wrote: [quoted message trimmed]

Niphlod

Mar 5, 2013, 5:24:02 PM3/5/13
to web2py-d...@googlegroups.com
Gosh, the Google Groups interface messes around with repeated text.... here's the final version (I added a prefix parameter so the cache can be cleared selectively)


On Tuesday, March 5, 2013 11:16:03 PM UTC+1, Massimo Di Pierro wrote:
When done please send it as apatch.
 
I'm a bit doubtful about the headers returned (luckily no one had anything to say against the general idea). Also, do we want to include it in gluon/cache.py?

PS: what does a @cache decorator need to "do" in order to be fully valid? (e.g. I found out that the first implementation didn't play well with services.....)
newcache_decorator.py

Anthony

Mar 6, 2013, 11:55:10 AM3/6/13
to web2py-d...@googlegroups.com
I see you'd like cache_model=None to mean not to use the server-side cache system, whereas the current @cache decorator takes cache_model=None to mean cache.ram should be used. I know it's not ideal, but as a way to avoid adding a redundant cache decorator, maybe we could simply build this new functionality into the current @cache decorator by adding an argument like cache_type, which could be a string or list containing 'client' and/or 'server' (e.g., 'client server' or ['client', 'server']). The code could then test as follows:

    if 'client' in cache_type and time_expire:
        ...
    if 'server' in cache_type and cache_model:
        ...

To do client-only caching, you would then have to do cache_type='client' rather than cache_model=None.

Another option might be to let cache_model=None default to cache.ram (as it currently does), but let cache_model=False (or maybe even cache_model='client') turn off server caching and only set the client headers.

If that's not desirable, then perhaps we can come up with a better name than newcache for this new decorator (maybe something like cscache for "client-server cache"). I would lean toward adapting the current @cache decorator if possible, though.
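The dispatch described above can be sketched standalone (the function name and return values are hypothetical, just to show that a string like 'client server' and a list both work with the `in` test):

```python
# Hypothetical sketch of the cache_type dispatch (not web2py code).
# `in` behaves the same whether cache_type is a string ('client server')
# or a list (['client', 'server']).
def dispatch(cache_type, time_expire=None, cache_model=None):
    actions = []
    if 'client' in cache_type and time_expire:
        actions.append('set-headers')   # client-side: emit Cache-Control etc.
    if 'server' in cache_type and cache_model:
        actions.append('store')         # server-side: store the rendered result
    return actions

print(dispatch('client server', time_expire=60, cache_model='ram'))  # ['set-headers', 'store']
print(dispatch('client', time_expire=60))                            # ['set-headers']
print(dispatch(['server'], cache_model='ram'))                       # ['store']
```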

Anthony

Niphlod

Mar 6, 2013, 12:24:06 PM3/6/13
to web2py-d...@googlegroups.com
I tend to avoid cache.ram and cache.disk as much as I can because of the blocking involved in both (I got bitten once and I don't want to repeat the experience ^_^).

It's quite "a shame" that cache_model=None resolves to cache.ram by default.... I'd guess that without distinct names we'll face a chicken-and-egg problem. The whole point was "patching" the current cache decorator to cache only GETs (including vars by default, plus language and user-agent) with the added bonus of being able to set the pertinent headers without hassle.

Anthony

Mar 6, 2013, 3:12:24 PM3/6/13
to web2py-d...@googlegroups.com
The whole point was "patching" the current cache decorator to cache only GETs (including vars by default, plus language and user-agent) with the added bonus of being able to set the pertinent headers without hassle.

Oh, yeah, forgot you're limiting it to GET requests, so that's another difference in default behavior -- but not sure that would really count as breaking backward compatibility (should POST requests really ever be cached?). We could always add an option to allow non-GET requests in case someone really needs that.

Anyway, assuming we allow the GET-only change, I think you could still use the current cache decorator without hassle, with the exception that the default behavior would have to include server-side caching, so you'd have to explicitly set a flag to do client-only caching.

Maybe we could replace the current @cache decorator with something like this new one, but maintain all the current default behavior and require explicit arguments for the additional functionality. Then we could also add a new @cache.client decorator that calls the new @cache decorator but sets the arguments to do client-only caching.

Another question -- do we need both base_key and prefix? The idea of base_key was to allow the user to provide a custom base rather than the default. If they also want a prefix, they could simply include it in the base_key. I see some utility in having both (in case you want the default base key but also need an additional prefix, and you don't want the hassle of manually specifying the whole thing), but that may be overkill for a limited use case.

Anthony

Niphlod

Mar 6, 2013, 4:50:43 PM3/6/13
to web2py-d...@googlegroups.com
well, the implementation differs a little between mine and yours.... I added the prefix just as a convenience because I could have used it a few times (just to allow a quick "cache.clear('prefix.*')").
On the other side, I'd never use the base_key argument if given the chance to have a prefix.... base_key in your implementation would render the same representation for different URLs, which is not something I envisioned. Can you provide an example of that being useful?

Another difference is that this works only for decorated functions in web2py (and decorated services). I don't know if the decorator is supposed to work in modules too.....

Anyway, let's put it to a vote and see where it goes?

Anthony

Mar 6, 2013, 5:52:43 PM3/6/13
to web2py-d...@googlegroups.com
On Wednesday, March 6, 2013 4:50:43 PM UTC-5, Niphlod wrote:
well, the implementation differs a little between mine and yours.... I added the prefix just as a convenience because I could have used it a few times (just to allow a quick "cache.clear('prefix.*')").
On the other side, I'd never use the base_key argument if given the chance to have a prefix.... base_key in your implementation would render the same representation for different URLs, which is not something I envisioned. Can you provide an example of that being useful?

Not off the top of my head -- just trying to allow some flexibility in case you don't want a key that has the url plus view and need some alternative instead.
 

Another difference is that this works only for decorated functions in web2py (and decorated services). I don't know if the decorator is supposed to work in modules too.....

The @cache decorator is the same, no? It's just for decorating web2py functions.

Anthony

Niphlod

Mar 6, 2013, 6:15:25 PM3/6/13
to web2py-d...@googlegroups.com
Forgive me Anthony, you're rock solid.....It's late here and I wrestled with git and the diffbook, but let's be pedantic (so I can adjust the patch properly)

@cache(request.env.path_info, time_expire=60) was the one until now.

Let's say we include this in gluon/cache.py .... to not break backward compatibility we set this @newcache as @cache.client instead.....

Should we inform the users that:
a) it's safer to use @cache.client(time_expire=60, cache_model=cache.ram) instead of @cache(request.env.path_info, time_expire=60)
b) @cache.client(time_expire=60) used on controller functions is a useful shortcut to set cache headers
c) @cache.client(time_expire=60, quick='SVL') takes care of user-dependent functions
d) @cache.client needs to be used only on controller functions or services (tl;dr: there must be a valid request/response) .... in which case base_key is kinda redundant if prefix is provided
e) ... ?

a, b, c, or d (or combination of) ?

Anthony

Mar 6, 2013, 10:52:42 PM3/6/13
to web2py-d...@googlegroups.com
I guess I was thinking to replace Cache.__call__ with a close variation of your new function, but with all the defaults set so it doesn't break backward compatibility (so there would also need to be a way to turn off the header setting for client-side caching, possibly via a cache_type argument, as I suggested). Then create a separate Cache.client that simply calls Cache.__call__, but with the arguments set to get the default behavior you prefer (i.e., no server-side caching). So, Cache.client becomes just a shortcut way to make a particular call to Cache.__call__.
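In other words, something like this rough sketch (an assumption about the shape, not the actual gluon/cache.py API, which is more involved):

```python
# Rough sketch (assumption, not gluon/cache.py): Cache.client as a thin
# shortcut that calls Cache.__call__ with arguments pinned to client-only
# behavior.
class Cache(object):
    def __call__(self, key=None, time_expire=300, cache_model=None,
                 cache_type='client server'):
        # stub: the real version would set headers and/or store the result
        return dict(key=key, time_expire=time_expire,
                    cache_model=cache_model, cache_type=cache_type)

    def client(self, key=None, time_expire=300):
        # same call, defaults forcing "headers only, no server-side cache"
        return self(key, time_expire, cache_model=None, cache_type='client')

cache = Cache()
print(cache.client('mykey', 60))
```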

Anthony

Paolo valleri

Mar 7, 2013, 2:48:01 AM3/7/13
to web2py-d...@googlegroups.com
Hi all, I like this new cache.
I haven't tried it yet; right now I am just wondering about errors that could occur in the function wrapped by @newcache etc. Namely, if the function fails, are we going to cache the error, or are we avoiding that?

Paolo

Niphlod

Mar 7, 2013, 3:52:36 AM3/7/13
to web2py-d...@googlegroups.com
ATM it behaves exactly like @cache with respect to exceptions, but if the current behaviour needs fixing, now is the time to deal with it.

PS: let's say it caches the error response (I don't know, I didn't test it): whether adding a cache decorator to something that sometimes works and sometimes doesn't is a good idea varies from case to case ^_^
e.g. good behaviour:
- /myapp/default/index/a: the function expects an integer as args(0), so an http error is raised (either a 404 if managed in the function or a 500 if not). Caching the 404 or the 500 is good.
bad behaviour:
- /myapp/default/index relies on an external service (e.g. twitter). The service is down, your fetch to twitter fails; ideally you should raise a redirect, but you didn't, so it returns a 500 error.
Caching the 500 is bad.

paolo....@gmail.com

Mar 7, 2013, 4:12:28 AM3/7/13
to web2py-d...@googlegroups.com
I have exactly problem no. 2 that you described. Basically, I have an app that gets data through an xml_rpc server.


 Paolo


2013/3/7 Niphlod <nip...@gmail.com>

Niphlod

Mar 7, 2013, 4:42:20 AM3/7/13
to web2py-d...@googlegroups.com
well, I think we can inspect the status code...
let's walk into darkness... per the http specs: status codes beginning with 1 or 2 should be cache-safe; 3 too, except that 303 and 307 should never be cached (i.e. clients will discard them anyway); 4 ideally should be cached anyway; and 5 is an error (cache-safe depending on the implementation).
Solution: provide a status parameter that takes a list of status codes to exclude from caching (and by default cache 1, 2, 3, 4, leaving out 5)?
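A sketch of that exclusion test (the function name and the leading-digit convention are assumptions, not the final patch):

```python
# Sketch (assumption, not the actual patch): decide cacheability from the
# response status class, excluding the 5xx class by default.
def should_cache(status, excluded=(5,)):
    return (status // 100) not in excluded  # e.g. 503 -> class 5 -> excluded

print(should_cache(200))                   # True
print(should_cache(404))                   # True
print(should_cache(503))                   # False
print(should_cache(404, excluded=(4, 5)))  # False
```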

PS: you get the data through an xml_rpc server. If you can't fetch it, do you return a 500-ish or a 400-ish status?

paolo....@gmail.com

Mar 7, 2013, 5:27:57 AM3/7/13
to web2py-d...@googlegroups.com
I return a 400 in both cases: I put the code in a try/except so that when it fails I return the string 'data not available'. But the current behavior is that the string 'data not available' is cached like the normal responses (I wrapped the function with @cache etc.), and this is what I would like to avoid.

 Paolo


2013/3/7 Niphlod <nip...@gmail.com>

Niphlod

Mar 7, 2013, 4:43:56 PM3/7/13
to web2py-d...@googlegroups.com
umpf. I need help ^_^

newcache.py is the decorator function, working properly.
I decided to take a spin at implementing @cache.client separately, without touching the @cache decorator.
We'd only need to inform users that if the function is an action, we recommend @cache.client instead of using request.env.path_info as a key (plus all the goodies associated with client-side caching).

I'm a bit lost with all the wrapping, though; rough day at work.... does somebody want to help?

cache.closest.patch is the patch to apply to gluon/cache.py to introduce @cache.client.

The bit of a problem I'm facing is that it gets called as soon as the controller is executed (while I'd like to execute it only when the decorated function gets called).
cache.almostthere.patch is the attempt, but it returns CacheAction as a string... what am I missing? I know it's just a bit away ^_^
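For reference, the usual shape of a decorator that takes arguments (a generic sketch with hypothetical names, not the patch itself): the two outer levels run when the controller file is executed, so anything that must wait for the request has to live in the innermost function.

```python
# Generic sketch of a parameterized decorator (hypothetical names, not the
# actual patch). Only wrapped_f runs when the action is actually called;
# cache_client(...) and wrap(...) both run when the controller is executed.
def cache_client(time_expire=300):       # runs at decoration time
    def wrap(f):                         # runs at decoration time
        def wrapped_f(*args, **kwargs):  # runs at CALL time: logic goes here
            # header-setting / cache lookup would happen here
            return f(*args, **kwargs)
        wrapped_f.__name__ = f.__name__
        wrapped_f.__doc__ = f.__doc__
        return wrapped_f
    return wrap

@cache_client(60)
def index():
    return 'hello'

print(index())  # hello
```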



cache.almostthere.patch
cache.closest.patch
newcache.py

Massimo DiPierro

Mar 10, 2013, 9:10:42 AM3/10/13
to web2py-d...@googlegroups.com
Is the proposal to include it in cache.py? Should it replace or be merged with the existing cache function?


Niphlod

Mar 10, 2013, 9:19:33 AM3/10/13
to web2py-d...@googlegroups.com
I can pack it as a plugin, but it would be nice to address client-side caching for once as something supported natively by web2py, because as of right now it's counterintuitive and the hints in the book lead to a totally wrong result.
Until now @cache has been used to cache results and views; for views the book recommends request.env.path_info as a key: it's not enough, and it doesn't respect the http method involved (among a lot of other things).
The idea is to have @cache for "raw results" and @cache.client to either (or both) alter client-cache headers and store computed views.
I had no time yet to review the implementation to make it work with services too, and to execute the wrap() only when the function is called, rather than when the controller is called. I have some time now to see if I can make it work correctly, but if something wrong about the current implementation kinda "jumps up" as an error to the eye of a python decorators master, maybe he can chime in ^_^


Niphlod

Mar 10, 2013, 9:36:20 AM3/10/13
to web2py-d...@googlegroups.com
I stand corrected; I think I found the problem.
Now, what I am missing is some tests to see whether the cache decorator works as expected, like the "normal" one..... I didn't see any tests, or the book covering it.....

i.e. should it be

@cache.client(....)
@service.json
def whatever():
    return dict()

or

@service.json
@cache.client(....)
def whatever():
    return dict()
?

Same thing goes for @auth decorators.....

Anthony

Mar 10, 2013, 9:54:21 AM3/10/13
to web2py-d...@googlegroups.com
My suggestion for an @cache.client method was for a version that only did the client-side caching by default, which was your original goal. If that method is to handle both client and server caching, I'm not sure it makes sense to call it @cache.client. I'd still lean toward the approach I described here: https://groups.google.com/d/msg/web2py-developers/Jwf7WVByqLw/4MCIXLv8zWAJ.

Anthony

Niphlod

Mar 10, 2013, 10:02:56 AM3/10/13
to web2py-d...@googlegroups.com
the problem I faced with that is that you must execute the function first in order to inspect the status code. This can't be accomplished by the @cache call without breaking backward compatibility.

Anthony

Mar 10, 2013, 10:17:58 AM3/10/13
to web2py-d...@googlegroups.com
Maybe I'm not following correctly -- if the function generates an error, won't it occur (and response.status get changed to an error code) after you check for a valid response.status?

Also, are you no longer providing a client-only option (looks like headers are only set when cache_model is specified)?

Anthony

Niphlod

Mar 10, 2013, 10:29:54 AM3/10/13
to web2py-d...@googlegroups.com


On Sunday, March 10, 2013 3:17:58 PM UTC+1, Anthony wrote:
Maybe I'm not following correctly -- if the function generates an error, won't it occur (and response.status get changed to an error code) after you check for a valid response.status?

the execution model is tricky; I'm fighting against it right now.....
if the function generates an error, it gets called "early" and the cache won't even get called. I'm almost fine with that, because people needing to return a meaningful response will likely have wrapped the "problem" in a try/except.
But, there's always a "but"...
let's say I am a perfectly legit user trying to do

def whatever():
    try:
        rtn = sometimes_fails()
    except:
        rtn = 'try again in a few minutes'
        response.status = 503
    return rtn

the normal @cache decorator does.....
1st call --> 503, try again in a few minutes
2nd call --> 200, try again in a few minutes (whhhhaaaaattt ????)

@cache.client should do...
1st call --> 503, try again in a few minutes, no cache whatsoever (not even headers)
2nd call --> 200, this time didn't fail, cache (headers and body)
 

Also, are you no longer providing a client-only option (looks like headers are only set when cache_model is specified)?

corrected that part in the last iteration... you're totally right: cache_model=None means headers only, cache_model=something means headers AND cache the returned value.

Niphlod

Mar 10, 2013, 12:09:31 PM3/10/13
to web2py-d...@googlegroups.com
here's the "final" patch. It works with services and @auth decorators (assuming the correct order is @cache.client on top of @service.json).

PS: I know that the preferred way to prefix a key is to use cache_model=cache.with_prefix(), but then clearing out the cache for "bad response statuses" doesn't work as expected, so there's a handy prefix parameter for it.
cache.client.patch