> Could we get a quick vote from any core committers and other
> interested parties, so that we can move forward to fixing and closing
> these tickets?
From what I've seen so far, your proposal sounds good. Is there a way
that we can keep the performance benefits of ConstantIncludeNode? For
example, if an IncludeNode detects that it resolves to the same template
path as last time, it doesn't load the template from disk again? If so,
that would remove my only objection.
Regards,
Luke
--
"Underachievement: The tallest blade of grass is the first to be
cut by the lawnmower." (despair.com)
Luke Plant || http://lukeplant.me.uk/
> From what I've seen so far, your proposal sounds good.
I agree — consistency is good.
> Is there a way
> that we can keep the performance benefits of ConstantIncludeNode? For
> example, if an IncludeNode detects that it resolves to the same template
> path as last time, it doesn't load the template from disk again? If so,
> that would remove my only objection.
ConstantIncludeNode improves performance because it calls django.template.loader.get_template() only once in __init__() and not in each call to render().
get_template() invokes the template loading machinery to create a compiled Template object, given a template path. If it's a performance bottleneck, we can memoize its results. That will improve performance of all template lookup operations, not only {% include %}.
We must ensure that during development, editing a template without touching the Python code gives us the new version, not a stale compiled, memoized one. The easiest approach is to memoize only when TEMPLATE_DEBUG is False.
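A minimal sketch of that idea, kept self-contained for illustration: `memoize_unless_debug`, `load_template`, `calls` and the module-level `TEMPLATE_DEBUG` flag are all stand-ins invented here, not Django API; in Django the wrapped function would be get_template() and the flag would come from settings.

```python
import functools

TEMPLATE_DEBUG = False  # stand-in for settings.TEMPLATE_DEBUG


def memoize_unless_debug(func):
    """Cache results by template name, unless we are debugging templates."""
    cache = {}

    @functools.wraps(func)
    def wrapper(template_name):
        if TEMPLATE_DEBUG:
            # Always reload so edits to templates show up immediately.
            return func(template_name)
        if template_name not in cache:
            cache[template_name] = func(template_name)
        return cache[template_name]

    return wrapper


calls = []  # records how often the "loader" really runs


@memoize_unless_debug
def load_template(name):
    # In Django this would invoke the template loaders and compile a Template.
    calls.append(name)
    return "<compiled %s>" % name


load_template("base.html")
load_template("base.html")
print(len(calls))  # 1 -- the template was loaded only once
```

Because the cache keys on the template path, this would speed up every get_template() call, not just {% include %}.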
There is already some code to cache loaded templates in django.template.loaders.cached.Loader, but as far as I can tell, it's only used during Django's own tests. The same goes for the reset() method of template loaders. If memoizing doesn't work out (it's a bit crude), maybe we could extend this code to cache all template loading operations.
Best regards,
--
Aymeric Augustin.
> ConstantIncludeNode improves performance because it calls
> django.template.loader.get_template() only once in __init__() and not
> in each call to render().
>
> get_template() invokes the template loading machinery to create a
> compiled Template object, given a template path. If it's a
> performance bottleneck, we can memoize its results. That will improve
> performance of all template lookup operations, not only {% include
> %}.
We already have the cached template loader, and we don't want to invoke
that kind of behaviour unless asked for (it's up to users to put it in
their TEMPLATE_LOADERS settings), as it has significant memory overhead.
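For context, opting in to the cached loader looks something like this in settings.py (the wrapped loader list is just an example configuration, not a recommendation):

```python
# settings.py (sketch): wrap the usual loaders in the cached loader.
# The cached loader compiles each template once and keeps it in memory,
# which is why it is opt-in -- the cache is never invalidated at runtime.
TEMPLATE_LOADERS = (
    ('django.template.loaders.cached.Loader', (
        'django.template.loaders.filesystem.Loader',
        'django.template.loaders.app_directories.Loader',
    )),
)
```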
I was thinking something simpler, like this:
class IncludeNode(BaseIncludeNode):
    def __init__(self, template_name, *args, **kwargs):
        super(IncludeNode, self).__init__(*args, **kwargs)
        self.template_name = template_name

    def render(self, context):
        try:
            template_name = self.template_name.resolve(context)
            if getattr(self, 'last_template_name', None) == template_name:
                template = self.template
            else:
                template = get_template(template_name)
                self.template = template
                self.last_template_name = template_name
            return self.render_template(template, context)
        except:
            if settings.TEMPLATE_DEBUG:
                raise
            return ''
If I'm thinking correctly, that will keep the included template in
memory only for the lifetime of the parent template, and will avoid
loading it more than once if it is a static string and the include is in
a loop.