Memory, Cache, Pruning and RequestScope / HttpContext


karl

Nov 14, 2009, 1:00:16 PM
to ninject
I was running a load tester against an application and looking at a
memory profiler. I noticed pretty significant memory growth and
started digging into it. My instances were being rooted by Ninject.
This was worrisome. After changing the code to hardcoded dependencies, I
re-ran the test and the memory leak was gone.

I started to dig more deeply into the issue, specifically paying
attention to what was holding on to my references - CacheEntry
objects. Looking at how Ninject manages scoped objects, I realized
that Ninject essentially adds the scope to an internal collection via
a WeakReference. Then, via a timer, the collection is pruned of
unreferenced objects.
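The mechanism described above can be sketched in miniature. The following is a toy Python analogy (not Ninject's actual code): a cache whose scope keys are held via weak references and whose entries are only released when a pruner runs, which is why instances stay rooted between prunes.

```python
import weakref


class Scope:
    """Arbitrary scope object (e.g. one per web request)."""


class WeakScopedCache:
    """Toy analog of a cache holding scope keys via weak references.

    A background timer would call prune() periodically; here we call it
    manually. Until prune() runs, cached instances stay rooted by the
    cache -- the memory growth observed under load."""

    def __init__(self):
        self._entries = []  # list of (weakref-to-scope, instance)

    def remember(self, scope, instance):
        self._entries.append((weakref.ref(scope), instance))

    def try_get(self, scope):
        for scope_ref, instance in self._entries:
            if scope_ref() is scope:
                return instance
        return None

    def prune(self):
        # Drop entries whose scope has been garbage-collected.
        self._entries = [(r, i) for r, i in self._entries if r() is not None]


cache = WeakScopedCache()
scope = Scope()
cache.remember(scope, "service-instance")
assert cache.try_get(scope) == "service-instance"

del scope      # the scope dies with the request...
cache.prune()  # ...but the instance is only released when the pruner runs
assert cache._entries == []
```

The key point is that nothing happens between the scope's death and the next prune; during that window every cached instance is still reachable from the cache.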

This seems like a clever solution to me (the bad kind of clever). I
see some benefits to the approach, but I decided to make a slight
change (this is all off of 2.0 trunk code). In the Cache object, I
changed the Remember and TryGet methods to store objects directly in
the HttpContext.Items collection (when appropriate).

The beginning of Remember looks like:
Ensure.ArgumentNotNull(context, "context");
var scope = context.GetScope();
#if !NO_WEB
if (scope is System.Web.HttpContext)
{
    ((System.Web.HttpContext)scope).Items.Add(context.Binding, reference.Instance);
    return;
}
#endif

The beginning of TryGet looks like:
Ensure.ArgumentNotNull(context, "context");
var scope = context.GetScope();
#if !NO_WEB
if (scope is System.Web.HttpContext)
{
    return ((System.Web.HttpContext)scope).Items[context.Binding];
}
#endif
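The idea in both snippets is to bypass the weak-reference cache entirely when the scope is the current HTTP request, storing the instance in a per-request dictionary that dies with the request. A minimal Python analogy (names are illustrative, not Ninject's API):

```python
class FakeHttpContext:
    """Stand-in for System.Web.HttpContext: Items is a dictionary that
    lives exactly as long as the request, so anything stored in it is
    released when the request ends -- no pruning timer needed."""

    def __init__(self):
        self.items = {}


def remember(entries, scope, binding, instance):
    # Request scope: let the request's own dictionary root the instance.
    if isinstance(scope, FakeHttpContext):
        scope.items[binding] = instance
        return
    # Any other scope: fall back to the long-lived, timer-pruned cache.
    entries.setdefault(binding, []).append((scope, instance))


def try_get(entries, scope, binding):
    if isinstance(scope, FakeHttpContext):
        return scope.items.get(binding)
    for cached_scope, instance in entries.get(binding, []):
        if cached_scope is scope:
            return instance
    return None


entries = {}
request = FakeHttpContext()
remember(entries, request, "IRepository", "repo-instance")
assert try_get(entries, request, "IRepository") == "repo-instance"
assert entries == {}  # nothing leaked into the shared cache
```

Because the shared cache is never touched for request-scoped objects, nothing outlives the request regardless of when (or whether) the pruner runs.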

I ran my load test with and without this change, you can see the
difference in memory usage here:
http://twitpic.com/pi7pr/full

The load test attempts to simulate real load, so there's a lot of
randomness to it. The difference in spikes is understandable given
how items are purged from Ninject (on a timer); however, the original
code does seem to take up more memory (top pink), have more live
instances (bottom pink), and more total instances (middle red) even
after the purges.

Phil Haack

Nov 23, 2009, 12:19:10 AM
to ninject
I ran into the same problem and the solution described by Karl here
made a huge difference.
My code was slightly different in that I avoided the double cast.
Perhaps a premature optimization, as I didn't measure the impact.

public void Remember(IContext context, InstanceReference reference)
{
    Ensure.ArgumentNotNull(context, "context");
    var scope = context.GetScope();
    var httpScope = scope as HttpContext;
    if (httpScope != null)
    {
        httpScope.Items.Add(context.Binding, reference.Instance);
        return;
    }

    var entry = new CacheEntry(context, reference);

    lock (_entries)
    {
        _entries[context.Binding].Add(entry);
    }

    var disposableScope = scope as INotifyWhenDisposed;
    if (disposableScope != null)
    {
        disposableScope.Disposed += (o, e) => Forget(entry);
    }
}

public object TryGet(IContext context)
{
    Ensure.ArgumentNotNull(context, "context");
    var scope = context.GetScope();

    var httpScope = scope as HttpContext;
    if (httpScope != null)
    {
        return httpScope.Items[context.Binding];
    }

    lock (_entries)
    {
        foreach (CacheEntry entry in _entries[context.Binding])
        {
            if (!ReferenceEquals(entry.Scope.Target, scope))
                continue;

            if (context.HasInferredGenericArguments)
            {
                var cachedArguments = entry.Context.GenericArguments;
                var arguments = context.GenericArguments;

                if (!cachedArguments.SequenceEqual(arguments))
                    continue;
            }

            return entry.Reference.Instance;
        }

        return null;
    }
}

Another semi-related question: in the Remember function, there was a
lock around a bunch of code, but it seems to me you only really intend
to lock around _entries. I couldn't find any reason to take such a big
lock, so in my version I scoped it to just the addition of an entry
to _entries. Is what I did a mistake?

Phil

Nate Kohari

Nov 23, 2009, 12:53:20 AM
to nin...@googlegroups.com
Karl and Phil:

Thanks for this patch; I've applied it to the git master.

The cache-and-collect idea that drives Ninject 2 was intended to be a
general-purpose solution to replace the crappy behavior classes that
were in Ninject 1. Unfortunately, it does mean that Ninject relies
heavily on the GC to clean up instances, which can result in a choppy
memory graph like the one Karl posted.

Going forward, I think we need to introduce pluggable cache strategies
to allow for overrides in these special-case scenarios, but in the
meantime this handles the most egregious problem. Thanks again!


-Nate

Nate Kohari

Nov 23, 2009, 1:18:25 AM
to nin...@googlegroups.com
Hmm... I looked a little closer at the patch and realized that it's
incomplete. It won't handle bindings to a generic type definition.
(That's what the HasInferredGenericArguments check is all about.) Let me
get some test coverage around this and improve the solution.

-Nate

Phil Haack

Nov 23, 2009, 2:01:05 AM
to nin...@googlegroups.com
Thanks Nate.

Did you see my question about the locking at the bottom of the thread? It seemed to me that the code holds a lock longer than it needs to. I was curious about why that is. Maybe I'm not understanding something.

Phil


Nate Kohari

Nov 23, 2009, 2:01:53 AM
to nin...@googlegroups.com
That was probably just me being overeager with locking. :)

Nate Kohari

Nov 23, 2009, 2:03:02 AM
to nin...@googlegroups.com
By the way, I meant to ask, has anyone tried adjusting the cache pruning
interval to see if that would improve the memory strain? By default it's
set at 30 seconds, but for something like a high-traffic website you
might want it to be much lower.

-Nate

Nate Kohari

Nov 23, 2009, 2:15:59 AM
to nin...@googlegroups.com
I've reverted the patch (but fixed the over-eager locking) for now.
Although the current version might not manage memory very well, I don't
want to introduce sneaky bugs related to generic parameter inference,
which can be really hard to track down. I'll take a closer look at this
tomorrow to see if I can come up with a better solution.

Phil Haack

Nov 23, 2009, 2:17:17 AM
to nin...@googlegroups.com
Hi Nate,

I didn't try this yet. Personally, when adding an object intended for HTTP request scope, I want to let ASP.NET handle the cache through its own management of HttpContext.Items. That way the objects are removed as soon as the request completes rather than on some interval.

Is the cache pruning on a global timer or is it staggered based on the time an item is added to the cache?

One last question: I see that the change you made still places items in _entries even when the object is scoped to HttpContext. That doesn't fix the problem for me. The way I see it, if you're using request scope, perhaps you should let HttpContext.Items itself manage the lifetime of the object.

Perhaps the only objects you need to track in this case are objects that are IDisposable.

Phil

karl

Nov 23, 2009, 10:06:48 AM
to ninject
I shortened it to 10 seconds in one of the initial tests... it pretty
much behaved as you'd expect - the spikes are shorter (and thus less
pronounced). In that sense, the prune timer works, but in my case my
solution is still better.


Nate Kohari

Nov 23, 2009, 10:09:36 AM
to nin...@googlegroups.com
Phil:

I applied, and then un-applied, the patch. :) The latest commit does not
have the patch on it. I agree on storing the items in HttpContext.Items;
I just need to come up with a way to do it that supports generic
parameters. The problem is this: if someone binds to a generic type
definition, like:

Bind(typeof(IRepository<>)).To(typeof(NHibernateRepository<>));

...there can actually be more than one concrete type created for a given
binding. So, when you check to see if an instance has been cached for a
binding, you have to compare the generic arguments of the context in
which the cached instance was activated to the generic arguments of the
"current" context (the one that resulted in the cache lookup). If they
don't match, it's a cache miss, even though there might be a cached
instance for that binding. This is why the cache uses a Multimap as its
data storage instead of a Dictionary.

To fix this, we just need to store a Multimap of instances in
HttpContext.Items, but I'd like to clean the code up a little.
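The shape being proposed can be sketched as follows (names invented for illustration, not the final Ninject code): a single multimap stored under one key in the request's Items, with lookups comparing the generic arguments recorded at activation time.

```python
# Hypothetical single key under which the whole multimap lives in
# HttpContext.Items (the name is invented for this sketch).
CACHE_SLOT = "__ninject_cache__"


def remember(items, binding, generic_args, instance):
    # One binding can map to several instances, one per closed generic
    # type -- hence a multimap (dict of lists), not a plain dict.
    multimap = items.setdefault(CACHE_SLOT, {})
    multimap.setdefault(binding, []).append((tuple(generic_args), instance))


def try_get(items, binding, generic_args):
    for cached_args, instance in items.get(CACHE_SLOT, {}).get(binding, []):
        # Same open-generic binding but a different closed type is a
        # cache miss, even though the binding has cached instances.
        if cached_args == tuple(generic_args):
            return instance
    return None


items = {}  # stands in for HttpContext.Items for one request
remember(items, "IRepository<>", ["Customer"], "customer-repo")
assert try_get(items, "IRepository<>", ["Customer"]) == "customer-repo"
assert try_get(items, "IRepository<>", ["Order"]) is None
```

This preserves the generic-argument check that the original patch dropped, while still letting the request dictionary own every instance's lifetime.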



Thanks,
Nate

Nate Kohari

Nov 23, 2009, 10:11:34 AM
to nin...@googlegroups.com
Karl:

Thanks. I was just curious, really. I'd love to find a better way to do
it, but there's no way to get notifications when the GC runs. I think
the correct solution is to provide scope-aware strategies for special
case scenarios like HttpContext, and then fall back to the
general-purpose strategy for the base case.

-Nate

Phil Haack

Nov 23, 2009, 11:00:36 AM
to <ninject@googlegroups.com>, nin...@googlegroups.com
I believe .net 4 adds the ability to subscribe to GC notifications. :)

So you have that to look forward to. :)

Sent from my mobile phone

Phil Haack

Nov 23, 2009, 11:02:31 AM
to <ninject@googlegroups.com>, nin...@googlegroups.com
Ah, I understand. I was thinking about that this morning. It makes
sense to store just one multimap in HttpContext.Items rather than a
bunch of items in the context. It sounds like the patch was hasty :)

Phil

Sent from my mobile phone
