Investigating Memory Leak - Why does spark have so many allocations outstanding


Steve Stevenson

Apr 20, 2011, 6:39:08 PM4/20/11
to Spark View Engine Dev
MT               Count      TotalSize    Class Name
000007ff004af518   216,836    5,204,064  Spark.Parser.Markup.SparkTokenType
000007ff0044f698   198,586    7,943,440  Spark.Parser.Code.Snippet
000007ff012d52a8   216,836    8,673,440  Spark.Parser.Paint`1[[Spark.Parser.Markup.SparkTokenType, Spark]]
000007ff00abdcc0   305,395    9,772,640  Spark.Parser.PaintLink
000007ff004ac690   730,294   35,054,112  Spark.Parser.Position
000007fef885ae68   370,517   37,697,192  System.Object[]
000007fef8861ea8    22,677   39,686,304  System.Collections.Hashtable+bucket[]
0000000003774500       553   86,123,936  Free
000007fef8856960 1,447,384  130,933,664  System.String
000007fef8860bb0    14,249  163,950,272  System.Byte[]


Usually things like this stick out like a sore thumb. I am trying to
understand why Spark.Parser.Position has so many allocations
outstanding. Has anyone else seen this behavior in dumps?

Louis DeJardin

Apr 20, 2011, 7:24:02 PM4/20/11
to Steve Stevenson, Spark View Engine Dev
That's all of the parse tree state created as the file is crunched...
Probably kept resident by a reference that could be traced back to the
compiled view holder entry.

There are a few members that could be nulled after compilation that
would let all of this be collected. Wouldn't apply to people with
precomp views, of course.
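
To make the retention pattern concrete, here is a minimal sketch of the idea. The type and member names below are hypothetical illustrations, not Spark's actual internals: if the holder entry for a compiled view keeps references to intermediate parse state, that state stays reachable forever; dropping those references after a successful compile lets the GC reclaim it.

```csharp
using System;

// Hypothetical holder entry for a compiled view. The names here are
// illustrative only; Spark's real types and members differ.
class CompiledViewEntry
{
    public Type CompiledViewType;   // all that rendering actually needs
    public object ParseTree;        // intermediate parser state (Position, Paint, Snippet, ...)
    public object SourceSnippets;   // more intermediate state

    // Called once compilation has succeeded. Nulling these references
    // makes the parse state unreachable, so the GC can collect it.
    public void ReleaseParseState()
    {
        ParseTree = null;
        SourceSnippets = null;
    }
}
```

With precompiled views there is no parse step at runtime, so there is nothing to release; that is why this only matters for views compiled on demand.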

--
Louis - sent from mobile


Rob G

Apr 20, 2011, 7:45:38 PM4/20/11
to spar...@googlegroups.com
I tend to use precompiled views as a rule in production, so I don't tend to run into these issues when profiling, and I don't bother profiling development, so I can't really say I've paid much attention to the parser's efficiency. While it is quick and comparable to pretty much anything out there, views are an order of magnitude quicker when precompiled, so I would recommend precompiling anyway.

We just have it as part of our build script for deployment to staging and production and never really think about it again.

Bill Barry

Apr 21, 2011, 9:05:17 AM4/21/11
to spar...@googlegroups.com
Is there a way to get a best-of-both-worlds approach?

I'd like to use precompiled views, but our application needs to support full customization. To do this we have two view locations:
~/Views/Controller/Action.spark.format
and
~/Customized/Views/Controller/Action.spark.format

where format is html, excel, or pdf (or technically json, but there is an automatic handler for that that we don't ever need to override).

It would be nice if we could fall back to a compiled view when a customized one does not exist.
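
One way to sketch that fallback logic; this is purely illustrative and the paths, folder layout, and resolver shape are assumptions from the post above, not Spark's actual view-folder API:

```csharp
using System.IO;

// Hypothetical resolver: prefer a customized on-disk template when one
// exists, otherwise fall back to the default location, which a build
// step would have precompiled.
static class ViewResolver
{
    public static string Resolve(string root, string controller, string action, string format)
    {
        string fileName = action + ".spark." + format;

        // Customized templates override precompiled views and are
        // compiled on demand when present.
        string customized = Path.Combine(root, "Customized", "Views", controller, fileName);
        if (File.Exists(customized))
            return customized;

        // No override: use the standard location (served precompiled).
        return Path.Combine(root, "Views", controller, fileName);
    }
}
```

The key design point is that the existence check happens per-request (or is cached with a file watcher), so dropping a file into `~/Customized/Views/...` takes effect without redeploying the precompiled assembly.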

Rob G

Apr 23, 2011, 7:42:48 AM4/23/11
to spar...@googlegroups.com
Hey Bill,

I don't see why not - it would just require some aggressive profiling and code reference clean-up. If someone wants to have a crack at it, by all means; otherwise it's probably best to log it as an issue, which will put it on the radar for us to get to.

Cheers,
Rob