Leak in 0.6 and 0.7


Dave Grijalva

May 27, 2011, 6:47:38 PM
to doozer
There appears to be a memory leak in the current versions of doozer.
If you start the service and do nothing, you can watch the memory
usage slowly rise. If you want to see it happen more quickly,
increase the pulse rate:
> doozerd -pulse=0.01

This will cause the applied value to iterate about 100x faster. You
can watch memory usage rise by several megabytes per minute.

Keith Rarick

May 28, 2011, 7:17:58 PM
to doo...@googlegroups.com
This isn’t a leak; it’s just doozerd using a lot of memory. Memory use grows steadily until rev 360,000, which is when doozerd 0.6 and 0.7 start deleting history. After that the process keeps only the last 360,000 revs, and memory use is roughly constant.

This is obviously a large constant factor of overhead, and we want to reduce it. In the meantime, we’ve changed the default history size to only 2000 revs and added a flag to change that number.

kr

Dave Grijalva

May 31, 2011, 3:39:24 PM
to doozer
Understood. Is this a one time hit of 1.6GB (for the 360,000 config)
or would that be per key? Is there an equation to estimate the memory
usage as the number of entries/revisions grows?

-dave


Keith Rarick

May 31, 2011, 6:27:02 PM
to doo...@googlegroups.com
On Tue, May 31, 2011 at 12:39 PM, Dave Grijalva <dgri...@ngmoco.com> wrote:
> Understood.  Is this a one time hit of 1.6GB (for the 360,000 config)
> or would that be per key?

Doozer shares as much of the data structure as possible between
revisions, so the overhead should be about the same regardless
of how many files exist in total.
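The sharing Keith describes is the classic persistent-data-structure trick: each revision returns a new root, but only the nodes on the path to the change are copied, so unchanged subtrees are shared by pointer between revisions. A minimal sketch (illustrative only, not doozerd's actual tree):

```go
package main

import "fmt"

// node is one entry in an immutable binary search tree.
// Trees are never mutated in place; set returns a new root.
type node struct {
	key, val    string
	left, right *node
}

// set returns a new tree with key bound to val, copying only the
// nodes on the path from the root to the changed key.
func set(n *node, key, val string) *node {
	if n == nil {
		return &node{key: key, val: val}
	}
	c := *n // copy this node only; children are shared by default
	switch {
	case key < n.key:
		c.left = set(n.left, key, val)
	case key > n.key:
		c.right = set(n.right, key, val)
	default:
		c.val = val
	}
	return &c
}

// get looks up key in a given revision's tree.
func get(n *node, key string) (string, bool) {
	for n != nil {
		switch {
		case key < n.key:
			n = n.left
		case key > n.key:
			n = n.right
		default:
			return n.val, true
		}
	}
	return "", false
}

func main() {
	rev1 := set(set(set(nil, "m", "1"), "a", "1"), "z", "1")
	rev2 := set(rev1, "a", "2")           // a new revision
	fmt.Println(rev1.right == rev2.right) // untouched subtree is shared
	v1, _ := get(rev1, "a")
	v2, _ := get(rev2, "a")
	fmt.Println(v1, v2) // the old revision still sees the old value
}
```

Because each new rev copies only a path (logarithmic in the tree size), the per-revision overhead is roughly independent of the total number of files, which is why the cost scales with the history length rather than with the keyspace.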

> Is there an equation to estimate the memory
> usage as the number of entries/revisions grows?

Not that I know of. The memory use is going to change significantly
anyway, as we optimize.

kr

Prat

Dec 5, 2014, 4:49:03 AM
to doo...@googlegroups.com, k...@xph.us
What exactly is it storing for the memory to grow so high? 