RavenDb Memory Increasing Constantly


aa

unread,
Jul 29, 2011, 11:42:53 AM7/29/11
to ravendb
Hello

I am using RavenDB in embedded mode. The process where Raven is
running is constantly growing in memory; it grows indefinitely until
it eventually crashes with an OutOfMemoryException.

The application is doing constant writes to the DB, inserting
around 10 documents per minute, and this document collection
has indexes.

After doing some memory profiling, it seems a Lucene-related Dictionary
is taking all the memory.

I am using build 412.

The memory starts at 50K and grows past 6 GB and beyond.

I am trying to limit this using the following parameters:
<add key="Raven/Esent/CacheSizeMax" value="256"/>
<add key="Raven/Esent/MaxVerPages" value="32"/>

They do not seem to make a difference, however; memory still grows way
beyond 256 MB.

What could be the issue?

Itamar Syn-Hershko

unread,
Jul 29, 2011, 11:57:43 AM7/29/11
to rav...@googlegroups.com
Can you please share the complete profiling findings?

Also, is it possible to create a minimalistic repro for this, i.e. a test case or a sample project?

Matt Warren

unread,
Jul 29, 2011, 12:45:10 PM7/29/11
to ravendb
Someone else had a similar issue the other week; take a look here:
http://groups.google.com/group/ravendb/browse_thread/thread/eb24bcc97a1fa2b.

If you have the same issue, it's because the cache is growing faster
than the GC can free it.

The fix for that issue was to do this:

using (DocumentCacher.SkipSettingDocumentsInDocumentCache())
{
    // ... your reads/writes here ...
}
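In context, the workaround might look roughly like this (a sketch only; it assumes `DocumentCacher` is accessible from your embedded-mode code and that `store` is your initialized `EmbeddableDocumentStore`):

```csharp
// Skip populating the document cache for the duration of this scope,
// so heavy reads/writes don't fill the cache faster than the GC frees it.
using (DocumentCacher.SkipSettingDocumentsInDocumentCache())
{
    using (var session = store.OpenSession())
    {
        // ... bulk inserts or large queries here ...
        session.SaveChanges();
    }
}
```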

Tobi

unread,
Jul 29, 2011, 1:38:10 PM7/29/11
to rav...@googlegroups.com
On 29.07.2011 18:45, Matt Warren wrote:

> Someone else has a similar issue the other week, take a look here
> http://groups.google.com/group/ravendb/browse_thread/thread/eb24bcc97a1fa2b.

I guess that was me.

> using (DocumentCacher.SkipSettingDocumentsInDocumentCache())
> {
> }

And you can now configure the cache size and sweep interval with the
MemoryCacheLimitPercentage and MemoryCacheLimitCheckInterval settings.

But my problem was with querying the docs, and I queried them so fast
that the cache was filled to its limit before the cache sweep interval
could kick in (which is 2 min by default).

>> The application is doing constant writes to the DB, approximately it
>> is inserting around 10 documents per minute, this document collection
>> has indexes.

>> I am using build 412.

Build 412 is from 2011-07-28 19:00, so with 10 docs/min there can't be
much more than 14400 documents.

>> The memory starts at 50K and grows passes the 6Gigs and even more.

Do you have super-large docs?

Are you disposing your sessions? (A session will cache every single
inserted/queried doc!)
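For reference, a minimal sketch of the disposal pattern being described (the `store` and `MyDoc` names are illustrative, not from this thread):

```csharp
// Each unit of work gets its own short-lived session; disposing it
// releases the session's internal cache of stored/queried documents.
using (var session = store.OpenSession())
{
    session.Store(new MyDoc { Name = "example" });
    session.SaveChanges();
} // session (and every doc it tracked) is released here
```

Keeping one long-lived session for a service that runs indefinitely would keep every tracked document alive.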

Tobias


Albert Attia

unread,
Jul 29, 2011, 1:52:05 PM7/29/11
to rav...@googlegroups.com
I don't have very large docs, and I am disposing the session.

I am also querying a large document collection every minute, the same collection into which I insert 10 documents every minute.

What do these variables do, and how could they help?

Let me know
Regards

Tobi

unread,
Jul 29, 2011, 2:03:52 PM7/29/11
to rav...@googlegroups.com
On 29.07.2011 19:52, Albert Attia wrote:

> I am also querying a large document collection every minute, the same
> collection into which I insert 10 documents every minute.

How much is "large"?

> What do this variables do and how could they help?

/// <summary>
/// Percentage of physical memory used for caching
/// Allowed values: 0-99 (0 = autosize)
/// Default: 99 (or value provided by system.runtime.caching app config)
/// </summary>
public long MemoryCacheLimitPercentage { get; set; }

/// <summary>
/// Interval for checking the memory cache limits
/// Allowed values: max precision is 1 second
/// Default: 00:02:00 (or value provided by system.runtime.caching app config)
/// </summary>
public TimeSpan MemoryCacheLimitCheckInterval { get; set; }

Set MemoryCacheLimitCheckInterval to TimeSpan.FromSeconds(30) and
MemoryCacheLimitPercentage to 50 and see if this makes a difference.
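For an embedded store, these could be set in code roughly like this (a sketch; it assumes the `EmbeddableDocumentStore.Configuration` object exposes these properties, per the property declarations quoted above):

```csharp
var store = new EmbeddableDocumentStore
{
    DataDirectory = "Data"
};
// Cap the document cache at ~50% of physical memory and sweep it
// every 30 seconds instead of the 2-minute default.
store.Configuration.MemoryCacheLimitPercentage = 50;
store.Configuration.MemoryCacheLimitCheckInterval = TimeSpan.FromSeconds(30);
store.Initialize();
```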

Tobias

Albert Attia

unread,
Jul 29, 2011, 2:11:27 PM7/29/11
to rav...@googlegroups.com
Well, the collection starts with no documents, and we are inserting 10 documents or even more every minute. The application is a service and will run indefinitely, so the collection will grow forever.

Will this be an issue?

Tobi

unread,
Jul 29, 2011, 2:22:41 PM7/29/11
to rav...@googlegroups.com
On 29.07.2011 20:11, Albert Attia wrote:

> Well the collection starts with no documents are we are inserting 10
> documents or even more every minute, the application is a service and will
> run indefinitely, so the collection will grow indefinitely forever.
>
> Will this be an issue?

Usually not. The memory consumption should be more or less constant. I'm
running RavenDB embedded on 2 GB machines (sometimes even less) and haven't
had any problems yet with bulk-inserting 100,000 docs (at least not with
the recent builds).

Can you reproduce this in a unit test or a small console app, which you
could share?

Tobias

Albert Attia

unread,
Jul 29, 2011, 2:25:39 PM7/29/11
to rav...@googlegroups.com
I will test first with the settings you provided and see if they make a difference; if not, I will create a sample project reproducing the issue.

However, I had MaxCacheSize set to 256 MB; why is this value getting ignored?

Also on MemoryCacheLimitPercentage, what will 0 (autosize) do exactly?

Many thanks for your prompt help,

Regards

Tobi

unread,
Jul 29, 2011, 2:54:56 PM7/29/11
to rav...@googlegroups.com
On 29.07.2011 20:25, Albert Attia wrote:

> I however had MaxCacheSize setup to 256MB, why is this value getting ignored?

How? There's no setting named "MaxCacheSize" that I'm aware of.

> Also on MemoryCacheLimitPercentage, what will 0 (autosize) do exactly?

Not sure how MS handles this exactly. This is all I know about it:

http://msdn.microsoft.com/en-us/library/dd941874.aspx

> Many thanks for your prompt help,

You're welcome.

Tobias

Albert Attia

unread,
Jul 29, 2011, 3:01:08 PM7/29/11
to rav...@googlegroups.com
I am referring to the settings described here:

http://ravendb.net/faq/low-memory-footprint

These were ignored.

Sent from my iPhone

Tobi

unread,
Jul 29, 2011, 3:09:14 PM7/29/11
to rav...@googlegroups.com
On 29.07.2011 21:01, Albert Attia wrote:

> I am referring to the settings described here:
>
> http://ravendb.net/faq/low-memory-footprint

Ah, OK: CacheSizeMax vs. MaxCacheSize :-)

These settings are referring to the Esent storage only. There's more
caching happening at higher levels.

Tobias

Matt Warren

unread,
Jul 29, 2011, 3:15:44 PM7/29/11
to ravendb
> I am referring to the settings described here:
> http://ravendb.net/faq/low-memory-footprint

Those settings are only related to the datastore, so if it's a Lucene
"memory leak" they won't have an effect. Roughly how large are the
"index" folders where Lucene is storing its indexes?

> After doing some memory profiling it seems a Lucene related Dictionary
> is taking all the memory.

Do you happen to know what the variable name or type was?

If the cache settings that Tobi posted don't help, posting a
test repro would be useful.


Albert Attia

unread,
Jul 29, 2011, 4:01:14 PM7/29/11
to rav...@googlegroups.com
Ok thanks

I will test with the adjusted settings and let you know.

Regards

Sent from my iPhone

Ayende Rahien

unread,
Jul 30, 2011, 7:28:49 AM7/30/11
to rav...@googlegroups.com
If you can send us a repro, that would be very useful.
Another thing to consider: by default, temp indexes reside in memory until they pass the 25 MB size. If you keep creating new temporary indexes, that might be why you are seeing increased memory.
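One way to avoid accumulating temporary indexes is to define a static index up front and query against it, so every query shape doesn't spawn a new in-memory temp index. A rough sketch (the `MyDoc` / `MyDocs_ByName` names are illustrative only):

```csharp
// A static index: queries against it reuse one persistent index
// instead of creating dynamic/temporary indexes per query.
public class MyDocs_ByName : AbstractIndexCreationTask<MyDoc>
{
    public MyDocs_ByName()
    {
        Map = docs => from doc in docs
                      select new { doc.Name };
    }
}

// At startup:
IndexCreation.CreateIndexes(typeof(MyDocs_ByName).Assembly, store);
```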

At any rate, I can assure you that this shouldn't be happening, so _something_ is wrong. If you have more info / repro, we will be happy to look into that.

Albert Attia

unread,
Jul 30, 2011, 11:54:35 AM7/30/11
to rav...@googlegroups.com
I left it running at the office. 

I will see on Monday if it crashed or if memory increased significantly.

If it did I will send a test project. 

Regards


Sent from my iPhone

aa

unread,
Aug 1, 2011, 5:26:47 PM8/1/11
to ravendb
The changes did not help; memory grew to over 2 GB over the weekend.
The app did not crash, but the memory grew beyond the established
limits.

I am working on a project that I can send to reproduce this.




Itamar Syn-Hershko

unread,
Aug 1, 2011, 5:55:06 PM8/1/11
to rav...@googlegroups.com
Please send us the repro, as contained as possible. You can send it privately if required.

Ayende Rahien

unread,
Aug 2, 2011, 12:56:22 AM8/2/11
to rav...@googlegroups.com
Albert,
The 2GB limit is for the storage cache, not for the overall memory usage. 


Albert Attia

unread,
Aug 2, 2011, 1:14:19 AM8/2/11
to rav...@googlegroups.com
OK, but I set all the limits mentioned in this thread well below 2 GB.


Sent from my iPad

Ayende Rahien

unread,
Aug 2, 2011, 1:57:57 AM8/2/11
to rav...@googlegroups.com
As Itamar said, we would need to see your full configuration and some way to reproduce your work load.

Albert Attia

unread,
Aug 2, 2011, 8:59:37 AM8/2/11
to rav...@googlegroups.com
I will let you know when it is ready.

Sent from my iPhone