> Someone else had a similar issue the other week; take a look here:
> http://groups.google.com/group/ravendb/browse_thread/thread/eb24bcc97a1fa2b.
I guess that was me.
> using (DocumentCacher.SkipSettingDocumentsInDocumentCache())
> {
> }
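If that workaround applies here, it would presumably wrap the large query like this (a sketch only; the Order type and the store/session setup are assumed for illustration, not taken from the thread):

```csharp
// Sketch: bypass the document cache while running a large query,
// using the workaround quoted above. Store setup is assumed.
using (DocumentCacher.SkipSettingDocumentsInDocumentCache())
using (var session = store.OpenSession())
{
    var batch = session.Query<Order>().Take(1024).ToList();
    // Documents materialized inside this block are not added
    // to the global document cache.
}
```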
And you can now configure the cache size and sweep interval with the
MemoryCacheLimitPercentage and MemoryCacheLimitCheckInterval settings.
But my problem was with querying the docs: I queried them so fast that the
cache filled to its limit before the cache sweep interval could kick in
(2 minutes by default).
>> The application is doing constant writes to the DB, approximately it
>> is inserting around 10 documents per minute, this document collection
>> has indexes.
>> I am using build 412.
Build 412 is from 2011-07-28 19:00, so with 10 docs/min there can't be
much more than 14400 documents.
>> The memory starts at 50K and grows past 6 GB and even more.
Do you have super-large docs?
Are you disposing your sessions? (A session will cache every single
inserted/queried doc!)
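For completeness, the usual pattern is one short-lived session per unit of work, disposed via using so its internal cache is released promptly. A minimal sketch (the Order class and the store are assumed):

```csharp
// Sketch: session-per-unit-of-work. Disposing the session frees
// every document it tracked (inserted or queried).
using (var session = store.OpenSession())
{
    session.Store(new Order { /* ... */ });
    session.SaveChanges();
} // Dispose() here releases the session's internal cache
```

A session that lives for the whole service lifetime will accumulate every tracked document, which matches the growth described above.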
Tobias
> I am also querying a large document collection every minute, the same
> collection into which I insert 10 documents every minute.
How much is "large"?
> What do this variables do and how could they help?
/// <summary>
/// Percentage of physical memory used for caching
/// Allowed values: 0-99 (0 = autosize)
/// Default: 99 (or value provided by system.runtime.caching app config)
/// </summary>
public long MemoryCacheLimitPercentage { get; set; }
/// <summary>
/// Interval for checking the memory cache limits
/// Allowed values: max precision is 1 second
/// Default: 00:02:00 (or value provided by system.runtime.caching app config)
/// </summary>
public TimeSpan MemoryCacheLimitCheckInterval { get; set; }
Set MemoryCacheLimitCheckInterval to TimeSpan.FromSeconds(30) and
MemoryCacheLimitPercentage to 50 and see if this makes a difference.
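Assuming the embedded client, those two settings could be applied on the store's configuration before Initialize(). A sketch only, not verified against build 412:

```csharp
// Sketch: tightening the document cache limits on an embedded
// store, using the setting names shown above.
var store = new EmbeddableDocumentStore { DataDirectory = "Data" };

// Sweep the cache every 30 seconds instead of the 2-minute default.
store.Configuration.MemoryCacheLimitCheckInterval = TimeSpan.FromSeconds(30);

// Cap the cache at 50% of physical memory instead of the 99% default.
store.Configuration.MemoryCacheLimitPercentage = 50;

store.Initialize();
```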
Tobias
> Well, the collection starts with no documents and we are inserting 10
> documents or more every minute. The application is a service and will
> run indefinitely, so the collection will grow forever.
>
> Will this be an issue?
Usually not. The memory consumption should be more or less constant. I'm
running RavenDB embedded on 2GB machines (sometimes even less) and haven't
had any problems yet with bulk-inserting 100,000 docs (at least not with
the recent builds).
Can you reproduce this in a unit test or a small console app, which you
could share?
Tobias
> I however had MaxCacheSize setup to 256MB, why is this value getting ignored?
How? There's no setting named "MaxCacheSize" that I'm aware of.
> Also on MemoryCacheLimitPercentage, what will 0 (autosize) do exactly?
I'm not sure how MS handles this exactly. This is all I know about it:
http://msdn.microsoft.com/en-us/library/dd941874.aspx
> Many thanks for your prompt help,
You're welcome.
Tobias
The settings described at http://ravendb.net/faq/low-memory-footprint were ignored.
Sent from my iPhone
> I am referring to the settings described here:
>
> http://ravendb.net/faq/low-memory-footprint
Ah, OK: CacheSizeMax vs. MaxCacheSize :-)
Those settings refer to the Esent storage layer only. There's more
caching happening at higher levels.
Tobias
I will test with the adjusted settings and let you know.
Regards