RavenDB build 3684 - memory issues after documents removed


Eric Cotter

May 28, 2015, 3:09:12 PM
to rav...@googlegroups.com
Hello,

Following some questions from Oren, I am using the latest build to see whether RavenDB can handle a high volume (hundreds of thousands a day) of small (4 KB or less) documents for NServiceBus sagas.

The machine runs up to about 14 GB of RAM with only a few thousand documents (around 3,000 in total) across the various databases. I'm using NServiceBus with sagas, so we have about 32 databases on this one box. For a few of them the document count gets as high as 2,000 at any given point; overall, most of the databases stay low, maybe a couple of hundred documents. The documents are very small, just simple properties.
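For illustration, a saga data document here is nothing more than a small flat class, along the lines of the sketch below (the class and property names are invented; it assumes the NServiceBus 4.x/5.x IContainSagaData contract):

using System;
using NServiceBus.Saga; // IContainSagaData lives here in NServiceBus 4.x/5.x

// Hypothetical saga data class, shown only to illustrate the document shape.
public class OrderTrackingSagaData : IContainSagaData
{
    // Required on every saga data document.
    public Guid Id { get; set; }
    public string Originator { get; set; }
    public string OriginalMessageId { get; set; }

    // "Just simple properties" - the serialized document stays well under 4 KB.
    public string OrderNumber { get; set; }
    public DateTime ReceivedAtUtc { get; set; }
    public bool PaymentReceived { get; set; }
}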

We keep having to recycle the instance because it runs away with RAM over a day or two.

We have tried tweaking some of the cache parameters, but it seems to have no effect. The only thing running on this Windows box is Raven.
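For context, cache tuning in this version goes through appSettings in Raven.Server.exe.config, roughly along these lines. The key names below are assumptions based on the RavenDB configuration docs and the values are only examples, so verify both against the documentation for build 3684:

<!-- Illustrative only: cache-related settings in Raven.Server.exe.config. -->
<appSettings>
  <!-- Upper bound for the document cache, in MB (assumed key name). -->
  <add key="Raven/MemoryCacheLimitMegabytes" value="512" />
  <!-- Or cap the cache as a percentage of physical memory (assumed key name). -->
  <add key="Raven/MemoryCacheLimitPercentage" value="10" />
  <!-- Maximum Esent database cache size, in MB (assumed key name). -->
  <add key="Raven/Esent/CacheSizeMax" value="1024" />
</appSettings>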

I think we're going to have to look at alternatives, such as SQL persistence or rolling our own persistence layer for NServiceBus sagas, if we can't get this configured correctly.
We did not have this issue with our earlier deployment running standalone version 2.x.

License Status: Commercial - Subscription, Server Build #3684, Client Build #3684


{
  "ServerName": null,
  "TotalNumberOfRequests": 27076865,
  "Uptime": "6.21:53:35.4134310",
  "Memory": {
    "DatabaseCacheSizeInMB": 50.39,
    "ManagedMemorySizeInMB": 12725.32,
    "TotalProcessMemorySizeInMB": 13107.39
  },
  "LoadedDatabases": [
    {
      "Name": null,
      "LastActivity": "2015-05-28T18:57:11.5205729",
      "TransactionalStorageAllocatedSize": 1056768,
      "TransactionalStorageAllocatedSizeHumaneSize": "1.01 MBytes",
      "TransactionalStorageUsedSize": 1048576,
      "TransactionalStorageUsedSizeHumaneSize": "1,024 KBytes",
      "IndexStorageSize": 2759,
      "IndexStorageHumaneSize": "2.69 KBytes",
      "TotalDatabaseSize": 1059527,
      "TotalDatabaseHumaneSize": "1.01 MBytes",
      "CountOfDocuments": 23,
      "CountOfAttachments": 0,
      "DatabaseTransactionVersionSizeInMB": 0,
      "Metrics": {
        "DocsWritesPerSecond": 0,
        "IndexedPerSecond": 0,
        "ReducedPerSecond": 0,
        "RequestsPerSecond": 0.086,
        "Requests": {
          "Type": "Meter",
          "Count": 1856,
          "MeanRate": 0.003,
          "OneMinuteRate": 0.308,
          "FiveMinuteRate": 0.1,
          "FifteenMinuteRate": 0.039
        },
        "RequestsDuration": {
          "Type": "Historgram",
          "Counter": 1855,
          "Max": 3167,
          "Min": 0,
          "Mean": 6.31644204851752,
          "Stdev": 79.24791181679804,
          "Percentiles": {
            "50%": 3,
            "75%": 3,
            "95%": 5,
            "99%": 94.20000000000073,
            "99.9%": 3097.4870000000087,
            "99.99%": 3167
          }
        },
        "StaleIndexMaps": {
          "Type": "Historgram",
          "Counter": 996,
          "Max": 0,
          "Min": 0,
          "Mean": 0,
          "Stdev": 0,
          "Percentiles": {
            "50%": 0,
            "75%": 0,
            "95%": 0,
            "99%": 0,
            "99.9%": 0,
            "99.99%": 0
          }
        },
        ...
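A small sketch of how this growth can be tracked over time instead of via one-off snapshots: poll the endpoint that returns the payload above (assumed here to be /admin/stats on the server URL, with an account that has admin access; URL, path and auth are assumptions to adjust for your setup) and log the Memory figures.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

// Sketch only: poll the server-wide stats endpoint once a minute and log the
// memory counters, so growth over a day or two can be lined up with load.
class RavenMemoryPoller
{
    static void Main()
    {
        Run().GetAwaiter().GetResult();
    }

    static async Task Run()
    {
        // Assumed server URL; add credentials via HttpClientHandler if needed.
        var client = new HttpClient { BaseAddress = new Uri("http://localhost:8080/") };
        while (true)
        {
            var stats = JObject.Parse(await client.GetStringAsync("admin/stats"));
            var memory = stats["Memory"];
            Console.WriteLine("{0:u} managed={1} MB, total={2} MB, doc cache={3} MB",
                DateTime.UtcNow,
                memory["ManagedMemorySizeInMB"],
                memory["TotalProcessMemorySizeInMB"],
                memory["DatabaseCacheSizeInMB"]);
            await Task.Delay(TimeSpan.FromMinutes(1));
        }
    }
}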

Federico Lois

May 28, 2015, 4:01:51 PM
to rav...@googlegroups.com
Assuming the issue is a memory leak, a memory dump would let us find out who is holding the memory hostage. If that is a production environment, I would suggest sending a download link for the dump to the support email.

Sent from my Phone.


Eric Cotter

May 28, 2015, 4:25:41 PM
to rav...@googlegroups.com
Federico,

It is a production DB instance. What is the best way to get you the information you need? I just sent you an email from my corporate account.

:)


Brett Nagy

May 29, 2015, 2:03:35 AM
to rav...@googlegroups.com
We're also seeing memory-leak-type issues: the latest stable release of RavenDB simply eats memory until all the memory on the server is consumed and the Raven process dies.

I haven't taken a memory dump on a prod box before, so I will have to research that first.

Brett

Eric Cotter

May 29, 2015, 2:42:22 PM
to rav...@googlegroups.com
We have sent the process dump in to RavenDB support; I'll let you know when we hear something, Brett.

Eric

Eric Cotter

May 29, 2015, 2:43:18 PM
to rav...@googlegroups.com
Brett,

We just used the Sysinternals ProcDump ;)
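A typical invocation for a full-memory dump, assuming the server process is Raven.Server.exe and the target folder exists, is something like:

procdump -ma Raven.Server.exe C:\dumps\raven-full.dmp

The -ma switch captures all process memory, which is what the heap analysis later in this thread needs.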



Oren Eini (Ayende Rahien)

May 30, 2015, 5:15:27 AM
to ravendb
Or just use Task Manager: right-click the process and select "Create dump file".

Hibernating Rhinos Ltd
Oren Eini | CEO | Mobile: +972-52-548-6969
Office: +972-4-622-7811 | Fax: +972-153-4-622-7811

Eric Cotter

Jun 2, 2015, 1:31:10 PM
to rav...@googlegroups.com
Has anything come of the process dump we provided to Federico last Friday?

I think he said the RavenDB team would be looking at it. We posted the link to the core dump to the RavenDB support address.

Thanks!

Federico Lois

Jun 2, 2015, 1:51:45 PM
to rav...@googlegroups.com
I downloaded it and am looking at it. It is not clear yet what is happening.


Federico Lois

Jun 2, 2015, 4:45:39 PM
to rav...@googlegroups.com
BTW, I didn't mention it, but the number of objects in the dump is massive, to the point that Visual Studio cannot load it. So I am exploring it by hand with WinDbg, which takes a lot of time :(

I have a couple of ideas, but nothing definitive yet, because I am watching data nodes in Gen3 (which is strange, at best).
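For anyone following along with their own dump, the session is essentially the standard WinDbg/SOS workflow (the addresses and method tables are of course dump-specific):

.loadby sos clr               load the SOS extension matching the CLR in the dump
!dumpheap -stat               histogram of managed types by instance count and total size
!dumpheap -mt <method table>  list the individual instances of a suspect type
!objsize <address>            total bytes kept alive through a single instance
!gcroot <address>             find out what is rooting that instance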

Eric Cotter

Jun 2, 2015, 6:14:34 PM
to rav...@googlegroups.com
I had a feeling objects are not being removed on saga MarkAsComplete().
The actual documents listed in the databases are relatively small, but the memory footprint is massive; it's like it's not releasing them.
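For reference, the completion path is the stock NServiceBus one, roughly like the sketch below (types and names invented, assuming the NServiceBus 5.x saga API); once MarkAsComplete() runs, the saga document is supposed to be deleted from RavenDB:

using System;
using NServiceBus;
using NServiceBus.Saga;

// Hypothetical saga, shown only to illustrate where MarkAsComplete() is called.
public class ShipmentSaga : Saga<ShipmentSagaData>,
    IAmStartedByMessages<ShipmentStarted>,
    IHandleMessages<ShipmentDelivered>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<ShipmentSagaData> mapper)
    {
        mapper.ConfigureMapping<ShipmentStarted>(m => m.ShipmentId).ToSaga(s => s.ShipmentId);
        mapper.ConfigureMapping<ShipmentDelivered>(m => m.ShipmentId).ToSaga(s => s.ShipmentId);
    }

    public void Handle(ShipmentStarted message)
    {
        Data.ShipmentId = message.ShipmentId;
    }

    public void Handle(ShipmentDelivered message)
    {
        // Completing the saga should cause the persister to delete the saga
        // document from RavenDB; the suspicion above is that the server keeps
        // holding memory even after these deletes happen.
        MarkAsComplete();
    }
}

public class ShipmentSagaData : IContainSagaData
{
    public Guid Id { get; set; }
    public string Originator { get; set; }
    public string OriginalMessageId { get; set; }
    public string ShipmentId { get; set; }
}

public class ShipmentStarted { public string ShipmentId { get; set; } }
public class ShipmentDelivered { public string ShipmentId { get; set; } }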

Eric


Oren Eini (Ayende Rahien)

Jun 3, 2015, 6:24:05 AM
to ravendb
We are seeing a lot of memory being held by prefetching; it is possible that this is related. We are still looking into what exactly we are seeing there, though.

Can you send the debug info package?
I'm also seeing a relatively high number of prefetching behaviors:

000007fe97b57248       29         3016 Raven.Database.Prefetching.PrefetchingBehavior

Do you use SQL Replication as well?

Hibernating Rhinos Ltd
Oren Eini | CEO | Mobile: +972-52-548-6969
Office: +972-4-622-7811 | Fax: +972-153-4-622-7811


Oren Eini (Ayende Rahien)

Jun 3, 2015, 6:45:26 AM
to ravendb
This is the result from checking one of the prefetching behaviors:

0:000> !objsize 0000000081449580 
sizeof(0000000081449580) = 1136069592 (0x43b70bd8) bytes (Raven.Database.Prefetching.PrefetchingBehavior)


So that is about 1.1 GB held by a single object.
I think we need the debug info package to figure out why.