I've found one potential problem in Raven. I noticed that memory is very quickly consumed due to what I think may be an error in the code that sets the default value (line 53 in InMemoryRavenConfiguration):

    AvailableMemoryForRaisingIndexBatchSizeLimit = Math.Min(768, MemoryStatistics.TotalPhysicalMemory / 2);
The documentation states:

Raven/AvailableMemoryForRaisingIndexBatchSizeLimit
The minimum amount of memory available for us to double the size of InitialNumberOfItemsToIndexInSingleBatch if we need to.
Default: 50% of total system memory
Minimum: 768
I think the code should be Math.Max? Setting a reasonable value in the config (4 GB) makes Raven behave as expected and stop increasing the index batch size when half of the memory is consumed.
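To make the difference concrete (my own illustration, not from the original report), assuming TotalPhysicalMemory is expressed in MB like the 768 constant:

    // On a 16 GB machine:
    var totalPhysicalMemoryMb = 16 * 1024;                               // 16384

    var currentDefault    = Math.Min(768, totalPhysicalMemoryMb / 2);    // 768, for any machine with more than 1.5 GB of RAM
    var documentedDefault = Math.Max(768, totalPhysicalMemoryMb / 2);    // 8192, i.e. half of physical memory

And the workaround of fixing the value in the config would presumably look like this in Raven.Server.exe.config (4 GB written in MB, assuming the setting uses the same unit as the 768 default):

    <appSettings>
      <add key="Raven/AvailableMemoryForRaisingIndexBatchSizeLimit" value="4096"/>
    </appSettings>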
The only config modification we make is to allow “All” for the Anon user. Everything else is straight out of the box, as unzipped from the download. No bundles are enabled, and we create a new DB that we import our dump into.
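(For reference, that change would look roughly like this in Raven.Server.exe.config; the Raven/AnonymousAccess key name is an assumption about the exact setting being referred to, not a quote from the actual config file:)

    <appSettings>
      <add key="Raven/AnonymousAccess" value="All"/>
    </appSettings>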
Brett
--
Re: the root cause, is this something I could try changing on my setup, too? Happy to test / change anything at this point.

Thanks
--
Using Build 2267, I am getting this issue as well. I am using stock RavenDB.Server running as a Windows service with no configuration changes other than setting the port to 20031. The test server is virtualized under VMware. It has 16GB of RAM and 4 cores (2 Intel Xeon E5-2650 @ 2.0GHz with two cores each assigned to the VM) running Windows Server 2008 R2 Standard Edition.

The documents are Accounts, each with a list of Tax Records. The accounts have 34 properties, including a couple of addresses and the list of Tax Records. I am loading 43,829 accounts with 595,692 tax records. This is just one of 14 data loads that I am running to test performance. In total, I have 509,921 accounts with 2,930,220 tax records (about half of the data in our Oracle database).

When I run locally (16GB of RAM with a 256GB SSD, not virtualized), I can load all 14 data loads individually (waiting for the indexes to update between each run) without error. However, if I use BulkInsert locally, I get the "Version store out of memory (cleanup already attempted)" error about halfway through all the loads. When bulk loading, I run each data load manually, but I don't wait for the indexes to finish.

I am working on creating a GitHub repository with a cut-down version of the project to reproduce the problem.

-Weston
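(A rough sketch of the bulk load pattern described above; Account and LoadAccountsFromOracle are placeholder names rather than the actual project code, and the RavenDB 2.5 bulk insert API shape is assumed:)

    using Raven.Client.Document;

    // Bulk load against the test server on port 20031.
    using (var store = new DocumentStore { Url = "http://localhost:20031" }.Initialize())
    using (var bulkInsert = store.BulkInsert())
    {
        foreach (var account in LoadAccountsFromOracle())   // ~43,829 accounts per data load
        {
            bulkInsert.Store(account);                       // each account embeds its list of tax records
        }
    }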
--