RavenDB 4.0 on Docker vs. not on Docker


Derek den Haas

Mar 13, 2018, 4:38:50 PM
to RavenDB - 2nd generation document database
I'm currently running RavenDB 4.0 on Linux (Ubuntu 16.04), and it is running perfectly! I should have done this earlier.

When using RavenDB 4.0 on Docker:
Committed memory is always higher than RavenDB's own memory usage, and memory doesn't seem to be released back to RavenDB in time. After a few big patches, committed memory equals (or almost equals) the available memory, at which point the process is on the verge of being killed by the OOM killer.* When it then only indexes, or applies a patch after reaching this limit, RavenDB is killed because it uses more than the allowed amount of memory; when assigning 10GB of memory, I sometimes see it reach 10.5GB before being killed (which is higher than the limit set by Docker itself!).

Now, when running RavenDB 4.0 on Linux (Ubuntu 16.04) with the same available RAM (12.71GB), it's working great. Memory can fill up, but it always keeps some distance from the upper limit, using 12.0GB at most when I put it under pretty much the highest load I can think of. It also returns the memory after a while, which just never happened on the Docker version.

I don't know if there is any planned work on the Docker version (or it might just be the configuration that Kubernetes sets), but I wanted to let you know about the huge difference. When applying a little work to the Docker version, memory builds up and almost never gets released; under a light workload it only survives for a day or a day and a half with a database with 1 simple index and 10GB of memory. It should be easy to mimic this behaviour (at least it is in Kubernetes on GKE, and I think also on Azure and Amazon). I'm currently going to test further on the non-Docker Linux version, which seems to be stable.

* I'm running the nightly Docker image in Kubernetes, with the memory limit and request set to the same value, which sets oom_score_adj to -998 (*1) and should relax the oom_killer as much as possible (*2).

*1 https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/
*2 https://lwn.net/Articles/391222/

Derek den Haas

Mar 13, 2018, 4:42:28 PM
to RavenDB - 2nd generation document database
P.S. For the first time I'm happy to have made the switch: it's so much faster and cleaner, and with the additions to projections I can finally see some light at the end of the tunnel. For now the Docker version is a no-go for me.

On Tuesday, March 13, 2018 at 21:38:50 UTC+1, Derek den Haas wrote:

Derek den Haas

Mar 13, 2018, 6:02:28 PM
to RavenDB - 2nd generation document database
While we're at it, can someone please explain to me how I should read these values:


WS: 10.69GB - Working set (I thought this was the total memory used)
UM: 76.05MB - Unmanaged memory
M: 3.51GB - Managed memory
MP: 13.06GB - Memory mapped (the memory mapped from disk into memory)

How do these relate to the values displayed in the GUI?

On Tuesday, March 13, 2018 at 21:42:28 UTC+1, Derek den Haas wrote:

Oren Eini (Ayende Rahien)

Mar 14, 2018, 2:15:23 AM
to ravendb
We are doing a lot of work on Docker, and the exact version you are using is very important.
4.0.2 had a bunch of fixes around memory utilization in Docker, but you are using the nightly, so that is probably not there.
Are you running a single node or a cluster? Can you provide the exact configuration for the container?

What are the values in:


As well as:

*  /admin/debug/proc/meminfo
*  /admin/debug/proc/stats
*  /admin/debug/proc/status
*  /admin/debug/threads/runaway
*  /admin/debug/memory/smaps
*  /admin/debug/memory/low-mem-log

The underlying problem is that a container shares some of the details with its parent, and some not.
When I ask how much memory there is, depending on how I do it, I can get either the host or the guest values. And it is actually hard to know how much memory is being committed inside the container.
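If it helps to gather those, here is a minimal sketch of dumping the listed endpoints with plain HttpClient; it assumes an unsecured single node listening on http://localhost:8080, so adjust the base address (and add a client certificate) for a secured setup.

```
// Sketch: dump the debug endpoints listed above to local files.
// Assumes an unsecured server at http://localhost:8080; adjust as needed.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class DumpDebugInfo
{
    static async Task Main()
    {
        var endpoints = new[]
        {
            "/admin/debug/proc/meminfo",
            "/admin/debug/proc/stats",
            "/admin/debug/proc/status",
            "/admin/debug/threads/runaway",
            "/admin/debug/memory/smaps",
            "/admin/debug/memory/low-mem-log"
        };

        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:8080") })
        {
            foreach (var endpoint in endpoints)
            {
                var body = await client.GetStringAsync(endpoint);

                // e.g. admin_debug_proc_meminfo.txt
                var fileName = endpoint.Trim('/').Replace('/', '_') + ".txt";
                File.WriteAllText(fileName, body);
                Console.WriteLine($"{endpoint} -> {fileName}");
            }
        }
    }
}
```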



Oren Eini (Ayende Rahien)

Mar 14, 2018, 2:20:26 AM
to ravendb
Working set - how much of the memory used by this process is actually on physical RAM (and not paged).
Unmanaged memory - the unmanaged allocations by ravendb, mostly for doing JSON parsing, reading / writing docs, etc.
Managed memory - Obvious, but this is a bit high. How many indexes do you have here? 
Memory mapped - the size of all the data files

The values on the GUI are a bit different, because they reflect operational behaviors.

The blue 12.71GB is the amount of physical memory on the box.
The white 12.71GB is the commit limit for the system (mem + swap, or the memory limit in a container)
The 11.34 GB is how much is committed (not used, mind you, but how much memory the OS has promised away). If this reaches the commit limit, you get OOM kills on Linux and allocation failures on Windows.
The 10.68 GB is the working set as used by RavenDB. 


Derek den Haas

Mar 14, 2018, 2:51:23 AM
to RavenDB - 2nd generation document database
Thanks, I was trying to connect the two, but they are different.

For now I'll stick with the non-Docker Linux version, since it gives me the stability that Docker couldn't. The stress level it takes before breaking is much higher (currently running 8 heavy indexes on 12 databases with 12GB of memory), which it is handling fine and which couldn't be done on Docker. So I'm finally at a point where I can call it stable. The first post was not really meant to blame the Docker version, more a "I can finally see the real power of RavenDB 4.0", since I really thought it just wasn't stable yet. Testing the non-Docker version gave me a whole other perspective on the database (I wish I had tried it sooner :) ).

We will, however, eventually try to make the switch to the Docker version (since it fits better in our new environment), but first I want to focus on our application.

On Wednesday, March 14, 2018 at 07:20:26 UTC+1, Oren Eini wrote:

Derek den Haas

Mar 14, 2018, 11:45:45 AM
to RavenDB - 2nd generation document database
I randomized the data you've received and imported it 15 times (added some extra random data to some databases, and removed some data from others).

After this was done, I compiled the indexes in the other databases to check stability; after 30 minutes it crashed (well, OK, maybe that was asking way too much of RavenDB). Letting it spin up 3 databases at a time, all indexes were built correctly (no data corruption this time).
I rebooted the server because of the memory that was used (both gauges were red and remained at 14GB / 12GB out of 12.71GB), to cleanly check whether it's better on plain Linux than on Docker:


7 hours later, and only writing 2 tiny documents per minute to the DirectScheduler database, these were the statistics:




I also wanted to include its memory usage from the console and from the statistics, but while writing this, the server crashed after only modifying and adding a few orders:
```
Voron.Exceptions.VoronUnrecoverableErrorException: Error syncing the data file. The last sync tx is 14, but the journal's last tx id is 11, possible file corruption?
   at Voron.Exceptions.VoronUnrecoverableErrorException.Raise(StorageEnvironment env, String message) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Exceptions\VoronUnrecoverableErrorException.cs:line 23
   at Voron.Impl.Journal.WriteAheadJournal.JournalApplicator.SyncOperation.TryGatherInformationToStartSync(Int64& syncCounter) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Impl\Journal\WriteAheadJournal.cs:line 997
   at Voron.Impl.Journal.WriteAheadJournal.JournalApplicator.SyncOperation.SyncDataFile() in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Impl\Journal\WriteAheadJournal.cs:line 857
   at Voron.GlobalFlushingBehavior.SyncEnvironment(EnvSyncReq req) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\GlobalFlushingBehavior.cs:line 227 
```
After that the server became unresponsive (again).

After rebooting, most of it seems fine, though there is still 26% CPU load (about 3% for every active database) without it doing anything: the logs show nothing it's working on, traffic is only 1 request per minute, and indexing is all 0 / 0 / 0. So for huge workloads the non-Docker version beats the Docker version, but for long-running use I'm still having the same issues and am about to give up on it.


On Wednesday, March 14, 2018 at 07:51:23 UTC+1, Derek den Haas wrote:

Arkadiusz Palinski

Mar 15, 2018, 4:50:48 AM
to rav...@googlegroups.com
Hi Derek,

Just to be sure, the import you made (15 times) was to different databases, right?

Arek

Derek den Haas

Mar 15, 2018, 7:04:04 AM
to RavenDB - 2nd generation document database
Yes, it's the database you got, times 15, but with some debtors removed or copied; after that I compiled the indexes around it.

I got a corrupt database again, by the way, but your issue tracker is down. This time it's in an auto-generated index, which shouldn't have received a lowMemoryNotification:

First exception I received:
Raven.Client.Exceptions.RavenException: 'Voron.Exceptions.VoronUnrecoverableErrorException: Asked to load a past the allocated values: 1085845920 from page 536
   at Voron.Exceptions.VoronUnrecoverableErrorException.Raise(StorageEnvironment env, String message) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Exceptions\VoronUnrecoverableErrorException.cs:line 23
   at Voron.Data.RawData.RawDataSection.DirectRead(LowLevelTransaction tx, Int64 id, Int32& size) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Data\RawData\RawDataSection.cs:line 214
   at Voron.Data.Tables.Table.DirectRead(Int64 id, Int32& size) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Data\Tables\Table.cs:line 191
   at Voron.Data.Tables.Table.ReadByKey(Slice key, TableValueReader& reader) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Data\Tables\Table.cs:line 136
   at Raven.Server.Documents.DocumentsStorage.GetTableValueReaderForDocument(DocumentsOperationContext context, Slice lowerId, Boolean throwOnConflict, TableValueReader& tvr) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DocumentsStorage.cs:line 765
   at Raven.Server.Documents.DocumentsStorage.Get(DocumentsOperationContext context, Slice lowerId, Boolean throwOnConflict) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DocumentsStorage.cs:line 713
   at Raven.Server.Documents.DocumentsStorage.Get(DocumentsOperationContext context, String id, Boolean throwOnConflict) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DocumentsStorage.cs:line 705
   at Raven.Server.Documents.Handlers.DocumentHandler.<GetDocumentsByIdAsync>d__4.MoveNext() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\Handlers\DocumentHandler.cs:line 164
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Raven.Server.Documents.Handlers.DocumentHandler.<Get>d__1.MoveNext() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\Handlers\DocumentHandler.cs:line 69
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Raven.Server.Routing.RequestRouter.<HandlePath>d__6.MoveNext() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Routing\RequestRouter.cs:line 97
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Raven.Server.RavenServerStartup.<RequestHandler>d__11.MoveNext() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\RavenServerStartup.cs:line 159
Response.StatusCode - ServiceUnavailable'

Exception in RavenDB itself
Voron.Exceptions.VoronUnrecoverableErrorException: Index points to a non leaf page 0
   at Voron.Exceptions.VoronUnrecoverableErrorException.Raise(StorageEnvironment env, String message) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Exceptions\VoronUnrecoverableErrorException.cs:line 23
   at Voron.Data.BTrees.Tree.SearchForPage(Slice key, TreeNodeHeader*& node) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Data\BTrees\Tree.cs:line 688
   at Voron.Data.BTrees.Tree.DirectRead(Slice key) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Data\BTrees\Tree.cs:line 1145
   at Voron.Impl.Transaction.ReadTree(Slice treeName, RootObjectType type, Boolean isIndexTree, NewPageAllocator newPageAllocator) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Impl\Transaction.cs:line 76
   at Voron.Impl.Transaction.OpenTable(TableSchema schema, Slice name) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Impl\Transaction.cs:line 149
   at Voron.Impl.Transaction.OpenTable(TableSchema schema, String name) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Impl\Transaction.cs:line 134
   at Raven.Server.Documents.Indexes.IndexStorage.UpdateStats(DateTime indexingTime, IndexingRunStats stats) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\Indexes\IndexStorage.cs:line 411
   at Raven.Server.Documents.Indexes.Index.ExecuteIndexing() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\Indexes\Index.cs:line 953

And:
Raven.Server.Exceptions.IndexOpenException: Could not open index from '/mnt/disks/ravendb/Databases/EasyFlor-Fleuren/Indexes/Auto_ListPreOrderGroups_By'Order'AndDebtorsAndHideAndLists'. ---> System.IO.InvalidDataException: Transaction has valid(!) hash with invalid transaction id 83, the last valid transaction id is 79. Journal file /mnt/disks/ravendb/Databases/EasyFlor-Fleuren/Indexes/Auto_ListPreOrderGroups_By'Order'AndDebtorsAndHideAndLists/Journals/0000000000000000011.journal might be corrupted
   at Voron.Impl.Journal.JournalReader.TryReadAndValidateHeader(StorageEnvironmentOptions options, TransactionHeader*& current) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Impl\Journal\JournalReader.cs:line 337
   at Voron.Impl.Journal.JournalReader.ReadOneTransactionToDataFile(StorageEnvironmentOptions options) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\Impl\Journal\JournalReader.cs:line 60
   at Voron.Impl.Journal.WriteAheadJournal.RecoverDatabase(TransactionHeader* txHeader, Action`1 addToInitLog)
   at Voron.StorageEnvironment.LoadExistingDatabase() in C:\Builds\RavenDB-4.0-Nightly\src\Voron\StorageEnvironment.cs:line 260
   at Voron.StorageEnvironment..ctor(StorageEnvironmentOptions options) in C:\Builds\RavenDB-4.0-Nightly\src\Voron\StorageEnvironment.cs:line 145
   at Raven.Server.Storage.Layout.LayoutUpdater.OpenEnvironmentInternal(DirectoryStorageEnvironmentOptions options) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Storage\Layout\LayoutUpdater.cs:line 36
   at Raven.Server.Storage.Layout.LayoutUpdater.OpenEnvironment(StorageEnvironmentOptions options) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Storage\Layout\LayoutUpdater.cs:line 24
   at Raven.Server.Documents.Indexes.Index.Open(String path, DocumentDatabase documentDatabase) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\Indexes\Index.cs:line 277
   --- End of inner exception stack trace ---
   at Raven.Server.Documents.Indexes.Index.Open(String path, DocumentDatabase documentDatabase) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\Indexes\Index.cs:line 315
   at Raven.Server.Documents.Indexes.IndexStore.OpenIndex(PathSetting path, String indexPath, List`1 exceptions, String name, IndexDefinition staticIndexDefinition, AutoIndexDefinition autoIndexDefinition) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\Indexes\IndexStore.cs:line 940

After that, an out of memory exception followed and the whole server went down.

Again, only 10 documents were added or modified, this time in a collection of 3 documents, where I was first able to do 2 modifications and the third gave me this error. I will post it on issues.hibernating... when it's available again.

On Thursday, March 15, 2018 at 09:50:48 UTC+1, Arkadiusz Palinski wrote:

Derek den Haas

Mar 15, 2018, 7:46:22 AM
to RavenDB - 2nd generation document database
The database was imported into 15 different databases ;) to be 100% clear :)

On Thursday, March 15, 2018 at 12:04:04 UTC+1, Derek den Haas wrote:

Oren Eini (Ayende Rahien)

Mar 15, 2018, 8:21:50 AM
to ravendb
The issues site is up again; we had some network trouble with our ISP.

Derek den Haas

Mar 19, 2018, 9:27:57 AM
to RavenDB - 2nd generation document database
When patching (on Windows, moved again...):

WS:14.7 GBytes|UM:19.24 GBytes|M:5.69 GBytes|MP:29.59 GBytes

UM, which is called NativeMem in the stats view, is a bit high... higher than the 16GB the server has. It started at 80.02MB and gained a few GBs while patching 8,000,000 records.
Also, the system commit limit was 18GB and is now 30GB, which is kind of strange. Furthermore, I had expected RavenDB to stop using memory after 6GB (per the license I have), but it's now up to (almost) the limit of the server itself (not that I'm complaining about getting more than my license gives me).

It is a database of 8,000,000 records with zero indexes, and I used only one patch script, which started at 3000 docs/s but is now down to a whopping 13 docs/s. It's the database you have, times 4.

With a "dynamic" patch script to get the relations of each collection inside the DirectCode-Relations metadata:

                                var command = new StringBuilder(@"
                            var output = [];
                            var convert = function(input) {
                                if(!input)
                                    return;

                                if(Array.isArray(input))
                                {
                                    for(var x = 0; x < input.length; x++) {
                                        convert(input[x]);
                                    }
                                } else {
                                    output.push(input);
                                }
                            }
                        ");

                            foreach (var property in properties)
                            {
                                command.AppendLine($"convert(this.{property.Name});");
                            }

                            command.AppendLine("if(output.length > 0) { this[\"@metadata\"][\"DirectCode-Relations\"] = output; }");
                            var updateQuery = $"FROM {collection.Key} UPDATE {{ {command.ToString()} }}";

It is still running, though; something seems to be leaking?
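For reference, one way a generated query like this could be executed from the .NET side is as a set-based patch operation; this is just a sketch assuming the standard 4.0 client API (the store, collection and property names are whatever you already have):

```
// Sketch: run the generated set-based patch and wait for it to finish.
// Assumes a RavenDB 4.0 .NET client and an already initialized DocumentStore.
using Raven.Client.Documents;
using Raven.Client.Documents.Operations;
using Raven.Client.Documents.Queries;

public static class RelationPatcher
{
    public static void Run(IDocumentStore store, string updateQuery)
    {
        // updateQuery is the "FROM <collection> UPDATE { ... }" string built above.
        var operation = store.Operations.Send(
            new PatchByQueryOperation(new IndexQuery { Query = updateQuery }));

        // Set-based patches run server side; block until the whole batch is done.
        operation.WaitForCompletion();
    }
}
```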


On Thursday, March 15, 2018 at 13:21:50 UTC+1, Oren Eini wrote:

Oren Eini (Ayende Rahien)

Mar 19, 2018, 12:35:12 PM
to ravendb
Can you send the full patch script that you are running? 
It is possible that it is keeping modifications until the end, but it is supposed to work in batches here.

Derek den Haas

Mar 19, 2018, 2:38:05 PM
to RavenDB - 2nd generation document database
FROM purchases UPDATE {
    var output = [];
    var convert = function(input) {
        if(!input)
            return;

        if(Array.isArray(input)) {
            for(var x = 0; x < input.length; x++) {
                convert(input[x]);
            }
        } else {
            output.push(input);
        }
    }

    convert(this.EmployeeId);
    convert(this.ArticleSortId);
    convert(this.SupplierId);
    convert(this.ManufacturerSupplierId);
    convert(this.Packagings);
    convert(this.Features);
    convert(this.ListArticleSortTags);
    convert(this.LocationId);
    convert(this.CurrencyId);
    if(output.length > 0) { this["@metadata"]["My-Relations"] = output; }
}

I hope your collection is big enough; the bigger it is, the slower it gets. It starts off at around 5000 docs/s and ends at 30 docs/s on 800,000 documents.

Create a document with the properties above; fill some with an array ["key/1","key/1","key/1","key/1"], others with "key/1", and maybe make some null or undefined.
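To make the repro concrete, a rough sketch of seeding one such test document with the .NET client; the Purchase class and the values here are only illustrative:

```
// Sketch: store one test document whose relation properties mix arrays,
// single ids, and nulls, so the convert() patch has something to flatten.
using Raven.Client.Documents;

public class Purchase
{
    public object EmployeeId { get; set; }
    public object SupplierId { get; set; }
    public object Packagings { get; set; }
    public object LocationId { get; set; }
}

public static class Seeder
{
    public static void SeedOne(IDocumentStore store)
    {
        using (var session = store.OpenSession())
        {
            session.Store(new Purchase
            {
                EmployeeId = new[] { "key/1", "key/1", "key/1", "key/1" }, // array case
                SupplierId = "key/1",                                      // single id case
                Packagings = null,                                         // null case
                LocationId = "key/1"
            });
            session.SaveChanges();
        }
    }
}
```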

On Monday, March 19, 2018 at 17:35:12 UTC+1, Oren Eini wrote:

Arkadiusz Palinski

Mar 20, 2018, 4:35:33 AM
to RavenDB - 2nd generation document database
Thanks Derek. I've created the following ticket for tracking this: http://issues.hibernatingrhinos.com/issue/RavenDB-10775

Arkadiusz Palinski

Mar 21, 2018, 6:30:00 AM
to RavenDB - 2nd generation document database
The patching issue has been fixed and the fix is available in the latest nightly.

Derek den Haas

Mar 21, 2018, 10:03:10 AM
to RavenDB - 2nd generation document database
I already have that version up and running, since you also fixed the indexes that were waiting on each other to finish, and some other related memory issues (10732: low memory calculated from smaps on Linux, 10775: this issue, 10777: low memory halts indexing), which seems to solve most of the problems. I can't wait to try it on Linux, though Windows currently hasn't thrown any data corruption at me, while on Linux it was a whole different story, so I'm sticking with what's currently stable.

Anyhow, thanks for addressing those issues! Any leads on the Linux data corruption? Isn't it related to low memory and throwing away memory without writing it to disk? I saw it happen almost instantly when adding or removing docs from the DB, while on Windows I didn't see any of those issues (I never had the catastrophic error there, while on Linux I got it after just inserting a few documents).

On Wednesday, March 21, 2018 at 11:30:00 UTC+1, Arkadiusz Palinski wrote:

Derek den Haas

Mar 21, 2018, 10:10:40 AM
to RavenDB - 2nd generation document database
Whoops, I wasn't that clear in my last sentence. What I mean is: I saw the catastrophic error happen after only a few inserts and updates of documents, but not on Windows, where I was able to test things more thoroughly and still have not seen such an error.

Anyway, thanks for the amazing work! While I'm typing anyway: any idea what the database is doing when idle? It uses between 20 and 30% CPU and the only thing it seems to be doing is raising:
2018-03-21T14:07:55.7105724Z, 927, Information, ServerStore, Raven.Server.NotificationCenter.NotificationsStorage, Saving notification 'AlertRaised/DatabaseTopologyWarning'. (A lot, by the way: ~10 times a second, if not a lot more.)


On Wednesday, March 21, 2018 at 15:03:10 UTC+1, Derek den Haas wrote:

Oren Eini (Ayende Rahien)

Mar 21, 2018, 10:47:22 AM
to ravendb
It looks like something is happening during the low memory event that causes us to do something which, later on in another action, does something bad.
We have been able to narrow it down to some interaction of map-reduce indexes with low memory, but we are still investigating exactly what is going on.

Oren Eini (Ayende Rahien)

Mar 21, 2018, 10:48:13 AM
to ravendb
On idle, RavenDB should only be running the Raft engine and the cluster observer.


The runaway threads debug endpoint should tell you what takes the CPU.

Arkadiusz Palinski

Mar 21, 2018, 11:15:07 AM
to rav...@googlegroups.com
Hi Derek,

Regarding the alerts: what is the alert message you see in the Studio that is constantly updated? There are two options:

1) $"Topology of database '{command.Update.DatabaseName}' was changed",
2) $"Could not reach any node of '{dbName}' database",


Derek den Haas

Mar 21, 2018, 11:50:15 AM
to RavenDB - 2nd generation document database
2, "Could not reach any node of...". I see it happening when the database goes offline (I guess RavenDB is doing this). After that the error appears; mostly I click postpone (for a week), so I don't see it active in my notifications, otherwise it's constantly flashing.

P.S. I'm not using a cluster, so it's correct that it cannot reach any node of ''.

And Oren, it's mostly filled with this (for almost every index, in almost every database):
[
{"Id":604,"ManagedThreadId":780,
"Name":"Indexing of DistributedPerDebtorAll of A-Database",
"StartingTime":"2018-03-21T10:54:31.8392548Z",
"Duration":1978375.0,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:32:58.3750000","PrivilegedProcessorTime":"00:09:40.6093750","UserProcessorTime":"00:23:17.7656250"},
{"Id":6400,"ManagedThreadId":852,"Name":"Indexing of DistributedAll of

Though there are 0 stale indexes and 0 currently indexing. It should be visible on the database I've sent you (I hope): all threads in state Wait, no active threads running. Only the message above.

On Wednesday, March 21, 2018 at 16:15:07 UTC+1, Arkadiusz Palinski wrote:

Derek den Haas

Mar 22, 2018, 2:13:01 AM
to RavenDB - 2nd generation document database
Now when idling, 19 hours later:
18.89GB committed
+10GB swap file (the system commit limit is now at 28GB)
14GB RavenDB working set

It started at 8GB when I left it the other evening. So it's still leaking memory (or whatever you want to call it). If you want, I can add an IP to the firewall so you can inspect it yourself. I will, however, reboot it now to continue development.

On Wednesday, March 21, 2018 at 16:50:15 UTC+1, Derek den Haas wrote:

Oren Eini (Ayende Rahien)

Mar 22, 2018, 2:17:47 AM
to ravendb
Look at the order of the threads; they are sorted by how much CPU time they take.

Oren Eini (Ayende Rahien)

Mar 22, 2018, 2:18:00 AM
to ravendb
That would be great to check, yes.

Derek den Haas

Mar 22, 2018, 4:15:02 AM
to RavenDB - 2nd generation document database
No threads are in a state other than Wait; they seem to be ordered by duration, highest on top.

Which really looks like this:
{"Runaway Threads":[
{"Id":3564,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:14:50.4650049Z","Duration":94359.375,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:01:34.3593750","PrivilegedProcessorTime":"00:00:02.0937500","UserProcessorTime":"00:01:32.2656250"},{"Id":10388,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:14:50.4648283Z","Duration":88125.0,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:01:28.1250000","PrivilegedProcessorTime":"00:00:01.5625000","UserProcessorTime":"00:01:26.5625000"},{"Id":11148,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:14:50.4652097Z","Duration":87296.875,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:01:27.2968750","PrivilegedProcessorTime":"00:00:01.6562500","UserProcessorTime":"00:01:25.6406250"},{"Id":9760,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:14:50.4653695Z","Duration":73437.5,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:01:13.4375000","PrivilegedProcessorTime":"00:00:01.8750000","UserProcessorTime":"00:01:11.5625000"},{"Id":868,"ManagedThreadId":986,"Name":"Unknown","StartingTime":"2018-03-22T07:20:37.9450486Z","Duration":63828.125,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:01:03.8281250","PrivilegedProcessorTime":"00:00:15.0156250","UserProcessorTime":"00:00:48.8125000"},{"Id":11716,"ManagedThreadId":953,"Name":"Unknown","StartingTime":"2018-03-22T07:36:57.1616568Z","Duration":37250.0,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:37.2500000","PrivilegedProcessorTime":"00:00:08.9531250","UserProcessorTime":"00:00:28.2968750"},{"Id":8328,"ManagedThreadId":884,"Name":"Unknown","StartingTime":"2018-03-22T07:37:32.1831080Z","Duration":34781.25,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:34.7812500","PrivilegedProcessorTime":"00:00:08.7500000","UserProcessorTime":"00:00:26.0312500"},{"Id":10960,"ManagedThreadId":969,"Name":"Unknown","StartingTime":"2018-03-22T07:34:38.1569884Z","Duration":31828.125,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:31.8281250","PrivilegedProcessorTime":"00:00:07.4531250","UserProcessorTime":"00:00:24.3750000"},{"Id":8256,"ManagedThreadId":49,"Name":"Indexing of JobAll of DirectScheduler","StartingTime":"2018-03-22T06:15:10.7384731Z","Duration":29718.75,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:29.7187500","PrivilegedProcessorTime":"00:00:22.2187500","UserProcessorTime":"00:00:07.5000000"},
{"Id":4352,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:14:51.4625481Z","Duration":28187.5,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:28.1875000","PrivilegedProcessorTime":"00:00:24.1718750","UserProcessorTime":"00:00:04.0156250"},{"Id":3188,"ManagedThreadId":899,"Name":"Unknown","StartingTime":"2018-03-22T07:42:30.2279940Z","Duration":24656.25,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:24.6562500","PrivilegedProcessorTime":"00:00:06.3750000","UserProcessorTime":"00:00:18.2812500"},{"Id":10996,"ManagedThreadId":12,"Name":"Voron Global Flushing Thread","StartingTime":"2018-03-22T06:14:53.1847622Z","Duration":23921.875,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:23.9218750","PrivilegedProcessorTime":"00:00:02.8281250","UserProcessorTime":"00:00:21.0937500"},{"Id":3824,"ManagedThreadId":31,"Name":"Unknown","StartingTime":"2018-03-22T07:41:38.2675756Z","Duration":22625.0,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:22.6250000","PrivilegedProcessorTime":"00:00:05.8281250","UserProcessorTime":"00:00:16.7968750"},{"Id":10220,"ManagedThreadId":28,"Name":"Unknown","StartingTime":"2018-03-22T07:42:30.2369666Z","Duration":21796.875,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:21.7968750","PrivilegedProcessorTime":"00:00:04.2187500","UserProcessorTime":"00:00:17.5781250"},{"Id":6984,"ManagedThreadId":43,"Name":"Unknown","StartingTime":"2018-03-22T07:40:17.1719097Z","Duration":21593.75,"State":"Running","WaitReason":null,"TotalProcessorTime":"00:00:21.5937500","PrivilegedProcessorTime":"00:00:04.1875000","UserProcessorTime":"00:00:17.4062500"},{"Id":4744,"ManagedThreadId":864,"Name":"Unknown","StartingTime":"2018-03-22T07:42:30.2344286Z","Duration":21437.5,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:21.4375000","PrivilegedProcessorTime":"00:00:05.5312500","UserProcessorTime":"00:00:15.9062500"},{"Id":2004,"ManagedThreadId":64,"Name":"Unknown","StartingTime":"2018-03-22T07:42:11.5852089Z","Duration":21171.875,"State":"Running","WaitReason":null,"TotalProcessorTime":"00:00:21.1718750","PrivilegedProcessorTime":"00:00:04.9218750","UserProcessorTime":"00:00:16.2500000"},{"Id":9644,"ManagedThreadId":914,"Name":"Unknown","StartingTime":"2018-03-22T07:42:30.2312371Z","Duration":18796.875,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:18.7968750","PrivilegedProcessorTime":"00:00:04.4843750","UserProcessorTime":"00:00:14.3125000"},{"Id":11820,"ManagedThreadId":874,"Name":"Unknown","StartingTime":"2018-03-22T07:42:30.2382374Z","Duration":18156.25,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:18.1562500","PrivilegedProcessorTime":"00:00:04.1875000","UserProcessorTime":"00:00:13.9687500"},{"Id":10436,"ManagedThreadId":63,"Name":"Unknown","StartingTime":"2018-03-22T07:42:11.5852911Z","Duration":17593.75,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:17.5937500","PrivilegedProcessorTime":"00:00:04.8437500","UserProcessorTime":"00:00:12.7500000"},{"Id":4468,"ManagedThreadId":15,"Name":"Consensus Leader - A in term 
14","StartingTime":"2018-03-22T06:14:53.3397976Z","Duration":17546.875,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:17.5468750","PrivilegedProcessorTime":"00:00:05.9531250","UserProcessorTime":"00:00:11.5937500"},{"Id":11940,"ManagedThreadId":919,"Name":"Unknown","StartingTime":"2018-03-22T07:42:30.2327155Z","Duration":17109.375,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:17.1093750","PrivilegedProcessorTime":"00:00:04.0312500","UserProcessorTime":"00:00:13.0781250"},{"Id":11204,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:15:01.7204444Z","Duration":16062.5,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:16.0625000","PrivilegedProcessorTime":"00:00:01.0937500","UserProcessorTime":"00:00:14.9687500"},{"Id":9220,"ManagedThreadId":54,"Name":"Indexing of Auto/Jobs/ByCountReducedByState of DirectScheduler","StartingTime":"2018-03-22T06:15:11.5186960Z","Duration":15937.5,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:15.9375000","PrivilegedProcessorTime":"00:00:08.2656250","UserProcessorTime":"00:00:07.6718750"},
{"Id":6288,"ManagedThreadId":281,"Name":"Unknown","StartingTime":"2018-03-22T07:42:30.2297981Z","Duration":14906.25,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:14.9062500","PrivilegedProcessorTime":"00:00:03.4687500","UserProcessorTime":"00:00:11.4375000"},{"Id":1840,"ManagedThreadId":65,"Name":"Unknown","StartingTime":"2018-03-22T07:42:30.2279694Z","Duration":14031.25,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:14.0312500","PrivilegedProcessorTime":"00:00:03.4531250","UserProcessorTime":"00:00:10.5781250"},{"Id":11120,"ManagedThreadId":13,"Name":"RavenDB Tasks Executer","StartingTime":"2018-03-22T06:14:53.1950957Z","Duration":11765.625,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:11.7656250","PrivilegedProcessorTime":"00:00:02.6093750","UserProcessorTime":"00:00:09.1562500"},{"Id":2996,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:15:01.7209556Z","Duration":9890.625,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:09.8906250","PrivilegedProcessorTime":"00:00:01.7031250","UserProcessorTime":"00:00:08.1875000"},{"Id":2908,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:15:01.7207346Z","Duration":9156.25,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:09.1562500","PrivilegedProcessorTime":"00:00:00.9062500","UserProcessorTime":"00:00:08.2500000"},{"Id":3388,"ManagedThreadId":null,"Name":"Unmanaged Thread","StartingTime":"2018-03-22T06:15:01.7200027Z","Duration":9078.125,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:09.0781250","PrivilegedProcessorTime":"00:00:01.7343750","UserProcessorTime":"00:00:07.3437500"},{"Id":11968,"ManagedThreadId":867,"Name":"Logging Thread","StartingTime":"2018-03-22T06:18:23.2317021Z","Duration":8093.75,"State":"Wait","WaitReason":"UserRequest","TotalProcessorTime":"00:00:08.0937500","PrivilegedProcessorTime":"00:00:00.5156250","UserProcessorTime":"00:00:07.5781250"},

It's adding 1% CPU per loaded database (if I disable all databases it's 10% on average), up to 30% when all are enabled, but since it's spawning the errors I mentioned earlier, that might be the problem. When idle, it's only adding 3 docs a minute to a simple map-reduce, which is done within a second (if not much less). To me it seems related to this:

2018-03-21T14:07:55.7105724Z, 927, Information, ServerStore, Raven.Server.NotificationCenter.NotificationsStorage, Saving notification 'AlertRaised/DatabaseTopologyWarning'. 

which is thrown for every offline database, multiple times a second.

On Thursday, March 22, 2018 at 07:18:00 UTC+1, Oren Eini wrote:

Arkadiusz Palinski

Mar 22, 2018, 7:15:50 AM
to rav...@googlegroups.com
Can you send us the full logs once this happens again? I created the following ticket for tracking this: http://issues.hibernatingrhinos.com/issue/RavenDB-10788, so you can attach them there.

Oren Eini (Ayende Rahien)

Mar 22, 2018, 9:52:10 AM
to ravendb
We managed to figure out the underlying cause of the catastrophic error, this will be in the next nightly.
I would really appreciate it if you could test it on both Windows and Linux, to verify that we caught it all

Derek den Haas

Mar 23, 2018, 2:45:41 AM
to RavenDB - 2nd generation document database
I will.

Did you also find what is eating my memory? Leaving it idle for one night (only 13 hours, 5 online databases) eats all the memory and more (Windows creates a paging file). After that my logs only show low memory messages.
The only things running are 4 subscriptions.

On Thursday, March 22, 2018 at 14:52:10 UTC+1, Oren Eini wrote:

Arkadiusz Palinski

Mar 23, 2018, 2:56:02 AM
to rav...@googlegroups.com
Derek,

The fix Oren mentioned isn't included in today's nightly yet. We'll release one a bit later today and I'll let you know.

Regarding memory, what does the admin/debug/memory/stats endpoint say?


Derek den Haas

Mar 23, 2018, 3:11:05 AM
to RavenDB - 2nd generation document database
I can tell you tomorrow (about the memory issue); until then I have to reset it each and every day (yes, I know, you map files into memory, but it's now creating a 10GB page file on my hard disk to map the memory back to disk ;) ). I'll post it here.

I hope to hear from you when you have the nightly, and thanks for letting me know (I also couldn't find any commit related to the issue, so I thought it might be a mix of using smaps on Linux and the other fixes for map/reduce).

On Friday, March 23, 2018 at 07:56:02 UTC+1, Arkadiusz Palinski wrote:

Arkadiusz Palinski

Mar 23, 2018, 10:17:19 AM
to rav...@googlegroups.com
It's available in 4.0.3-nightly-20180323-1248


Derek den Haas

Mar 23, 2018, 11:04:56 AM
to RavenDB - 2nd generation document database
Here is the memory log, same database as you have, times 20. 10 hours later it's storing memory on disk (page file), this time only a few GBs, but if I wait longer it's going to get worse (about 1GB disappears from memory every hour).
Besides modifying a few documents we aren't doing anything, and no indexes are stale. It is, however, constantly doing something (hence the CPU load), so they might be related (which, by the way, is not the "database is unreachable" message; I found out why it's throwing that and how to stop RavenDB from doing it, see RavenDB-10788).

I also included the admin log on RavenDB-10788. It seems to me it happens because there are databases not loaded at startup, which therefore remain in state R (Rehab), and that throws the line I mentioned.
They disappear after manually activating all the databases (by clicking on them in the RavenDB Studio).

On Friday, March 23, 2018 at 15:17:19 UTC+1, Arkadiusz Palinski wrote:
Attachment: THISTOUPLOADONGOOGLE.txt

Derek den Haas

Mar 25, 2018, 2:27:14 PM
to RavenDB - 2nd generation document database
Docker still crashed on me (on Linux). I restored the database you got, 5 times, and now get the following error on one of the 5 databases:

Raven.Client.Exceptions.Database.DatabaseLoadFailureException: Failed to start database EasyFlor-Instance04
At /ravendb/Databases/EasyFlor-Instance04 ---> System.ArgumentException: An item with the same key has already been added. Key: Debetinvoices
   at System.ThrowHelper.ThrowAddingDuplicateWithKeyArgumentException(Object key)
   at System.Collections.Generic.Dictionary`2.TryInsert(TKey key, TValue value, InsertionBehavior behavior)
   at Raven.Server.Documents.DocumentsStorage.ReadCollections(Transaction tx) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DocumentsStorage.cs:line 1613
   at Raven.Server.Documents.DocumentsStorage.Initialize(StorageEnvironmentOptions options) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DocumentsStorage.cs:line 304
   at Raven.Server.Documents.DocumentsStorage.Initialize(Boolean generateNewDatabaseId) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DocumentsStorage.cs:line 241
   at Raven.Server.Documents.DocumentDatabase.Initialize(InitializeOptions options) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DocumentDatabase.cs:line 271
   at Raven.Server.Documents.DatabasesLandlord.CreateDocumentsStorage(StringSegment databaseName, RavenConfiguration config) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DatabasesLandlord.cs:line 544
   --- End of inner exception stack trace ---
   at Raven.Server.Documents.DatabasesLandlord.CreateDocumentsStorage(StringSegment databaseName, RavenConfiguration config) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DatabasesLandlord.cs:line 561
   at Raven.Server.Documents.DatabasesLandlord.ActuallyCreateDatabase(StringSegment databaseName, RavenConfiguration config) in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DatabasesLandlord.cs:line 497
   at Raven.Server.Documents.DatabasesLandlord.<>c__DisplayClass29_0.<CreateDatabaseUnderResourceSemaphore>b__0() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Documents\DatabasesLandlord.cs:line 449
   at System.Threading.Tasks.Task`1.InnerInvoke()
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Raven.Server.Routing.RouteInformation.<UnlikelyWaitForDatabaseToLoad>d__14.MoveNext() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Routing\RouteInformation.cs:line 120
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Raven.Server.Routing.RouteInformation.<WaitForDb>d__19.MoveNext() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Routing\RouteInformation.cs:line 158
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Raven.Server.Routing.RequestRouter.<HandlePath>d__6.MoveNext() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\Routing\RequestRouter.cs:line 63
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Raven.Server.RavenServerStartup.<RequestHandler>d__11.MoveNext() in C:\Builds\RavenDB-4.0-Nightly\src\Raven.Server\RavenServerStartup.cs:line 159

Imported it using the "Import from another RavenDB server", unable to access the database completely. 
P.s. server crashed due to an out of memory. Currently retrying, to see if a fixed memory limit will solve this (this time kubernetes was reclaiming memory).
You'll get the results tomorrow, hopefully it will stick this time.

On Friday, March 23, 2018 at 16:04:56 UTC+1, Derek den Haas wrote:

Oren Eini (Ayende Rahien)

Mar 25, 2018, 10:42:28 PM
to ravendb
Okay, that is an interesting failure.
To start with, a very easy one (the previous issue was subtle): it looks like you have multiple collections with the same name, differing only by case?

Derek den Haas

Mar 26, 2018, 1:09:18 AM
to RavenDB - 2nd generation document database
I've reimported it, and now the database is running fine, so I guess it's not the lower/uppercase issue with the same collection, since it's the same data; this time the process wasn't killed. The only thing is, memory is almost full, using 8.65 out of the 9.76GB available, so it is only indexing 36 docs a second.

On Windows it goes a lot faster, indexing 1000 docs/s, so it also seems not to release enough memory to index at "full" speed, though Windows does put some unused memory in the page file (Linux doesn't have one here).

So, the good: it hasn't crashed. The bad: these 5 databases normally finish indexing in about 4 hours (total) on a same-spec PC on Windows, while on Linux we are now 10 hours in and your stats are telling me it will be done in 1 month and 15 days.

You can investigate the db yourself if you want, since I don't know where to start.

P.S. I'm 100% sure the database is the same (first import and second import), so the collections should be identical, though the first import ended in OOM (due to my configuration) and the second is still running on very low memory (too low to do anything).

On Monday, March 26, 2018 at 04:42:28 UTC+2, Oren Eini wrote:

Derek den Haas

Mar 26, 2018, 6:33:38 AM
to RavenDB - 2nd generation document database
Looks like I wasn't waiting long enough; it's not working now (same database). I'll check what's up with the collections.

On Monday, March 26, 2018 at 07:09:18 UTC+2, Derek den Haas wrote:

Oren Eini (Ayende Rahien)

Mar 26, 2018, 8:31:05 AM
to ravendb
What db is that? The one you sent us?
If you restart the db after the import, does this reproduce?

Derek den Haas

Mar 27, 2018, 1:58:57 AM
to RavenDB - 2nd generation document database
Yes, it was restarted a bunch of times. It's an import using "from another RavenDB instance", from a RavenDB 4.0 nightly 20180324 (Windows) to a 20180325 (Linux). On Windows nothing is wrong; on Linux there is. I can send it over to you by the end of the day.

On Monday, March 26, 2018 at 14:31:05 UTC+2, Oren Eini wrote:


Oren Eini (Ayende Rahien)

Mar 27, 2018, 2:02:46 AM
to ravendb
That would be great, thanks
