RavenDB 4.0 Docker stability


Derek den Haas

Aug 16, 2017, 10:31:37 AM
to RavenDB - 2nd generation document database
OK, so the new 4.0.0018 build fixed some things for Docker, but it is still having a hard time.

It has now been up for 4 days and has rebooted 69 times (it restarts itself at random intervals because it hits the memory limit). Since it comes back online automatically, that isn't a real problem for me. However, I now also see this error:
An exception of type 'Raven.Client.Exceptions.RavenException' occurred in System.Private.CoreLib.ni.dll but was not handled in user code: 'Voron.Exceptions.VoronUnrecoverableErrorException: Error syncing the data file. The last sync tx is 1608039, but the journal's last tx id is 1607675, possible file corruption?

Which gives me the creeps; losing data...

Adi Avivi

Aug 17, 2017, 10:35:47 AM
to rav...@googlegroups.com, Derek den Haas
Hi,
To reduce memory usage on low-end machines you can use the following setting:
--Storage.ForceUsing32BitsPager=True
This way RavenDB will try to reduce mapped memory usage, and other allocations are smaller as well, so you might not hit the OOM killer on a 1 GB RAM machine.
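For example, appended to the server command line it would look roughly like this (only a sketch; the --ServerUrl value is copied from a typical container setup, and how you wire an extra argument through the Docker image's run script depends on your deployment):

./Raven.Server --ServerUrl=http://0.0.0.0:8080 --Storage.ForceUsing32BitsPager=True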

Regarding the journal Tx Id exception:
1. Can you please 'ls -la' the entire /databases directory recursively? (I especially want to look at the journal files; see the command below.)
2. I see you are at tx #1.6 million. Can you confirm you've done a lot of work (relatively speaking, of course) that matches this number?
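(Regarding point 1) Something like this would do, assuming the default layout where databases live under /databases inside the container; adjust the path to wherever your data directory is actually mounted:

ls -laR /databases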



Derek den Haas

Aug 17, 2017, 12:29:44 PM
to RavenDB - 2nd generation document database, derek...@gmail.com
I've thrown it away; I will fetch it next time. If the Tx counter is also incremented by indexes, then yes, that might explain it. I imported 200,000 documents to test with, and there were some indexes on them. I only inserted ~10 documents and queried maybe 10,000 times.

About the memory: I've set it to 3 GB and it still gets killed. The 64 reboots actually happened while running it at 3 GB, for just one database with 200,000 documents, and it even gets killed overnight... And no, our developers do not work at night, and nothing and nobody else is connected to it at that point.

But I will set up some logging for it. The corruption might be because RavenDB was killed for the 64th time, though even that should not result in a corrupt database.

On Thursday, 17 August 2017 at 16:35:47 UTC+2, Adi Avivi wrote:

Oren Eini (Ayende Rahien)

Aug 17, 2017, 1:29:03 PM
to ravendb, Derek den Haas
Indexes do not cause transactions (they have separate storage for that).
What is it doing when this happens? Is it indexing / replicating / etc?


Oren Eini (Ayende Rahien)

Aug 17, 2017, 1:29:19 PM
to ravendb, Derek den Haas
Does it also correctly report the amount of memory it has available?

Derek den Haas

Aug 17, 2017, 2:28:00 PM
to RavenDB - 2nd generation document database, derek...@gmail.com
It should be doing nothing. There were indexes with errors; I don't know if it gets stuck on that part. It's not replicating (only one RavenDB instance is active). The CPU is also pretty much idle, so I don't suspect it is doing much.

It's doing absolutely nothing when it happens. It eats almost all the memory right at bootup, then stays in that state (consuming 2-3 GB of RAM), and after x hours (x is relatively short, around 8 hours at most) it gets killed by kubernetes/cgroup/whatever memory management it's on. It also happens overnight, when nobody is accessing the server and no active services are hitting it, so it's a complete mystery to me why it's getting killed.
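To pin down who is actually killing it, I plan to check the node's kernel log for OOM-killer entries and compare the cgroup limit with the actual usage; a rough sketch (exact paths may differ on this host OS / under kubernetes):

# on the node: did the kernel OOM killer fire?
dmesg | grep -iE "out of memory|oom|killed process"

# inside the container: what limit does the memory cgroup report, and how close are we?
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/memory.usage_in_bytes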

About the correct size report: yes, when I had it at 2000 MB it reported 1.9x GB RAM, and at 3000 MB it reported 2.9x GB RAM in the console output.

On Thursday, 17 August 2017 at 19:29:03 UTC+2, Oren Eini wrote:

Oren Eini (Ayende Rahien)

Aug 17, 2017, 2:38:17 PM
to ravendb, Derek den Haas
Can you enable the log and see what it is doing? Also, can you check /admin/debug/memory/stats and run "stats" in the console?
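For example (assuming the server is still listening unsecured on port 8080, as in your setup):

curl http://localhost:8080/admin/debug/memory/stats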

Derek den Haas

Aug 17, 2017, 3:13:06 PM
to RavenDB - 2nd generation document database, derek...@gmail.com
I will. Have you already fixed passing ENV statements to RavenDB so they get converted into configuration? Or should I build a Docker image myself? See our last discussion about this topic.

On Thursday, 17 August 2017 at 20:38:17 UTC+2, Oren Eini wrote:

Oren Eini (Ayende Rahien)

Aug 17, 2017, 3:42:16 PM
to ravendb, Derek den Haas
We fixed that, IIRC, but it's not in the released beta yet.

Derek den Haas

Aug 21, 2017, 10:21:32 AM
to RavenDB - 2nd generation document database, derek...@gmail.com
OK, let me get back to the data corruption. I got it broken again, while actually doing not much:

Raven.Client.Exceptions.Server.ServerLoadFailureException: Failed to load system storage
At /var/lib/ravendb/System ---> System.IO.InvalidDataException: Transaction has valid(!) hash with invalid transaction id 950392, the last valid transaction id is 950392. Journal file /var/lib/ravendb/System/0000000000000000963.journal might be corrupted

I'm mounting it on another working system to retrieve some data... Do you need anything other than an "ls -la" of /databases/?

On Thursday, 17 August 2017 at 21:42:16 UTC+2, Oren Eini wrote:

Derek den Haas

Aug 21, 2017, 10:28:56 AM
to RavenDB - 2nd generation document database, derek...@gmail.com
System:
total 27684
drwxr-xr-x 2 root root    20480 Aug 21 14:22 .
drwxr-xr-x 5 root root     4096 Aug 21 14:22 ..
-rw------- 1 root root   524288 Aug 16 19:32 0000000000000000017.recovery
-rw------- 1 root root   524288 Aug 17 00:41 0000000000000000022.recovery
-rw------- 1 root root    65536 Aug 17 00:41 0000000000000000023.recovery
-rw------- 1 root root    65536 Aug 17 10:32 0000000000000000113.recovery
-rw------- 1 root root   524288 Aug 17 13:31 0000000000000000115.recovery
-rw------- 1 root root   524288 Aug 17 13:31 0000000000000000116.recovery
-rw------- 1 root root   524288 Aug 17 17:49 0000000000000000120.recovery
-rw------- 1 root root   524288 Aug 17 17:49 0000000000000000121.recovery
-rw------- 1 root root   524288 Aug 17 21:43 0000000000000000130.recovery
-rw------- 1 root root   524288 Aug 18 00:54 0000000000000000131.recovery
-rw------- 1 root root   524288 Aug 18 00:54 0000000000000000132.recovery
-rw------- 1 root root   524288 Aug 18 05:46 0000000000000000143.recovery
-rw------- 1 root root    65536 Aug 18 10:37 0000000000000000147.recovery
-rw------- 1 root root    65536 Aug 18 17:08 0000000000000000156.recovery
-rw------- 1 root root   524288 Aug 18 21:57 0000000000000000160.recovery
-rw------- 1 root root   524288 Aug 18 21:57 0000000000000000161.recovery
-rw------- 1 root root   524288 Aug 19 01:53 0000000000000000167.recovery
-rw------- 1 root root    65536 Aug 19 01:53 0000000000000000168.recovery
-rw------- 1 root root  4194304 Aug 21 14:11 0000000000000000963.journal
-rw------- 1 root root    65536 Aug 21 14:22 0000000000000000963.recovery
-rw------- 1 root root 16777216 Aug 21 14:22 Raven.voron
-rw------- 1 root root    65536 Aug 21 14:22 compression.0000000000.buffers
-rw------- 1 root root      162 Aug 21 14:10 headers.one
-rw------- 1 root root      162 Aug 21 14:07 headers.two
-rw------- 1 root root    65536 Aug 21 14:22 scratch.0000000000.buffers

Databases/EasyBase:
total 16672
drwxr-xr-x 5 root root     4096 Aug 21 14:07 .
drwxr-xr-x 4 root root     4096 Aug 16 14:40 ..
-rw------- 1 root root    65536 Aug 16 14:48 0000000000000000001.journal
-rw------- 1 root root    65536 Aug 21 14:07 0000000000000000001.recovery
drwxr-xr-x 2 root root     4096 Aug 21 14:07 Configuration
drwxr-xr-x 2 root root     4096 Aug 21 14:07 Indexes
drwxr-xr-x 2 root root     4096 Aug 21 14:07 PeriodicBackupTemp
-rw------- 1 root root 16777216 Aug 16 14:48 Raven.voron
-rw------- 1 root root    65536 Aug 21 14:07 compression.0000000000.buffers
-rw------- 1 root root      162 Aug 21 10:15 headers.one
-rw------- 1 root root      162 Aug 21 14:07 headers.two
-rw------- 1 root root    65536 Aug 21 14:07 scratch.0000000000.buffers

Databases/EasyBase-ef-area51:
total 557280
drwxr-xr-x   5 root root      4096 Aug 21 14:07 .
drwxr-xr-x   4 root root      4096 Aug 16 14:40 ..
-rw-------   1 root root  16777216 Aug 21 13:45 0000000000000000006.journal
-rw-------   1 root root     65536 Aug 21 14:07 0000000000000000006.recovery
drwxr-xr-x   2 root root      4096 Aug 21 14:07 Configuration
drwxr-xr-x 111 root root      4096 Aug 21 14:07 Indexes
drwxr-xr-x   2 root root      4096 Aug 21 14:07 PeriodicBackupTemp
-rw-------   1 root root 536870912 Aug 21 13:45 Raven.voron
-rw-------   1 root root     65536 Aug 21 14:07 compression.0000000000.buffers
-rw-------   1 root root       162 Aug 21 14:07 headers.one
-rw-------   1 root root       162 Aug 21 13:46 headers.two
-rw-------   1 root root  16777216 Aug 16 14:56 recyclable-journal.0000000000000000006
-rw-------   1 root root     65536 Aug 21 14:07 scratch.0000000000.buffers

On Monday, 21 August 2017 at 16:21:32 UTC+2, Derek den Haas wrote:

Oren Eini (Ayende Rahien)

Aug 22, 2017, 9:07:31 AM
to ravendb, Derek den Haas
This makes no sense; you don't have enough journals to get to this tx id.


Derek den Haas

Aug 22, 2017, 9:18:00 AM
to RavenDB - 2nd generation document database, derek...@gmail.com
Since it happened twice... you could join and get a full replication the third time, or inspect the server. I cannot access the whole data directory by default, since kubernetes only mounts it into the appropriate container.

To me it's weird that the database has been written to 900,000 times, while I've only written 1 document (plus 70,000 documents via import), 100 transformers and 70 indexes (yes... 70). If I check the recovery files, I see multiple skips in the numbering.

On Tuesday, 22 August 2017 at 15:07:31 UTC+2, Oren Eini wrote:

Derek den Haas

Aug 24, 2017, 9:19:11 AM
to RavenDB - 2nd generation document database, derek...@gmail.com
RavenException: Voron.Exceptions.VoronUnrecoverableErrorException: Error syncing the data file. The last sync tx is 273966, but the journal's last tx id is 273661, possible file corruption?

On Tuesday, 22 August 2017 at 15:18:00 UTC+2, Derek den Haas wrote:

Derek den Haas

Aug 24, 2017, 9:25:26 AM
to RavenDB - 2nd generation document database, derek...@gmail.com
At /var/lib/ravendb/System ---> System.IO.InvalidDataException: Transaction has valid(!) hash with invalid transaction id 273624, the last valid transaction id is 273613. Journal file /var/lib/ravendb/System/0000000000000000307.journal might be corrupted

total 35988
drwxr-xr-x 2 root root     4096 Aug 24 13:23 .
drwxr-xr-x 5 root root     4096 Aug 24 13:23 ..
-rw------- 1 root root    65536 Aug 21 19:52 0000000000000000017.recovery
-rw------- 1 root root    65536 Aug 22 00:13 0000000000000000025.recovery
-rw------- 1 root root   524288 Aug 22 02:57 0000000000000000027.recovery
-rw------- 1 root root   524288 Aug 22 06:11 0000000000000000030.recovery
-rw------- 1 root root   524288 Aug 22 06:11 0000000000000000031.recovery
-rw------- 1 root root   524288 Aug 22 09:29 0000000000000000058.recovery
-rw------- 1 root root   524288 Aug 22 09:29 0000000000000000059.recovery
-rw------- 1 root root   524288 Aug 22 13:05 0000000000000000087.recovery
-rw------- 1 root root   524288 Aug 22 13:05 0000000000000000088.recovery
-rw------- 1 root root   524288 Aug 22 15:57 0000000000000000093.recovery
-rw------- 1 root root   524288 Aug 22 18:36 0000000000000000095.recovery
-rw------- 1 root root   524288 Aug 22 18:36 0000000000000000096.recovery
-rw------- 1 root root   524288 Aug 22 21:30 0000000000000000102.recovery
-rw------- 1 root root   524288 Aug 22 21:30 0000000000000000103.recovery
-rw------- 1 root root   524288 Aug 23 00:40 0000000000000000104.recovery
-rw------- 1 root root   524288 Aug 23 00:40 0000000000000000105.recovery
-rw------- 1 root root   524288 Aug 23 03:54 0000000000000000131.recovery
-rw------- 1 root root    65536 Aug 23 03:54 0000000000000000132.recovery
-rw------- 1 root root   524288 Aug 23 06:43 0000000000000000151.recovery
-rw------- 1 root root   524288 Aug 23 06:43 0000000000000000152.recovery
-rw------- 1 root root   524288 Aug 23 09:58 0000000000000000175.recovery
-rw------- 1 root root    65536 Aug 23 09:58 0000000000000000176.recovery
-rw------- 1 root root   524288 Aug 23 13:35 0000000000000000178.recovery
-rw------- 1 root root   524288 Aug 23 15:48 0000000000000000179.recovery
-rw------- 1 root root   524288 Aug 23 15:48 0000000000000000180.recovery
-rw------- 1 root root   524288 Aug 23 18:25 0000000000000000181.recovery
-rw------- 1 root root   524288 Aug 23 21:14 0000000000000000182.recovery
-rw------- 1 root root   524288 Aug 23 21:14 0000000000000000183.recovery
-rw------- 1 root root   524288 Aug 24 00:27 0000000000000000207.recovery
-rw------- 1 root root    65536 Aug 24 00:27 0000000000000000208.recovery
-rw------- 1 root root    65536 Aug 24 04:03 0000000000000000236.recovery
-rw------- 1 root root   524288 Aug 24 08:16 0000000000000000267.recovery
-rw------- 1 root root   524288 Aug 24 08:16 0000000000000000268.recovery
-rw------- 1 root root   524288 Aug 24 11:09 0000000000000000291.recovery
-rw------- 1 root root   524288 Aug 24 11:09 0000000000000000292.recovery
-rw------- 1 root root    65536 Aug 24 13:12 0000000000000000306.recovery
-rw------- 1 root root  4194304 Aug 24 13:14 0000000000000000307.journal
-rw------- 1 root root    65536 Aug 24 13:23 0000000000000000307.recovery
-rw------- 1 root root 16777216 Aug 24 13:23 Raven.voron
-rw------- 1 root root    65536 Aug 24 13:23 compression.0000000000.buffers
-rw------- 1 root root      162 Aug 24 13:09 headers.one
-rw------- 1 root root      162 Aug 24 13:12 headers.two
-rw------- 1 root root    65536 Aug 24 13:23 scratch.0000000000.buffers

On Thursday, 24 August 2017 at 15:19:11 UTC+2, Derek den Haas wrote:

Adi Avivi

Aug 24, 2017, 9:43:27 AM
to rav...@googlegroups.com, Derek den Haas
Hi Derek,
This is a good finding.
The error means that the recovered journal skipped transactions.
It is meant to suggest that we might have journal file corruption.
However, this is not a file corruption issue (the error is about skipped TXs with a valid hash, which means this is not a reused journal file).

It might be a problem with reusing journal files.
Another direction to look at is the incremental backup.

Question: do you have incremental backup enabled?



Derek den Haas

Aug 24, 2017, 9:52:04 AM
to RavenDB - 2nd generation document database, derek...@gmail.com
Not that I'm aware of. I only pulled the default Docker image 4.0.00018, added 2 databases, and imported 70,000 documents into one of them (not the System db).

I can't find the incremental backup option in your new 4.0 version.

I just find it weird that the System database is being written to so much. I'm doing next to nothing with it; I'm only querying another database, yet it keeps breaking on the System database.

On Thursday, 24 August 2017 at 15:43:27 UTC+2, Adi Avivi wrote:

Adi Avivi

Aug 24, 2017, 10:15:42 AM
to rav...@googlegroups.com
And the above fix might explain why you had such a big Tx number.

After upgrading to 40018 from the previous build, did you empty the entire server's data directory and start from scratch?


Derek den Haas

Aug 25, 2017, 5:36:08 AM
to RavenDB - 2nd generation document database
Yep, I renewed the whole "disk", since the previous build would not read the disk the second time around (something about being unable to find the key to... read the data). I run it in Docker, so I'm 100% sure it had a new data directory and started over. The first and second errors in this thread were on completely new, empty disks. The third time I got lazy and just deleted the System/ directory to get it working again.

On Thursday, 24 August 2017 at 16:15:42 UTC+2, Adi Avivi wrote:

Adi Avivi

Aug 27, 2017, 4:04:19 AM
to Derek den Haas, rav...@googlegroups.com
I've managed to reproduce it on Docker (with 1 GB RAM, 2 CPUs), and both problems are under investigation:
1. Respecting memory boundaries
2. Invalid journal after a crash

Both problems seem to be related only to running in Docker.

I will let you know when we have progress on this.


Oren Eini (Ayende Rahien)

Aug 27, 2017, 6:12:17 AM
to ravendb, Derek den Haas
For reference, how exactly are you running docker?
On what host and what guest?
Where is the data located?

Adi Avivi

Aug 27, 2017, 6:13:43 AM
to Derek den Haas, rav...@googlegroups.com
In addition, what OS is Docker running on?
And the container? (Is it ubuntu16?)

Also, I see the RavenDB storage path is located at /var/lib/ravendb/System. Is that on purpose?

Derek den Haas

Aug 27, 2017, 6:30:38 AM
to Adi Avivi, rav...@googlegroups.com

It's running on Google Cloud Container Engine, so a custom Linux version based on the Chrome OS image. Docker is managed by kubernetes, which also runs on AWS.

I'm using your provided Docker Ubuntu image.

The path is mounted from a Google storage drive (SSD persistent disk). I believe the path itself was your default; if not, then someone thought this would be a nice place to mount the disk.

P.S. You can try Google Cloud for free if you want to give it a spin.

Derek den Haas

Aug 27, 2017, 2:00:53 PM
to RavenDB - 2nd generation document database, a...@hibernatingrhinos.com
Correcting some typos caused by autocorrect:

But let me redo this, Docker is running on Container-Optimized OS from Google: https://cloud.google.com/container-optimized-os/docs/

The path is mounted on a Google Cloud volume (disk), SSD, 50 GB.

The Docker image we are using is the build from your Docker Hub, which runs Ubuntu.

The data disk is formatted as ext4.

The disk is mounted as:
/dev/sdb on /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/ravendb-home type ext4 (rw,relatime,data=ordered)
which is then mounted into the Docker container as /var/lib/ravendb/

RavenDB inside is running as:
root     11315  0.0  0.0  18052  2652 ?        S    17:46   0:00 /bin/bash /opt/run-raven.sh
root     11316  8.3 40.5 11146076 1538096 ?    SLl  17:46   1:06 ./Raven.Server --ServerUrl=http://0.0.0.0:8080 --ServerUrl.Tcp=tcp://0.0.0.0:38888 --PublicServerUrl=http://ravendb-service.development.svc.cluster.local:8080 --Security.UnsecuredAccessAllowed=PublicNetwork --DataDir=/var/lib/ravendb/ --print-id --daemon

On Sunday, 27 August 2017 at 12:30:38 UTC+2, Derek den Haas wrote:

Adi Avivi

Aug 28, 2017, 7:58:15 AM
to Derek den Haas, RavenDB - 2nd generation document database
Regarding the recovery issue: a fix was applied and will be available sometime near the end of next week.
When you upgrade to that release, please make sure to delete _everything_.
The /var/lib path (in the Docker image) belongs to releases prior to 40018. By default data should go to /opt/RavenDB/Server, with databases under /databases.
RavenDB should run as follows (by default):

root     11315  0.0  0.0  18052  2652 ?        S    17:46   0:00 /bin/bash /opt/run-raven.sh
root     11316  8.3 40.5 11146076 1538096 ?    SLl  17:46   1:06 ./Raven.Server --ServerUrl=http://0.0.0.0:8080 --ServerUrl.Tcp=tcp://0.0.0.0:38888 --PublicServerUrl=http://ravendb-service.development.svc.cluster.local:8080 --Security.UnsecuredAccessAllowed=PublicNetwork --DataDir=/var/lib/ravendb/ --print-id --daemon
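If you want the data to survive container restarts with that default layout, a plain docker run would look roughly like this (only a sketch: the host path is a placeholder, the image tag is whatever beta you are running, and under kubernetes you would express the same thing as a volume + volumeMount):

docker run -d -p 8080:8080 -p 38888:38888 \
    -v /mnt/disks/ravendb-home:/databases \
    ravendb/ravendb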



Derek den Haas

Aug 28, 2017, 8:03:43 AM
to Adi Avivi, RavenDB - 2nd generation document database

I'll change the directory and test the new version when it's available. Any news on the memory issue (the whole reason this is happening, since RavenDB is getting killed while using only 2 GB of RAM)?

Oren Eini (Ayende Rahien)

Aug 28, 2017, 8:26:41 AM
to ravendb, Adi Avivi
It looks like we are properly recognizing the actual limits on the system, and we are also behaving properly.
We have a few endpoints that should shed some light on what is going on, "/admin/debug/memory/stats" in particular.

