Performance question regarding replication


Alexander Harm

Mar 15, 2016, 5:53:19 AM
to PouchDB
In my setup I replicate several small databases to an in-memory cache database. What disturbs me is the really poor performance. Some numbers:

doc_count = 1063

Loading dump file into in-memory PouchDB 1: 2.276 sec
Replicating from in-memory PouchDB 1 to in-memory PouchDB 2: 13.16 sec

I expected in-memory to be quite performant, and the amount of time it takes for 1,000 docs kind of scares me, since I'm planning to use more like 100,000 docs in the future.

Is this normal behaviour?

And on a side note, are there any plans to use LokiJS as an in-memory adapter?

Regards, Alexander

Nolan Lawson

Mar 16, 2016, 1:41:09 PM
to PouchDB
What device and browser are you testing this on? Also the speed of PouchDB will depend a lot on the size of your documents.

In-memory PouchDB isn't as fast as it could be. We're aware of it, but no fix is planned for the near future. There are no plans for a LokiJS adapter either. Sorry about that!

- Nolan

Alexander Harm

Mar 16, 2016, 1:45:52 PM
to pou...@googlegroups.com
I use it on

device:
Apple MacBook Pro Core2Duo, 6GB RAM, SSD

browser:
latest Chrome

documents:
smallish, maybe 10 key-value pairs (just short strings like names, etc.)

Alexander
 
--
You received this message because you are subscribed to a topic in the Google Groups "PouchDB" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/pouchdb/BdqeJoTsauE/unsubscribe.
To unsubscribe from this group and all its topics, send an email to pouchdb+u...@googlegroups.com.
To post to this group, send email to pou...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/pouchdb/18a77bb3-1655-436f-9da9-85260634740f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Nolan Lawson

Mar 16, 2016, 1:46:38 PM
to PouchDB
I'll also point out that if you plan on having 100k separate documents, then PouchDB will probably not suit your needs. PouchDB is optimized for syncing and revision management, e.g. a user with ~20 documents that are shared among colleagues and may go through many revisions. Unless you expect your users to individually modify 100k documents and then track changes across all of those 100k documents, PouchDB's revision-management overhead will just cost you performance.

LokiJS may be a better solution in your case, if you're targeting desktop browsers. If you're targeting mobile devices, then you will probably run out of memory very quickly, so you may want to consider LocalForage, DexieJS, or YDN-DB instead, which do not have revision semantics.

- Nolan

Alexander Harm

Mar 16, 2016, 1:50:44 PM
to pou...@googlegroups.com
90% of the docs will probably be read-only with very few changes. Can PouchDB on Node handle such amounts better?

Alexander


Jan Lehnardt

Mar 16, 2016, 2:56:20 PM
to pou...@googlegroups.com
On 16 Mar 2016, at 18:50, Alexander Harm <harm.al...@gmail.com> wrote:

90% of the docs will probably be read-only with very few changes. Can Pouch on node handle such amounts better?

If you are on a server/desktop environment, I’d recommend good ol’ CouchDB.

Best
Jan

Nolan Lawson

Mar 20, 2016, 9:07:44 AM
to PouchDB
If 90% of the documents never change, then an even better solution in my opinion is pouchdb-dump and pouchdb-load: https://github.com/nolanlawson/pouchdb-load

Otherwise you can wait for CouchDB 2.0, which will have faster replication. Or you can start using it in alpha right now; it's as simple as:

docker run -d -p 3001:5984 klaemo/couchdb:2.0-dev --with-haproxy --with-admin-party-please -n 1

Alexander Harm

Mar 20, 2016, 9:35:03 AM
to pou...@googlegroups.com
Hello Nolan,

I currently use the following setup:

- a Node process that listens to changes and appends them to an NDJSON file
- clients use pouchdb-load to fetch this file from the server
- clients start replicating

The idea was to load several "project" DBs and then build one "cache" DB from these small DBs. My problem is not the server-client replication but the replication between the "project" DBs and the "cache" DB. I just expected replication between two in-memory pouches to be blazingly fast. The roughly 13 seconds it took for 1,000 docs left me startled.

Alexander


Nolan Lawson

Apr 20, 2016, 10:17:50 AM
to PouchDB
Hi Alexander,

That's a great test scenario. I agree that replication between two in-memory PouchDBs should be fast. If it's not, then it's a great test case for profiling, etc., and I encourage anybody with the free time to do so. :)

Cheers,
Nolan