How to compute secondary indexes faster


Katie Egervari

Jul 12, 2015, 10:16:17 AM7/12/15
to pou...@googlegroups.com
Hi Nolan & Gang,

I am working with fairly large databases, and I was wondering if I could please get some tips on how to improve the performance of secondary index creation.

The databases I am using consist of thousands of documents, and it's quite possible that at some point there might be 20,000 to 30,000 documents in the database. I need to create around 7 indexes on this database - beyond abusing the _id property, which I simply can't do for all of them.

Most of the performance cost seems to come from iterating over all of the documents - not just the emit() part or the writing of the index to the disk. The base amount of time seems to be around 26 seconds right now - and I can see this going to several minutes per index at some point when I start working with those 30k document databases.

Is there anything I can do to speed this up? I don't think my users will want to wait 10 minutes for their database to load before they can start using the application.

Could I instead pre-compute the index on a Java server and just insert the final result into pouchdb? Is that possible, and would it be faster? I think the time it takes to generate the index more intelligently as well as the time to download it would be far less than several minutes per index.

Or do I need to put different document types into different databases altogether, just to cut down on the size? I can see how most of the indexes would be generated quickly if I did that.

Anything else that makes sense? The app needs to go in production in about a month, so this is a very important problem for me to solve. I would appreciate any help you can offer.

Thank you!

Katie :)

Dale Harvey

Jul 12, 2015, 10:25:21 AM7/12/15
to pou...@googlegroups.com
So in the long term I am fairly confident we can get this down to a negligible delay: by changing the disk format we can (1) cut read and write time in half, and (2) use native indexes.

In the short term that doesn't help you a lot, though. Precomputed indexes seem like one possible option, but may be quite an involved setup. One super simple solution: you are replicating this user data from an online CouchDB instance, right? How about using the (precomputed) online version of the views while your local ones are building, then switching to local once everything has caught up?
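Dale's switch-over idea could be sketched roughly like this. The helper name is made up, and both arguments are assumed to be PouchDB instances exposing the standard query() method; real code would also need error handling for when the device goes offline before the local view has caught up.

```javascript
// Rough sketch of the switch-over idea (makeViewQuerier is a made-up
// name; localDb/remoteDb are assumed PouchDB instances).
function makeViewQuerier(localDb, remoteDb, viewName) {
  var localReady = false;

  // Kick off the local index build; query() with no options resolves
  // only once the view has caught up.
  localDb.query(viewName).then(function () { localReady = true; });

  // Until then, answer queries from the remote, precomputed view.
  return function (opts) {
    return localReady
      ? localDb.query(viewName, opts)
      : remoteDb.query(viewName, opts);
  };
}
```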

Cheers
Dale

--
You received this message because you are subscribed to the Google Groups "PouchDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pouchdb+u...@googlegroups.com.
To post to this group, send email to pou...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/pouchdb/f32f0809-9f22-49a8-92e1-d582562dce71%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Katie Egervari

Jul 12, 2015, 10:42:55 AM7/12/15
to pou...@googlegroups.com
Thanks so much for the awesomely quick response =)

Our setup is a bit unusual. We have a MS SQL database, and I wrote an exporter that converts it to exactly how I want to store it in pouch, so basically I just download this enormous .json file and put all of the documents into the database at the same time. Unfortunately, we are not using couchdb on the server. But maybe we should be?

Also, our application has to run standalone - meaning it won't have access to the server at all times. So, the plan is for them to download everything while they have access to wifi, but then they can use the application offline after that.

So yeah - very weird setup compared to your average cases. The reason is that the software is a mobile/tablet version of an existing 10-year old software application.

Katie

Nolan Lawson

Jul 14, 2015, 9:42:38 AM7/14/15
to pou...@googlegroups.com, katie.e...@gmail.com
Hi Katie,

If you are not using PouchDB's sync capabilities, then it is probably giving you overhead that you don't need. PouchDB is optimized for sync, which means that, under the hood, it uses a lot of data structures that are ideal for syncing (e.g. revision trees), but not ideal for querying and indexing.

There are other databases that are more optimized for secondary indexes and querying, e.g. YDN-DB, Lovefield, or Dexie.js. I would recommend switching to one of those, if you're not using sync and you need rich querying.

Cheers,
Nolan

Alexander Gabriel

Jul 14, 2015, 11:16:06 AM7/14/15
to pou...@googlegroups.com

What a great answer.
Have you ever gotten an answer like that from Microsoft?
Not that we could even ask there...

Dale Harvey

Jul 14, 2015, 11:56:48 AM7/14/15
to pou...@googlegroups.com
Hah, yup. Apologies, I had the same email written in drafts.

I am fairly confident we can get to comparable performance in the future, but in the short term it's likely easier to implement with another database. There are some optimisations we can make, but Nolan has done a lot to ensure things are about as fast as they can be given the current architecture.

Katie Egervari

Jul 14, 2015, 12:00:03 PM7/14/15
to pou...@googlegroups.com

That is somewhat disappointing news... most of the app is built. I did design it very well; replacing pouch would cause the lowest amount of disruption possible, but I was hoping it would not come to that. And to be honest, the app has to get into production soon 😦

Thanks for the response though

Katie


Nate Dudenhoeffer

Jan 14, 2016, 9:33:26 AM1/14/16
to PouchDB
Katie,
I was wondering what you decided in this case. Were you able to make some improvements in the speed? Did you decide to use something other than pouch? I have a very similar situation. We decided couch wasn't practical server-side for our application.

Thanks,
Nate

Katie Egervari

Jan 14, 2016, 9:42:17 AM1/14/16
to pou...@googlegroups.com

I have not managed to solve this issue yet. We are just living with it for now. I sure wish it was faster 😕

Nate Dudenhoeffer

Jan 14, 2016, 9:52:24 AM1/14/16
to PouchDB
Thanks for the response. I think we will probably do the same for now. Once you get the hang of it, PouchDB has a great API.

We are sharing some code between cordova and a web application, which pouch makes easy. Long term, we will probably take the couple of big tables out of pouch in the cordova app (they don't exist in the web app), and just use sqlite.

Nate 

Nolan Lawson

Jan 18, 2016, 11:45:10 AM1/18/16
to PouchDB
Yep, I agree with what everyone in this thread is saying. PouchDB is optimized for certain use cases (e.g. sync, cross-browser support) at the expense of other use cases (e.g. fast secondary indexes).

Every database comes with tradeoffs, and PouchDB's tradeoffs come with a big penalty for secondary indexes. However, that doesn't mean you need to avoid PouchDB entirely; you just need to avoid secondary indexes.

For an example of a fast usage of PouchDB, check out my Pokedex.org app (http://www.pocketjavascript.com/blog/2015/11/23/introducing-pokedex-org), which uses PouchDB but divides all the data into many small databases and then only makes use of get() and allDocs() on the primary index, never query() on a secondary index. Another example of "fast" indexes is relational-pouch (https://github.com/nolanlawson/relational-pouch), which again is built on only using allDocs().
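The "many small databases" pattern from Pokedex.org might look roughly like this sketch; the database names and the openAll helper are illustrative, not part of any PouchDB API.

```javascript
// One small database per document type; reads then go through
// get()/allDocs() on the primary index instead of query().
function openAll(PouchDB, names) {
  var dbs = {};
  names.forEach(function (name) {
    dbs[name] = new PouchDB(name);
  });
  return dbs;
}

// e.g. var dbs = openAll(PouchDB, ['attributes', 'solutions']);
// dbs.attributes.allDocs({ include_docs: true }).then(...);
```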

N.B.: I am the primary author of PouchDB secondary indexes, and I am telling you that I would not use them in my own app because they are too slow. :)

Cheers,
Nolan


Katie Egervari

Jan 18, 2016, 12:03:13 PM1/18/16
to pou...@googlegroups.com
Hi Nolan,

I am not sure I could use multiple databases to split up our data. Well, I could... but it would actually be quite unwieldy in my case. I already have to manage multiple databases as it is; each one is quite massive, ranging from 50MB to hundreds of MBs. All of the 'schemas' in each database are the same; the app only uses one at a time, but the user can switch between them. If I were to use multiple databases per document type and rely on allDocs(), then I'd probably have to manage 150 databases (10 databases * 15 document types, more or less). Would that actually pose a problem?

Katie



Nolan Lawson

Jan 20, 2016, 7:23:32 PM1/20/16
to PouchDB
Hi Katie,

If each of those 150 databases were simultaneously live-syncing to a remote CouchDB, then yes, that would be a problem. Sounds like the answer is no, though, so you should be fine (the app only uses ~15 at a time?).

As for local operations: you may be able to get away with it (would depend on the browser/adapter), but TBH I've never heard of someone having so many databases (>100), so I can't say for sure how it would perform in practice. In my Pokedex app I've got ~10 databases and it runs like a champ on all browsers. You'd have to test it out to see.

BTW instead of 150 databases, another option is the relational-pouch style of overloading IDs, which would allow you to use one single database while still using only allDocs() instead of query(). There is also some advice about that here (http://pouchdb.com/2014/05/01/secondary-indexes-have-landed-in-pouchdb.html - scroll down to "When not to use map/reduce").
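A minimal sketch of the overloaded-_id idea, assuming _ids of the form 'type::number' (the separator and names here are illustrative): an allDocs() range over an _id prefix then does the job of a secondary index on documentType.

```javascript
// Build an allDocs() range that matches every _id starting with
// `prefix`. '\ufff0' sorts after all ordinary characters.
function prefixRange(prefix) {
  return { startkey: prefix, endkey: prefix + '\ufff0' };
}

// With _ids like 'solution::0042', this replaces a secondary index
// on documentType:
function findByType(db, type) {
  return db.allDocs(Object.assign(
    { include_docs: true },
    prefixRange(type + '::')
  ));
}
```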

Another option entirely is to use another database. For highly relational data, constantly changing data, or data that needs lots of secondary indexes, a database closer to the metal like Dexie (with IndexedDB shim for Safari) would probably be best. However, if you can overload IDs or use many separate databases, then you can stick with PouchDB and have much better performance.

Hope that helps!

Cheers,
Nolan

Katie Egervari

Feb 7, 2016, 3:16:17 AM2/7/16
to pou...@googlegroups.com
Hi Nolan,

I finally managed to find time to get around to trying this out. I split up my database into 7 new databases. I have noticed a 257% increase in index creation performance when doing it this way. It took quite a bit of work to restructure my nearly production-ready app to make this possible, but I am super pleased with it.

I have one problem, and I hope you can help me. One of these databases contains around 750MB of data all on its own. There are around 30,000 documents in there, and each document is rather non-trivial, containing lots of collections. So... I need to index it :) Unfortunately, asking PouchDB to create the index eventually causes Chrome to give the "Aw, Snap!" message :(

I think part of the reason this is occurring is that the call to emit() uses the second parameter. The value I am setting contains a lot of fields, but I am not providing any document property that would contain html content and otherwise large data. I just think the volume of documents is the problem. Is it trying to do it all in memory without some sort of flushing? 

The reason I need to do this is that I want the queries using this index to not return all of the data - just some of it. With smaller databases, this makes things a lot faster and it's pretty important that this query performs well.

Any suggestions? Otherwise, everything is working great with the rest of my data. Thank you!

Katie


Alexander Gabriel

Feb 7, 2016, 7:49:24 AM2/7/16
to pou...@googlegroups.com
sorry if this may not help, but:

I also have a database with about 500MB. And documents containing lots of collections that need to be queried.
I tried building indexes at first. That took WAY too long (dozens of minutes, sometimes crashing the browser).

I then settled with this:
  1. running some of the queries on CouchDB on the server instead of locally
  2. building two hashtable-like objects and saving them in _local documents. These are used for search and for building a tree view that enables navigation in the documents.
The hashtables are a pain, though, when documents are changed, as the hashtables need to be updated too...
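A rough sketch of the hashtable approach Alexander describes; the field names are illustrative, and the table itself can then be stored in a _local doc so it never replicates.

```javascript
// Build a word -> [doc ids] lookup from a list of docs. Which fields
// get indexed (here just `name`) is up to the app.
function buildLookup(docs) {
  var table = {};
  docs.forEach(function (doc) {
    (doc.name || '').toLowerCase().split(/\s+/).forEach(function (word) {
      if (!word) { return; }
      (table[word] = table[word] || []).push(doc._id);
    });
  });
  return table;
}

// Persisting it (illustrative):
// db.get('_local/lookup')
//   .catch(function () { return { _id: '_local/lookup' }; })
//   .then(function (d) { d.table = buildLookup(docs); return db.put(d); });
```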

So I am not happy with my solution and hope to find a better way.

Alex



Katie Egervari

Feb 7, 2016, 12:44:38 PM2/7/16
to pou...@googlegroups.com
In my case, I don't even use CouchDB, and in any case the app has to run without wifi.

Katie

Nolan Lawson

Feb 8, 2016, 3:27:01 PM2/8/16
to PouchDB
Hi Katie,

Hmm, I've never even seen Chrome crash due to excessive map/reduce indexing... frankly I've never used map/reduce on such a large database. Could be a bug in Chrome, but more likely a bug on our end. :/

In principle, the indexing is supposed to occur in batches - this is currently hard-coded in the map/reduce codebase to 50. I'm kind of surprised that it totally kills Chrome. Is it running out of memory? Can you profile to see what's going on?

I'm glad to know that the "separate databases" trick solved some of your problems, but it sounds like secondary indexes are still killing your app's performance. Unfortunately I can't advise much except to either avoid secondary indexes or wait until we have a faster implementation (native secondary indexes). Another option is something like what Alexander describes, i.e. precompute an index on the server.

Another trick you might try would be to run PouchDB inside of a web worker. It might be less likely to cause the browser tab to crash, since it's running in a separate process. And plus, your framerate should improve, because IndexedDB blocks the DOM (http://nolanlawson.com/2015/09/29/indexeddb-websql-localstorage-what-blocks-the-dom/).
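One hedged way to structure the worker Nolan suggests: keep the database inside the worker and send it small command messages. The protocol below is made up for illustration; the dispatch function is pure, so it can live in shared code.

```javascript
// Dispatch one command message against a db instance. Pure, so it can
// be unit-tested without a browser.
function handleCommand(db, msg) {
  switch (msg.cmd) {
    case 'query':
      return db.query(msg.view, msg.opts);
    case 'allDocs':
      return db.allDocs(msg.opts);
    default:
      return Promise.reject(new Error('unknown cmd: ' + msg.cmd));
  }
}

// In the worker script (browser only, illustrative):
//   var db = new PouchDB('mydb');
//   self.onmessage = function (e) {
//     handleCommand(db, e.data).then(
//       function (result) { self.postMessage({ id: e.data.id, result: result }); },
//       function (err) { self.postMessage({ id: e.data.id, error: err.message }); });
//   };
```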

Hope that helps, and sorry I can't provide much more advice for now. :/

Cheers,
Nolan

Jan Lehnardt

Feb 10, 2016, 4:28:36 AM2/10/16
to pou...@googlegroups.com

Hi Katie,

Have you measured the difference between using a large emit() value and using include_docs? (It should make queries slower, which you say is what you don't want, but it might still work for your case.)

The only other option I see here is splitting up your large documents further, so the emit() value isn't as big.
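Jan's include_docs suggestion could look like this sketch (the design doc and field names are illustrative): emit only the key with a null value, then let include_docs fetch the document from the primary index at query time.

```javascript
// Index stays tiny: only keys, no copied values.
var ddoc = {
  _id: '_design/by_attribute',
  views: {
    by_attribute: {
      map: function (doc) {
        if (doc.documentType === 'solution') {
          emit(doc.attributeId, null); // no large second argument
        }
      }.toString()
    }
  }
};

// Query time: the doc comes from the primary index instead of the view.
// db.query('by_attribute', { key: someId, include_docs: true })
```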

Best
Jan

Katie Egervari

Feb 12, 2016, 12:39:48 PM2/12/16
to pou...@googlegroups.com
Hi Nolan,

I finally have some time to respond back. I wanted to sooner.

Depending on the database I am using, I did actually get up to a 488% speed improvement when creating indexes after I separated them. This meant that instead of taking 715 seconds to create them all, it only took around 140 seconds. That was pretty sweet.

I am actually having even more index-creation problems on various Android devices. I do not get the issue in Chrome on a desktop, although the type of error is the same as I reported earlier.

I have tried using IndexedDB as an adapter, as well as WebSQL. I have tried both with the cordova-sqlite-storage plugin and without it. I have also tried the androidDatabaseImplementation: 2 option, which is the most stable configuration I can get, but it only works for small-to-medium sized databases.

I'm going to paste the two indexes it crashes on. When it crashes, I can inspect the console in Chrome with the device attached to the desktop over USB. The crash happens in a random spot each time, so I don't think there's a logic problem. Also, the indexes work great in a browser, which I guess rules out logical errors.

Here are the indexes:

mapFunction: function(document) {
    function emitLowerCaseSubStrings(text, observation) {
        if (text) {
            var words = text.toLowerCase()
                .replace(/[/;:\-]/g, ' ')
                .replace(/[.,#!$%\^&\*\[\]{}=_`~()]/g, '')
                .replace(/\s{2,}/g, ' ')
                .trim()
                .match(/(\S+)*/g);

            for (var i = 0; i < words.length; i++) {
                if (words[i].length > 0) {
                    emit(words[i], observation);
                }
            }
        }
    }

    if (document.documentType === 'attribute' &&
        document.options &&
        (document.category.keyCode === 'SYMPTOM' ||
         document.category.keyCode === 'UNSPECIFIED')) {
        for (var i = 0; i < document.options.length; i++) {
            if (document.options[i].name && !document.options[i].normal) {
                var observation = {
                    option: document.options[i]
                };

                emitLowerCaseSubStrings(document.name, observation);
                emitLowerCaseSubStrings(document.searchFriendlyName, observation);
                emitLowerCaseSubStrings(document.options[i].name, observation);
                emitLowerCaseSubStrings(document.options[i].searchFriendlyName, observation);
            }
        }
    }
}

And:

mapFunction: function(document) {
    if (document.documentType === 'solution') {
        for (var i = 0; i < document.orGroups.length; i++) {
            for (var j = 0; j < document.orGroups[i].observations.length; j++) {
                emit(document.orGroups[i].observations[j].attribute.id);
            }
        }
    }
}

Both indexes operate on databases that are several hundred megabytes (up to 750MB or so).

I would appreciate any advice you can give on what I might be doing wrong with my map functions. In the meantime, I will try the web worker approach and see if that solves anything.

Katie



Katie Egervari

Feb 15, 2016, 8:00:14 AM2/15/16
to pou...@googlegroups.com
I actually spent some time optimizing the first index, rewriting portions of it and cleaning it up - but it made no difference.

I then profiled my application, looking for memory leaks. I get the sawtooth pattern, which is supposed to be 'normal'. I also investigated all of the objects in memory during and after the indexes were done, and the data is not sitting there in memory, and the sawtooth goes down after the indexes are created.

So... I expect the algorithm that PouchDB uses to create the index will not work for large databases; memory simply grows too large too quickly, and that causes crashes. Would there be a way to throttle the memory used, to prevent crashes when the database is large?

Katie

Nolan Lawson

Feb 15, 2016, 11:57:44 PM2/15/16
to PouchDB
Hi Katie,

Unfortunately the batch size is not configurable for the mapreduce plugin. However, you could try forking PouchDB and changing the batch size to make it smaller than 50: https://github.com/pouchdb/pouchdb/blob/3acb34dad1fd24206a6e3dbe318ff355f91e6e8f/src/mapreduce/index.js#L28

To be honest, it looks like your usage of map/reduce is intense enough (a for loop within a for loop, with lots of emitted keys... youch) that you may even want to look into directly using pouchdb-collate and creating the indexes yourself: https://github.com/pouchdb/collate

Again, I'm really sorry that PouchDB is not a good solution for this kind of problem. Secondary indexes are definitely a weak point in the current implementation.

Good luck,
Nolan

Nolan Lawson

Feb 16, 2016, 12:01:18 AM2/16/16
to PouchDB
Actually, forget what I said about pouchdb-collate. I just realized you are emitting a string as the key and not an array of strings, so pouchdb-collate would not help you much.

If I were you, I would probably implement this as an entirely separate index using something like Dexie or LocalForage. Or I would precompute it and keep the mapping of words -> docs entirely in memory. I see you are doing full-text search, which honestly IndexedDB is kinda bad at, although WebSQL has native support for it, in case you are able to use that.

I had a similar discussion with someone about pouchdb-quick-search (which also uses mapreduce, so it's also not terribly fast, despite the name). You might be able to get some insights from some experiments I ran using raw IndexedDB and WebSQL: https://github.com/nolanlawson/pouchdb-quick-search/issues/22#issuecomment-72152900

- Nolan

Katie Egervari

Feb 16, 2016, 4:27:17 AM2/16/16
to pou...@googlegroups.com
Thanks Nolan,

I will try adjusting the batch size first - it seems simple enough, and I hope the crashes are that simple to solve.

If that doesn't work, I could look at computing the index using another database... OR... maybe I should compute it on the server and ship it to the client, using your combined _id approach to map words to options for my full-text search. The same would go for the solution-observations index.

I don't want to hold any of this stuff in memory though. The app already uses around 180MB, and I am scared to fill it even more.

I was also thinking maybe the problem is with the cordova-sqlite-storage plugin. There were some GitHub issues related to that - crashes when the OS detects huge memory spikes. The author made an enterprise version that was supposed to fix the problem, but no such luck for me - I get the same result.

I'm sure I'll figure it out eventually. Once the index is there, pouchdb performs quite well with its queries. I wish my databases weren't so large, but there's not much I can do about that :(

I will report back soon enough, hehe :)

Katie


Alexander Gabriel

Feb 16, 2016, 8:17:43 AM2/16/16
to pou...@googlegroups.com
I will keep lurking in the background hoping to scavenge a solution :-)



Nolan Lawson

Feb 16, 2016, 10:11:54 AM2/16/16
to PouchDB
Katie - precomputing on the server sounds like a great idea, especially if the data is mostly static. Also I'll reiterate that, for such a large secondary index, you'll almost certainly get better perf from LocalForage, Dexie, or YDN-DB. Agreed that in-memory doesn't seem like an option here.

Alexander: ultimately the solution will be native secondary indexes, which I've already started poking around on, although I can't promise anything anytime soon. Basically it will create secondary indexes in the host PouchDB database rather than constructing a second PouchDB database, which should give performance closer to what LocalForage, Dexie, or YDN offers.

Katie Egervari

Feb 25, 2016, 8:56:00 PM2/25/16
to pou...@googlegroups.com
Hi Nolan,

I have a question. I have all of my data in CouchDB. Earlier today, I had synced it to PouchDB, and my app was working rather nicely - it seemed to copy the indexes/views along with the data. But now I'm having trouble - it suddenly stopped syncing the indexes/views, and they are nowhere to be found on the client. Now I am baffled as to why it was working originally. Unfortunately, I didn't commit anything - my app was in a state of breakage and I was just playing around with things. But now I am lost as to how to get my indexes/views copied by PouchDB's sync command.

Can you give me guidance please? Thanks Nolan!

Katie



Katie Egervari

Feb 26, 2016, 12:58:26 AM2/26/16
to pou...@googlegroups.com
Okay, I found the reason the indexes appeared to work: a refactoring I made in the last 3 hours of the day at the office changed the design document name, and that broke it.

I investigated more tonight, though, and now I realize that PouchDB was just building the index on the fly because it had access to the design document for that database.

I am not too much of a stickler over the security of the design document - especially not now. All I really want is to copy the generated index and sync on that index. Is there no way to do that?

Please, tell me there is. *crossing fingers*

Katie

Alexander Gabriel

Feb 26, 2016, 5:10:41 PM2/26/16
to pou...@googlegroups.com
Crossing my fingers too - but I believe this is not possible.



Nolan Lawson

Mar 3, 2016, 10:23:32 AM3/3/16
to PouchDB
No, it's not possible. You currently can't sync views from CouchDB.

Alexander Gabriel

Mar 3, 2016, 11:42:12 AM3/3/16
to pou...@googlegroups.com
Thanks, Nolan. And thanks for helping to make this great tool possible!


