Re: Sample usage of putExistingRevisionWithProperties:revisionHistory:fromURL:error: ?


Jens Alfke

Mar 31, 2016, 10:29:45 AM
to mobile-c...@googlegroups.com

On Mar 30, 2016, at 8:50 PM, Brendan Duddridge <bren...@gmail.com> wrote:

This could be that new method you were talking about which would facilitate importing of data. The docs imply this is a good method to use if you want to ship your app with some pre-canned data (which I do). Do you have a sample anywhere that shows that sort of thing?

No; it’s a very new method so there’s no sample code yet. The doc-comment should be self-explanatory, but ask if there’s anything you don’t understand.

The usual way to include canned data is to package a database with your app and install it on first launch, but if that doesn’t work for you, you can use this method instead.
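For the canned-database approach, the first-launch install might be sketched like this; the database name, the bundled resource, and the CBLManager replace call are assumptions to check against your CBL 1.x version's headers:

```objc
// On first launch, install the canned database bundled with the app.
// "venues" and the .cblite2 resource name are hypothetical.
CBLManager *manager = [CBLManager sharedInstance];
NSError *error;
if (![manager databaseExistsNamed: @"venues"]) {
    NSString *cannedPath = [[NSBundle mainBundle] pathForResource: @"venues"
                                                           ofType: @"cblite2"];
    if (![manager replaceDatabaseNamed: @"venues"
                       withDatabaseDir: cannedPath
                                 error: &error])
        NSLog(@"Failed to install canned database: %@", error);
}
CBLDatabase *db = [manager databaseNamed: @"venues" error: &error];
```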

—Jens

Brendan Duddridge

Mar 31, 2016, 1:37:07 PM
to Couchbase Mobile
Actually I'm going to do both. I have multiple canned databases that I'll ship with my app. Each database has the same data, just localized to different languages. I need the export/import ability to transfer data from one database to another. 

I'm working on the export function now, then I'll work on the import function using this method.

One question, though: when I generate the export, I'm adding a "_revisions" string containing a comma-separated list of revision history IDs. Do I still need to provide those revision history IDs in the method's separate parameter?

Effectively, this is what one of my JSON objects looks like:

{
  "sortField2Direction" : "asc",
  "sortOrder" : 0,
  "_revisions" : "2-bcac0f16be912a85381a946dc5be1d82,1-6059de57a6c9b9a914f74c150b6e1442",
  "groupFieldDirection" : "asc",
  "oldPK" : "715BAC6E-6D39-4B65-9E4E-ECDCD45F591D",
  "sortField1" : "fld-fad36b18f7c449ecb77b04d42552bd11",
  "hideWhenLinked" : false,
  "sortField1Direction" : "asc",
  "_attachments" : {
    "icon" : {
      "stub" : true,
      "length" : 2619,
      "digest" : "sha1-Y4HUFaM0aX0048NLPNxplu5u7nQ=",
      "revpos" : 2,
      "content_type" : "image\/png"
    }
  },
  "_id" : "frm-f4d442510ddb40a0b9078d1080a4e1e5",
  "type" : "TFForm",
  "formCategory" : "cat-bc417f04af264db88898d3a643333ade",
  "_rev" : "2-bcac0f16be912a85381a946dc5be1d82",
  "listViewFieldsCount" : 3,
  "sortField3Direction" : "asc",
  "showOnWatch" : true,
  "dbID" : "db-1deccd0e15dc4047ab96eba344144f3b",
  "name" : "Venues",
  "shouldSync" : true
}

You can see there's a "_revisions" property in there.

I'm guessing what I'll do is just extract that string out of the JSON and then pass it into the separate parameter in that new put method.

Thanks,

Brendan

Jens Alfke

Mar 31, 2016, 2:00:18 PM
to mobile-c...@googlegroups.com

On Mar 31, 2016, at 1:37 PM, Brendan Duddridge <bren...@gmail.com> wrote:

One question, though: when I generate the export, I'm adding a "_revisions" string containing a comma-separated list of revision history IDs. Do I still need to provide those revision history IDs in the method's separate parameter?

Yes. The _revisions property is used in the REST API and replication protocol, but that native method doesn’t recognize it. (In fact I think you’ll need to remove that property before calling it, or it’ll complain about the leading underscore.)
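Concretely, that preprocessing might look like the sketch below; props is assumed to be a mutable copy of one parsed JSON object, and the exported "_revisions" string is assumed to list the newest revision ID first (check the doc-comment for the expected order):

```objc
// Pull the comma-separated rev IDs out and drop the underscore key,
// which the native method won't accept.
NSString *revisions = props[@"_revisions"];
[props removeObjectForKey: @"_revisions"];
NSArray *revHistory = [revisions componentsSeparatedByString: @","];

CBLDocument *doc = [db documentWithID: props[@"_id"]];
NSError *error;
if (![doc putExistingRevisionWithProperties: props
                            revisionHistory: revHistory
                                    fromURL: nil   // nil for a local import
                                      error: &error])
    NSLog(@"Import failed: %@", error);
```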

—Jens

Brendan Duddridge

Apr 1, 2016, 2:07:32 PM
to Couchbase Mobile
So, to read in the .json file I've generated: since I have to create a new CBLDocument in order to call the new put method, is it correct that I'll also have to create a new revision in order to add an attachment to it? So basically, the sequence will be:

1. Create new CBLDocument using the _id value from the JSON data.
2. Strip out and store the list of _attachments from the JSON data.
3. Call putExistingRevisionWithProperties:revisionHistory:fromURL:error:
4. Create a newRev CBLUnsavedRevision by calling [newDoc newRevision].
5. Add the attachments by calling [newRev setAttachmentNamed:withContentType:contentURL:]
6. Save the newRev with [newRev save:].
7. Lather, rinse, repeat :-)

Does that make sense? It seems to me that maybe there would be a way to manufacture a revision based on the most recent revision history ID instead of creating a brand new revision. Especially since the source database of these documents would have already had a revision associated with these attachments and CBLDocument.
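In code, the sequence above might be sketched as follows; props, revHistory, and iconFileURL are placeholders for values recovered from the export, and error handling is elided:

```objc
// Steps 1-3: insert the revision with its original history
// (underscore properties already stripped out of props).
CBLDocument *newDoc = [db documentWithID: props[@"_id"]];
NSError *error;
[newDoc putExistingRevisionWithProperties: props
                          revisionHistory: revHistory
                                  fromURL: nil
                                    error: &error];

// Steps 4-6: add the attachment in a follow-up revision.
CBLUnsavedRevision *newRev = [newDoc newRevision];
[newRev setAttachmentNamed: @"icon"
           withContentType: @"image/png"
                contentURL: iconFileURL];
[newRev save: &error];
```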

Thanks!

Brendan

Jens Alfke

Apr 1, 2016, 3:45:23 PM
to mobile-c...@googlegroups.com
Hm, attachments. Good point. Internally, the way that method gets used is that the attachment bodies are downloaded into the attachment store first, and then when the revision is added (with the _attachments property intact) they’re already there so the insert succeeds. But there’s no public API for shoving attachments directly into the store (nor should there be.)

As a workaround for right now, you can preprocess the attachment metadata to put the data inline:
- remove the “follows” or “stub” properties
- base64-encode the attachment body
- add a “data” property whose value is the base64 string
Then call putExistingRevision…
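A sketch of that preprocessing for one attachment entry; attachmentMeta and attachmentFileURL are placeholders for the stub dictionary from the export and the location of the exported body:

```objc
// Convert a stub attachment entry into an inline one so the
// revision can be inserted with its body included.
NSMutableDictionary *att = [attachmentMeta mutableCopy];
[att removeObjectForKey: @"stub"];
[att removeObjectForKey: @"follows"];

// Inline the body as base64 under "data".
NSData *body = [NSData dataWithContentsOfURL: attachmentFileURL];
att[@"data"] = [body base64EncodedStringWithOptions: 0];
```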

I should come up with a better way to do this, though, before we release this API. I filed #1195.

—Jens

Brendan Duddridge

Apr 1, 2016, 6:45:10 PM
to Couchbase Mobile
Hi Jens,

If I embed the attachments into the metadata as a data property, the JSON file could potentially get really huge. For example, I'm working with one import that is about 450 MB. Mostly that's because of the attachments, which I store in a folder alongside a data.json file. I've got them all bundled together into a package directory.

The data.json file itself is actually 96 MB. So if I stick the attachments in there too, it's going to get really huge, especially base64 encoded.

Is there any downside to the way I've done it above? I actually have that technique working, but I think it does mean I get an additional revision for each document.

Thanks,

Brendan

Jens Alfke

Apr 4, 2016, 1:18:28 PM
to mobile-c...@googlegroups.com

> On Apr 1, 2016, at 3:45 PM, Brendan Duddridge <bren...@gmail.com> wrote:
>
> Is there any downside to the way I've done it above? I actually have that technique working, but I think it does mean I get an additional revision for each document.

The downside is just that you end up with a different revision history than in the original database you created the JSON file from. This won’t be a problem unless you later try to replicate between the new and original database, in which case there will suddenly be conflicts because the docs have two different rev histories.

(On the other hand, if you use this technique to copy docs into two destination databases, those should be consistent with each other and replication will work.)

So if you know you won’t be replicating between the source and destination database, this approach is fine.

—Jens

Brendan Duddridge

Apr 4, 2016, 2:13:48 PM
to Couchbase Mobile
Well, that could possibly happen if someone exported from Database A, then imported into a copy of Database A (say on another device), then tried to sync them together after. I know they shouldn't really do this and just setting up sync between the two databases in the first place is the right thing to do, but if there's a path to an issue like this, someone will find it :-)

The fewer support requests I get from people trying to do this the better.

Jens Alfke

Apr 4, 2016, 3:02:43 PM
to mobile-c...@googlegroups.com

On Apr 4, 2016, at 11:13 AM, Brendan Duddridge <bren...@gmail.com> wrote:

Well, that could possibly happen if someone exported from Database A, then imported into a copy of Database A (say on another device), then tried to sync them together after. I know they shouldn't really do this and just setting up sync between the two databases in the first place is the right thing to do, but if there's a path to an issue like this, someone will find it :-)

I can imagine situations where database A contains terabytes* of data, and you want to copy its data into an existing database B at some other location. In this case the fastest way to replicate [depending on distance] is probably to export A to a hard drive and carry the hard drive over to where database B is, then import it from the saved file. ("Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.” –Andrew Tanenbaum, 1981. See also What If?)

—Jens

* don’t laugh, this isn’t unusual for CouchDB or Couchbase, and CBL’s SQLite and ForestDB storage can scale to this size.