Firestore batch writes are slow - am I doing it correctly?


opy...@gmail.com

Feb 19, 2019, 9:43:26 AM
to Firebase Google Group
Hello,

Our Firebase functions are in Belgium, and Firestore is in London.

The ping between Brussels and London seems to be around 10ms.

My function has this timing code:

async function handle (path, uid, companyName) {
  console.time('handle')
  console.time('search')
  const result = await searchByCompanyName(companyName, 5)
  console.timeEnd('search')
  console.time('write')
  return firestoreDb.writeBatch(path, result)
    .then(() => {
      console.timeEnd('write')
      return console.timeEnd('handle')
    })
}

The write timing seems to hover around 150-200ms.

function writeBatch (path, arr, idRef = 'id') {
  const batch = db().batch()
  const ref = db().collection(path)

  console.log(`batching ${arr.length} results`)
  // Note: batch.set() is synchronous, so the Promise.all here is not
  // actually needed. (The original version also wrapped the mapped array
  // in an extra array literal, so Promise.all resolved without waiting.)
  return Promise.all(
    arr.map(item => {
      const itemRef = ref.doc(item[idRef])
      return batch.set(itemRef, item)
    })
  ).then(() => {
    return batch.commit()
  })
}

I also tried this:

function writeBatch (path, arr, idRef = 'id') {
  const batch = db().batch()
  const ref = db().collection(path)

  console.log(`batching ${arr.length} results`)

  arr.forEach(item => {
    let itemRef = ref.doc(item[idRef])
    batch.set(itemRef, item)
  })

  return batch.commit()
}

Are there any other improvements I can make? 150-200ms seems like a lot considering the ping itself is only 10ms.
The array is also capped to 5 results at all times.
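(As an aside on measurement: console.time rounds to milliseconds, so for comparing runs this close together a small hrtime-based helper can be more precise. This is a sketch, not part of the original code; `timed` is a hypothetical helper name.)

```javascript
// Hypothetical helper: times any async operation with
// process.hrtime.bigint(), which has nanosecond resolution.
async function timed (label, fn) {
  const start = process.hrtime.bigint()
  const result = await fn()
  const ms = Number(process.hrtime.bigint() - start) / 1e6
  console.log(`${label}: ${ms.toFixed(2)}ms`)
  return { result, ms }
}

// Usage against the code above would look like:
//   const { ms } = await timed('write', () => firestoreDb.writeBatch(path, result))
```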

Thanks,
Juan

[Attachment: Screenshot 2019-02-19 at 10.30.40.png]



Hiranya Jayathilaka

Feb 19, 2019, 6:29:27 PM
to fireba...@googlegroups.com
What does db() do?

--
You received this message because you are subscribed to the Google Groups "Firebase Google Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to firebase-tal...@googlegroups.com.
To post to this group, send email to fireba...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/firebase-talk/20ab3b44-ab3f-4adb-8431-2cce4311a662%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


--

Hiranya Jayathilaka | Software Engineer | h...@google.com | 650-203-0128

opy...@gmail.com

Feb 20, 2019, 10:35:06 AM
to Firebase Google Group
Gets a db handle.

let _db = null

/**
 * Initialises the DB handle if it hasn't been initialised yet.
 *
 * PS remember this for date handling:
 *
 *   // Old:
 *   const date = snapshot.get('created_at');
 *   // New:
 *   const timestamp = snapshot.get('created_at');
 *   const date = timestamp.toDate();
 *
 * @returns {*}
 */
const db = () => {
  if (_db === null) {
    const settings = {timestampsInSnapshots: true}
    _db = admin.firestore()
    // gets rid of
    // "The behavior for Date objects stored in Firestore is going to change AND YOUR APP MAY BREAK."
    _db.settings(settings)
  }
  return _db
}

Elie Zedeck RANDRIAMIANDRIRAY

Feb 20, 2019, 10:35:07 AM
to fireba...@googlegroups.com
Hey Juan,

I run batch operations a lot as well, and indeed, they are slow, just like yours! Some of my ops (the maximum of 500 document updates per batch) go as high as 2500ms ... almost 3 seconds. Fortunately, it's not a use case that needs speed, so I'm OK with that.

@Firebase: But I'm also curious: is it really meant to be this slow? Thinking about scaled use cases: I don't remember the exact maximum write rate on Firestore, but I think during beta it was 10,000 and that limit was supposed to be removed at GA (I'm too lazy to check the numbers now). I mean, 500 updates taking 2.5s is nowhere near the beta's 10,000; extrapolate and imagine how long it would take to complete 10,000 operations at that pace. It is true that I did not run my ops in parallel, but still!
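(Since a single batch is capped at 500 operations, larger jobs have to be split anyway, and the chunks can be committed concurrently rather than sequentially. A sketch, not code from this thread: `db` is assumed to be an initialized Admin SDK Firestore instance, and committing chunks in parallel assumes you don't need the batches applied in order.)

```javascript
// Split an array into chunks of at most `size` elements.
function chunk (arr, size) {
  const out = []
  for (let i = 0; i < arr.length; i += size) {
    out.push(arr.slice(i, i + size))
  }
  return out
}

// Commit `docs` to collection `path` in concurrent batches of up to
// 500 writes each (Firestore's per-batch operation limit).
function writeInBatches (db, path, docs, idRef = 'id') {
  const ref = db.collection(path)
  const commits = chunk(docs, 500).map(group => {
    const batch = db.batch()
    group.forEach(item => batch.set(ref.doc(item[idRef]), item))
    return batch.commit()
  })
  return Promise.all(commits)
}
```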

Best regards,
Elie Zedeck


opy...@gmail.com

Feb 20, 2019, 11:28:11 AM
to Firebase Google Group
I ran an experiment using plain writes (not batched) and each still takes ~100ms. A read seems to take about the same.
I now wonder if there is some middleware that consistently adds a few tens of milliseconds to any CRUD operation going from my function to Firestore.

const handle = async (uid, data) => {
  const companyNumber = data.number
  console.time('read')
  const teamProfile = await firestoreDb.read(`team/${companyNumber}`)
  console.timeEnd('read')

  const data2 = _.assign({}, data, {
    random: Math.random().toString(36).substring(7)
  })

  console.time('write')
  return firestoreDb.write('junk/' + uid, data2).then(() => {
    return console.timeEnd('write')
  })
}
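(One thing worth noting about the snippet above: the write only depends on `data`, not on `teamProfile`, so the two round trips can overlap instead of running back to back, hiding one of the ~100ms latencies. A sketch assuming the same `firestoreDb` wrapper used in the thread:)

```javascript
// Issue the independent read and write concurrently; total wall time
// becomes roughly max(read, write) instead of read + write.
async function handleConcurrent (firestoreDb, uid, data) {
  const data2 = Object.assign({}, data, {
    random: Math.random().toString(36).substring(7)
  })
  const [teamProfile] = await Promise.all([
    firestoreDb.read(`team/${data.number}`),
    firestoreDb.write('junk/' + uid, data2)
  ])
  return teamProfile
}
```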


[Attachment: Screenshot 2019-02-20 at 16.03.20.png]


opy...@gmail.com

Feb 26, 2019, 4:33:41 PM
to Firebase Google Group
To any Googlers reading this... in your opinion, is the ~100ms overhead slow, or not?

Samuel Stern

Feb 26, 2019, 5:28:44 PM
to Firebase Google Group
Hi there,

I just tracked down the internal targets, and while I can't share much, we consider 100ms for a write to be in the acceptable range.

- Sam


opy...@gmail.com

Feb 27, 2019, 10:59:31 AM
to Firebase Google Group
Thanks for the reply.
I say ~100ms, but looking at the numbers (see original post) it fluctuates between 100 and 200ms. Is that still an acceptable range?

Samuel Stern

Feb 27, 2019, 1:45:46 PM
to Firebase Google Group
Yes, for many workloads 100-200ms is still reasonable and does not point to anything going wrong. You can probably speed up writes by turning off some single-field indexes and making your documents smaller, but I wouldn't expect anything too dramatic.

- Sam
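(For reference, Sam's single-field-index suggestion can be done from the Firestore console, or, assuming the project is deployed with the Firebase CLI, via exemptions in firestore.indexes.json. The `junk` collection and `random` field below are just the example names from earlier in the thread; an empty `indexes` array disables the automatic single-field indexes for that field.)

```json
{
  "indexes": [],
  "fieldOverrides": [
    {
      "collectionGroup": "junk",
      "fieldPath": "random",
      "indexes": []
    }
  ]
}
```

Deployed with `firebase deploy --only firestore:indexes`.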
