Cloud Firestore: UnavailableException on batch write


Sam S

Dec 5, 2019, 6:40:08 PM
to Firebase Google Group
Hi,

I'm working on a service that will write close to 300,000 entries to a Firestore collection, in batches of 500. There are no indexes on the collection, and I've been seeing this error come up consistently after writing about 10k entries:

java.util.concurrent.ExecutionException: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:552)
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:533)
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:84)
    at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)


This is the code I have:


WriteBatch batch = firestoreClient.batch();
CollectionReference collectionReference = firestoreClient.collection(collectionName);
int count = 0;
for (String key : values.keySet()) {
    DocumentReference documentReference = collectionReference.document(key);
    batch.set(documentReference, values.get(key));
    count++;
    if (count % 500 == 0) {
        batch.commit().get();
        batch = firestoreClient.batch();
    }
}
batch.commit().get();


Can someone please help with this?


Thanks,

Sam

Kato Richardson

Dec 7, 2019, 3:06:12 PM
to Firebase Google Group
Hi Sam,

I'm not sure what batch.commit().get() is meant to accomplish here, but it's probably not doing what you think; batch.commit() is probably enough or maybe you want batch.commit().then(...).

Those errors don't look related to the Firebase SDKs. There may be some more details in the stack trace or the networking traffic (e.g. error headers) that you could track down.

Most likely, you're running into writes-per-second limits and just need to throttle your writes over a longer time frame, rather than queueing up all 300K at once. For example, if you limited yourself to around 10K writes at a time and paused between chunks, you'd probably get rid of the issues here. But this is just a guess given the limited insight provided by that code snippet and error.
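To illustrate the pacing idea, here's a minimal sketch of a throttle that spaces out batch commits to stay under a target write rate. The Firestore commit itself is elided (the `// batch.commit().get()` comment marks where it would go), and the class name, constructor parameters, and the 500 writes/sec target are all made-up for illustration; only the sleep-based pacing logic is shown:

```java
// Hypothetical sketch: pace commits so the average write rate stays
// under a chosen cap. Not the official SDK API - just the timing logic.
public class BatchThrottle {
    private final long minMillisBetweenCommits;
    private long lastCommitAt = 0;

    public BatchThrottle(int batchSize, int maxWritesPerSecond) {
        // e.g. 500 writes per batch at 500 writes/sec => one commit per second
        this.minMillisBetweenCommits = (long) batchSize * 1000L / maxWritesPerSecond;
    }

    // Sleep just long enough that commits are at least
    // minMillisBetweenCommits apart.
    public void awaitTurn() throws InterruptedException {
        long now = System.currentTimeMillis();
        long waitFor = lastCommitAt + minMillisBetweenCommits - now;
        if (waitFor > 0) {
            Thread.sleep(waitFor);
        }
        lastCommitAt = System.currentTimeMillis();
    }

    public static void main(String[] args) throws InterruptedException {
        BatchThrottle throttle = new BatchThrottle(500, 500);
        long start = System.currentTimeMillis();
        for (int i = 0; i < 3; i++) {
            throttle.awaitTurn();
            // batch.commit().get() would go here
        }
        long elapsed = System.currentTimeMillis() - start;
        // Three paced turns = two enforced one-second gaps.
        System.out.println(elapsed >= 2000 ? "throttled" : "too fast");
    }
}
```

You'd call awaitTurn() right before each batch.commit().get() in the loop from the original snippet. A production version would also want retry with exponential backoff on UNAVAILABLE, since transient gRPC errors are expected at this volume.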

☼, Kato



--

Kato Richardson | Developer Programs Eng | kato...@google.com | 775-235-8398
