Best practice to avoid "datastore transaction or write too big."


Amir Naor

Apr 13, 2018, 5:27:32 PM
to Google App Engine
One of my endpoints needs to handle storing anywhere from 1 to 5,000 entities belonging to a single entity group (EG). This needs to happen atomically, so that in case of an error everything is reverted to the original state.

The problem I'm running into is an occasional "datastore transaction or write too big." exception. Since all my entities are small, I assume I'm hitting the limit of 500 entities per commit (though at times I'm able to store 900 at once). Splitting the save into batches within the same transaction doesn't seem to help.

What is the best practice to handle that limit? Are transactions out of the question in this scenario? 
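A minimal sketch of the pattern described above, assuming the Python google-cloud-datastore client (the helper name is illustrative):

from google.cloud import datastore

client = datastore.Client()

def save_atomically(entities):
    # Every put issued inside the transaction is buffered and sent
    # as a single commit when the block exits, so splitting into
    # put_multi() batches here does not avoid the per-commit limit.
    with client.transaction():
        for i in range(0, len(entities), 500):
            client.put_multi(entities[i:i + 500])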

Katayoon (Cloud Platform Support)

Apr 14, 2018, 2:11:03 PM
to Google App Engine

That error message means a mutation exceeded system limits. Provided that your transaction does not span more than 25 entity groups, possible causes include:

- Exceeding 500 entities in a single commit
- Exceeding 10 MiB in a single transaction
- Exceeding 2 MiB total composite index size for an entity

The Cloud Datastore best practices documentation (https://cloud.google.com/datastore/docs/best-practices) covers what to keep in mind when using Cloud Datastore. If you need further technical support, I recommend posting the code snippet of the transaction handling the batch requests to Stack Overflow using the supported Cloud tags, since Google Groups is reserved for general product discussions rather than technical questions.
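If strict atomicity across the whole set is not required, one common workaround is to commit in independent chunks of at most 500 entities each. A minimal sketch, again assuming the Python google-cloud-datastore client (the helper name and chunk size are illustrative):

from google.cloud import datastore

client = datastore.Client()

def save_in_chunks(entities, chunk_size=500):
    # Each chunk is its own commit, so the sequence as a whole is
    # NOT atomic: a failure part-way through leaves earlier chunks
    # written and needs application-level cleanup or retry.
    for i in range(0, len(entities), chunk_size):
        client.put_multi(entities[i:i + chunk_size])

Keeping every commit at or below 500 entities and well under 10 MiB stays within the limits listed above.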

