Hi Gopal and thanks for getting back to me and confirming my assumption.
I'll give some context about the use case, maybe it could be an interesting scenario.
We're building a CRUD-style service for a somewhat complex Domain Object (DO) (some call it a 'Business Object'). Each of these DOs can be several gigabytes in size, and the service stores the data in a combination of Cloud Datastore and Cloud Storage. Due to the nature of these DOs, modifications have to be applied strictly in order. Sometimes we get bursts of writes/updates for a given DO, so to support some level of parallel updates/writes we store operations in a write buffer and process them in the background, sorted by timestamp.

We still want strong consistency (or at least minimal latency) from when an UPDATE/WRITE finishes until it's visible through the READ APIs, so we've implemented a system where a READ checks both the operations in the write buffer (which rules out using Pub/Sub as the write buffer) and the data in permanent storage, and applies the buffered operations on the fly. This has worked well so far.
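To make the read path concrete, here's a minimal sketch of the merge-on-read idea. The two dicts stand in for Datastore/Cloud Storage and the write buffer, and the operation shape (a timestamped field update) is a hypothetical simplification of our real operation format:

```python
# Stand-ins for the real stores; in production these are Cloud Datastore /
# Cloud Storage and the write buffer respectively.
PERMANENT_STORE = {}   # do_id -> dict of fields
WRITE_BUFFER = {}      # do_id -> list of pending ops


def enqueue_op(do_id, ts, field, value):
    """Buffer an update instead of applying it to permanent storage directly."""
    WRITE_BUFFER.setdefault(do_id, []).append(
        {"ts": ts, "field": field, "value": value}
    )


def read(do_id):
    """Consistent read: base object plus buffered ops, applied in timestamp order."""
    obj = dict(PERMANENT_STORE.get(do_id, {}))
    for op in sorted(WRITE_BUFFER.get(do_id, []), key=lambda op: op["ts"]):
        obj[op["field"]] = op["value"]
    return obj
```

The background processor drains the buffer in the same timestamp order before persisting, which is what keeps the strict ordering requirement intact.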
We use Memorystore as a cache in several parts of the system and it's doing an excellent job, but I was wondering if it could also serve as the write buffer described above (instead of Cloud Storage, as we currently do). This would improve the performance of both the service and the background processing, in addition to reducing cost.
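The shape I had in mind was a Redis sorted set per DO, scored by timestamp, so the buffer is always readable in apply order. A hedged sketch below; `FakeSortedSetClient` is an in-process stand-in mirroring the subset of redis-py calls used (`zadd`, `zrangebyscore`) so it runs without a server, but against a real Memorystore instance you'd swap in a `redis.Redis(host=..., port=6379)` client:

```python
import json


class FakeSortedSetClient:
    """In-process stand-in for the redis-py calls used below."""

    def __init__(self):
        self._sets = {}  # key -> list of (score, member)

    def zadd(self, key, mapping):
        # redis-py signature: zadd(name, {member: score, ...})
        self._sets.setdefault(key, []).extend(
            (score, member) for member, score in mapping.items()
        )

    def zrangebyscore(self, key, min_score, max_score):
        items = [(s, m) for s, m in self._sets.get(key, [])
                 if min_score <= s <= max_score]
        return [m for s, m in sorted(items)]


def buffer_write(client, do_id, ts, op):
    # Score by timestamp so ZRANGEBYSCORE returns ops in apply order.
    client.zadd(f"write-buffer:{do_id}", {json.dumps(op): ts})


def pending_ops(client, do_id):
    # Ordered, non-destructive read for the READ path; the background
    # processor would additionally remove ops after persisting them.
    return [json.loads(m) for m in
            client.zrangebyscore(f"write-buffer:{do_id}",
                                 float("-inf"), float("inf"))]
```

Unlike Pub/Sub, this lets READs inspect the buffer without consuming it, which is the property we need.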
For the moment, we only use Memorystore as a cache-aside for certain READ operations and invalidate the cached elements whenever we have a WRITE operation. We've been thinking about updating the cache on both WRITE and READ operations, as this would largely achieve the same effect as using Memorystore as a write buffer, but we haven't found a way to resolve the potential race condition.
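For clarity, the race I mean is the classic update-on-READ hazard: a reader that loaded an old snapshot can overwrite a newer value that a concurrent WRITE just put in the cache. The sketch below simulates the interleaving sequentially (plain dicts stand in for the cache and store), and the version-tag guard in `cache_fill` is one possible mitigation, not something we've adopted:

```python
store = {"do1": ("v1", 1)}   # value plus monotonically increasing version
cache = {}


def write(do_id, value):
    _, old_version = store[do_id]
    store[do_id] = (value, old_version + 1)
    cache[do_id] = store[do_id]           # update cache on WRITE


def cache_fill(do_id, loaded):
    # Update cache on READ, but refuse to clobber a newer version.
    if do_id not in cache or cache[do_id][1] < loaded[1]:
        cache[do_id] = loaded


# The racy interleaving: reader loads from the store, a write lands,
# then the reader tries to fill the cache with its stale snapshot.
stale_snapshot = store["do1"]    # reader's load: ("v1", 1)
write("do1", "v2")               # store and cache now hold ("v2", 2)
cache_fill("do1", stale_snapshot)  # guard rejects the stale fill
```

Note that in a real deployment the check-then-set in `cache_fill` must itself be atomic (e.g. a Lua script or transaction on the Redis side), otherwise the race just moves inside it.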
T.