When you say "higher in frequency" what do you mean? How many queries
per second? How many games total will be active in your system at
once?
Keep in mind that your query will be eventually consistent. When
someone joins a game, that fact will not become visible to queries
right away - often not for several seconds. But your query is
"eventual" by nature anyway, since someone could have joined the game
immediately after the query ran, so presumably you already handle this
case.
One immediate improvement is to convert your regular query to a
keys-only query followed by a batch fetch. If you have caching, you
only pay for the keys-only results (1/7th the cost). I see you're
using Java; if you use Objectify 4, this is done for you automatically
with the @Cache annotation and "hybrid" queries (the default for
@Cache entities).
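A minimal sketch of what that looks like, assuming an Objectify-registered Game entity - the entity, field names, and the MAX_PLAYERS threshold are illustrative, not your actual code:

```java
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Cache;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Index;
import java.util.Collection;
import java.util.List;

// Hypothetical entity: with @Cache, Objectify 4 runs queries in
// "hybrid" mode automatically (keys-only query + batch fetch via memcache).
@Entity
@Cache
class Game {
    @Id Long id;
    @Index int numPlayers;
}

class OpenGames {
    static final int MAX_PLAYERS = 4;  // illustrative threshold

    // Manual equivalent of the hybrid query: a cheap keys-only query,
    // then a batch get that is served from cache when the entities are hot.
    static Collection<Game> openGames() {
        List<Key<Game>> keys = ofy().load().type(Game.class)
                .filter("numPlayers <", MAX_PLAYERS)
                .keys().list();
        return ofy().load().keys(keys).values();
    }
}
```

With @Cache in place you don't need the manual version; it's shown here only to make the cost structure visible.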
The next step is to cache the keys-only query itself in memcache for a
limited amount of time. It would be nice to expire it explicitly when
you know it changes - which is only when a game is created or a game
fills up (not for simple add or leave game). The problem is, there's
no guarantee that the subsequent query (which will populate the cache)
does not contain stale eventual data. I can't think of a way around
this other than just making sure the expiry period is short so it gets
refreshed; you're just trying to reduce the query count, not eliminate
it.
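One way to sketch that, using the App Engine MemcacheService - the cache key, expiry, and query here are assumptions for illustration:

```java
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.google.appengine.api.memcache.Expiration;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;
import com.googlecode.objectify.Key;
import java.util.ArrayList;
import java.util.List;

class OpenGameCache {
    static final String CACHE_KEY = "open-game-keys";  // illustrative name
    static final int EXPIRY_SECONDS = 30;  // keep short to bound staleness
    static final int MAX_PLAYERS = 4;      // illustrative threshold

    @SuppressWarnings("unchecked")
    static List<Key<Game>> openGameKeys() {
        MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();
        List<Key<Game>> keys = (List<Key<Game>>) memcache.get(CACHE_KEY);
        if (keys == null) {
            // Cache miss: run the (possibly stale) keys-only query and
            // cache a plain serializable copy of the result.
            keys = new ArrayList<>(ofy().load().type(Game.class)
                    .filter("numPlayers <", MAX_PLAYERS)
                    .keys().list());
            memcache.put(CACHE_KEY, keys,
                    Expiration.byDeltaSeconds(EXPIRY_SECONDS));
        }
        return keys;
    }

    // Call this when a game is created or fills up; the short expiry
    // still covers the staleness cases this misses.
    static void invalidate() {
        MemcacheServiceFactory.getMemcacheService().delete(CACHE_KEY);
    }
}
```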
You can't query by playerNames.size(), but you can store an indexed
field numPlayers in the entity.
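For example (a sketch with the Objectify annotations elided and made-up field names), keep the counter in lockstep with the player list:

```java
import java.util.ArrayList;
import java.util.List;

// In the real entity, numPlayers would carry @Index so you can
// query with filter("numPlayers <", N); playerNames itself needn't
// be indexed at all.
class Game {
    Long id;  // @Id in the real entity
    List<String> playerNames = new ArrayList<>();
    int numPlayers;  // @Index in the real entity

    void addPlayer(String name) {
        playerNames.add(name);
        numPlayers = playerNames.size();
    }

    void removePlayer(String name) {
        playerNames.remove(name);
        numPlayers = playerNames.size();
    }
}
```

As long as every mutation goes through these methods (inside a transaction), the denormalized count can never drift from the list.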
There's no way to select specific fields out of an entity (with the
exception of projection queries, which are very limited and probably
not what you are after), so you can't fetch everything-but-gameState.
This probably won't affect price (a read op is a read op no matter the
size of the entity), but it could affect latency (more RPCs to get the
same bit of data). The usual solution is simply to put the gameState
blob into a separate entity with the same id.
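A sketch of that split, assuming Objectify (names are illustrative):

```java
import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

// Lightweight entity returned by the list query; no big blob here.
@Entity
class Game {
    @Id Long id;
    // ...small, frequently-listed fields...
}

// Sibling entity holding the blob, keyed by the same id and loaded
// only when a client actually needs the full game state.
@Entity
class GameState {
    @Id Long id;
    byte[] state;

    static Key<GameState> keyFor(long gameId) {
        return Key.create(GameState.class, gameId);
    }
}
```

Because both entities share an id, going from one to the other is a cheap key construction rather than a query.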
Jeff