Nick, thanks for responding. I'll add a lot more detail and test data
here.
I'm doing three fetches: one to get the user data, one to get his
starred items, and one to get his own items. I've broken it down to
examine each one individually. The user fetch is fast:
userEntity = datastore.get(
    KeyFactory.createKey("User", makeUserKeyName(getCurrentUserId())));
/service?a=g&t=u 200 56ms 27cpu_ms 12api_cpu_ms 0kb
/service?a=g&t=u 200 35ms 21cpu_ms 12api_cpu_ms 0kb
/service?a=g&t=u 200 26ms 15cpu_ms 12api_cpu_ms 0kb
/service?a=g&t=u 200 49ms 15cpu_ms 12api_cpu_ms 0kb
/service?a=g&t=u 200 37ms 15cpu_ms 12api_cpu_ms 0kb
/service?a=g&t=u 200 27ms 15cpu_ms 12api_cpu_ms 0kb
The saved-items query is not so fast. Each item has a list property
"s", which is a list of all user IDs that have saved that item. It's
served by a dedicated index, s asc + i asc.
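For reference, that composite index would be declared in
datastore-indexes.xml roughly like this (a sketch; in practice the dev
server autogenerates the entry):

```xml
<datastore-indexes autoGenerate="true">
  <!-- Composite index serving the saved-items query: filter on "s", sort on "i" -->
  <datastore-index kind="Item" ancestor="false">
    <property name="s" direction="asc"/>
    <property name="i" direction="asc"/>
  </datastore-index>
</datastore-indexes>
```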
Query query = new Query("Item");
query.addFilter("s", Query.FilterOperator.EQUAL, getCurrentUserId());
query.addSort("i");
List<Entity> items =
    datastore.prepare(query).asList(withLimit(REQUEST_SIZE));
/service?a=g&t=ts 200 97ms 710cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=ts 200 79ms 711cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=ts 200 84ms 705cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=ts 200 87ms 716cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=ts 200 84ms 696cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=ts 200 72ms 697cpu_ms 678api_cpu_ms 0kb
The query to get the user's items is similarly slow. It is a query on
a string property "u", ordered by "i". It's served by a dedicated
index, u asc + i asc.
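The corresponding datastore-indexes.xml entry would look roughly like
this (again a sketch, normally autogenerated):

```xml
<!-- Composite index serving the own-items query: filter on "u", sort on "i" -->
<datastore-index kind="Item" ancestor="false">
  <property name="u" direction="asc"/>
  <property name="i" direction="asc"/>
</datastore-index>
```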
Query query = new Query("Item");
query.addFilter("u", Query.FilterOperator.EQUAL, getCurrentUserId());
query.addSort("i");
List<Entity> items =
    datastore.prepare(query).asList(withLimit(REQUEST_SIZE));
/service?a=g&t=tm 200 71ms 699cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=tm 200 90ms 710cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=tm 200 73ms 699cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=tm 200 67ms 692cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=tm 200 83ms 698cpu_ms 678api_cpu_ms 0kb
/service?a=g&t=tm 200 80ms 722cpu_ms 678api_cpu_ms 0kb
Adding the sort does not seem to add any extra time, nor should it,
given the index type.
Anyway, put the three together and this is what you get:
/service?a=g 200 929ms 1731cpu_ms 1368api_cpu_ms 24kb
/service?a=g 200 159ms 1423cpu_ms 1368api_cpu_ms 24kb
/service?a=g 200 171ms 1420cpu_ms 1368api_cpu_ms 24kb
/service?a=g 200 178ms 1414cpu_ms 1368api_cpu_ms 24kb
/service?a=g 200 146ms 1405cpu_ms 1368api_cpu_ms 24kb
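As a back-of-envelope sanity check, the combined request's CPU charge
is about the sum of the three pieces (using typical cpu_ms values from
the logs above):

```java
// Rough check: does the combined cpu_ms match the sum of the parts?
// Values are typical cpu_ms figures taken from the log lines above.
public class CpuSum {
    public static void main(String[] args) {
        int userGet = 15;   // user fetch, typical cpu_ms
        int starred = 700;  // starred-items query, typical cpu_ms
        int own = 700;      // own-items query, typical cpu_ms
        // Prints 1415, in line with the ~1405-1423 cpu_ms logged for /service?a=g
        System.out.println(userGet + starred + own);
    }
}
```

So the combined numbers are consistent with the individual ones; the
puzzle is the cpu_ms-to-wall-clock ratio, not the breakdown.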
See, what really annoys me is this. Those requests are coming back in
170 ms, but I'm charged for 1400 ms of CPU time. How does that work?
If that CPU time were real, it should show up as blocking work on the
server during the request, right? Are there really nine servers all
churning in parallel for that whole time?
Thanks for your time,
Anthony Mills
On Jul 10, 11:07 am, Alex Popescu