|What's the best way to process a large query result list in Java||Scott Murphy||3/2/12 6:00 PM|
Let's say, for instance, I want to send an email to every user in California.
So I run the following query:
SELECT * FROM User WHERE state = 'CA'
As far as I know I have the following two options:
1. Use the Mapper API to iterate through all my users and send an email to just
the ones in California.
This sucks because if I have 40,000 users and only 10 of them are in California,
I have to do 40,000 reads just to process those 10 users.
2. Use the Task Queue API and iterate over a URL sequence.
This seems to be a pain because there is no generalized framework to
run this as a job. I need to properly handle random server 500 errors,
etc., and write code to keep track of each job to see whether it finished.
A preferred approach would be to feed the Mapper API a query
as input, but I don't think this is possible.
Do I have any other options?
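For reference, the resumable-pagination pattern that option 2 requires can be sketched in plain Java. All names here (PagedEmailJob, the int checkpoint) are stand-ins; on App Engine the checkpoint would be a serialized datastore Cursor carried in the next task's payload, and each loop iteration would be a task-queue task re-enqueueing itself:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of option 2: process a large result set in fixed-size pages,
 * checkpointing after each page so a failed task can resume where it
 * left off. Plain Java; no App Engine dependencies.
 */
public class PagedEmailJob {
    static final int PAGE_SIZE = 100;

    /** Processes one page starting at the checkpoint; returns the next checkpoint, or -1 when done. */
    static int processPage(List<String> userEmails, int checkpoint, List<String> sent) {
        int end = Math.min(checkpoint + PAGE_SIZE, userEmails.size());
        for (int i = checkpoint; i < end; i++) {
            sent.add(userEmails.get(i));            // stand-in for sendEmail(...)
        }
        return end < userEmails.size() ? end : -1;  // -1 = no more pages
    }

    public static void main(String[] args) {
        List<String> users = new ArrayList<>();
        for (int i = 0; i < 250; i++) users.add("user" + i + "@example.com");
        List<String> sent = new ArrayList<>();
        int checkpoint = 0;
        // Each iteration stands in for one task-queue task.
        while (checkpoint != -1) {
            checkpoint = processPage(users, checkpoint, sent);
        }
        System.out.println(sent.size()); // 250
    }
}
```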
|Re: What's the best way to process a large query result list in Java||Mark Nuttall-Smith||3/11/12 4:55 AM|
If it's possible, you could maintain a set of State entities, each holding a list of the users in that state.
To get the CA users you just fetch the CA State entity and do a batch get with all the keys it holds.
Obviously the State entity would need updating transactionally whenever a user is added or deleted.
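The State-entity idea can be sketched with plain-Java maps standing in for datastore entities (all names hypothetical; on App Engine the two writes in addUser would go in one transaction, and usersIn would be a single batch get):

```java
import java.util.*;

/**
 * Sketch of the denormalized index: keep a per-state set of user keys so
 * fetching all CA users is one index lookup plus a batch get, with the
 * index maintained whenever a user is added or removed.
 */
public class StateIndex {
    private final Map<String, String> users = new HashMap<>();        // user key -> email
    private final Map<String, Set<String>> byState = new HashMap<>(); // state -> user keys

    public void addUser(String key, String email, String state) {
        users.put(key, email);
        byState.computeIfAbsent(state, s -> new HashSet<>()).add(key);
    }

    public void removeUser(String key, String state) {
        users.remove(key);
        Set<String> keys = byState.get(state);
        if (keys != null) keys.remove(key);
    }

    /** "Batch get": resolve every key held by the state's index entry. */
    public List<String> usersIn(String state) {
        List<String> result = new ArrayList<>();
        for (String key : byState.getOrDefault(state, Collections.emptySet())) {
            result.add(users.get(key));
        }
        return result;
    }
}
```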
Hope it helps,
|Re: What's the best way to process a large query result list in Java||Scott Murphy||3/11/12 1:14 PM|
Yeah, that would work... but that's a lot of bookkeeping. The problem is I have numerous
cases with the same type of generalized problem. I would prefer to iterate over a query, and
was just wondering if there is any generalized way of doing this without having to treat
each situation differently.
Whether it's a datastore query, the blobstore, or a log service query, I just want a generalized way
to take paginated input and treat it like a job that can maintain state and resume from
a particular point upon failure.
I don't know if there is anything in the Pipeline API that can help me with this, but running
mappers over entire data sets when you already have indexes built just seems terribly inefficient.
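The generalized abstraction described above (any paginated source driven by a job that checkpoints its cursor between pages) could look roughly like this; all names are hypothetical, and it's plain Java with no App Engine dependencies:

```java
import java.util.*;
import java.util.function.Consumer;

/**
 * Sketch of a generic resumable job: one interface for any paginated
 * source (datastore query, blobstore, log service), driven to completion
 * with the cursor recorded after every page so a crash can resume.
 */
public class ResumableJob {
    interface PaginatedSource<T> {
        /** Returns one page starting at cursor; a null cursor means start from the beginning. */
        Page<T> fetch(String cursor);
    }

    static class Page<T> {
        final List<T> items;
        final String nextCursor; // null = source exhausted
        Page(List<T> items, String nextCursor) { this.items = items; this.nextCursor = nextCursor; }
    }

    /** Drives the source to completion, returning how many items were processed. */
    static <T> int run(PaginatedSource<T> source, String savedCursor, Consumer<T> work) {
        int processed = 0;
        String cursor = savedCursor;
        while (true) {
            Page<T> page = source.fetch(cursor);
            for (T item : page.items) { work.accept(item); processed++; }
            if (page.nextCursor == null) break;
            cursor = page.nextCursor; // persist this checkpoint so a crash resumes here
        }
        return processed;
    }

    /** Example source: pages over an in-memory list, cursor encoded as a string offset. */
    static PaginatedSource<Integer> listSource(List<Integer> data, int pageSize) {
        return cursor -> {
            int start = cursor == null ? 0 : Integer.parseInt(cursor);
            int end = Math.min(start + pageSize, data.size());
            String next = end < data.size() ? Integer.toString(end) : null;
            return new Page<>(data.subList(start, end), next);
        };
    }
}
```

Passing a previously saved cursor to run() is the resume-after-failure case: the job picks up at that page instead of rereading everything.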
|Re: What's the best way to process a large query result list in Java||Mike Aizatsky||3/13/12 3:52 PM|
Yes, it's impossible right now to feed a query into the Mapper.
You'll have to copy and modify three classes:
Modify the query in createIterator().
It should be really straightforward. I can take a look at your code if you'd like.
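Since the actual class names are elided above, here is a rough plain-Java stand-in for the kind of change being described: the copied reader class builds its iterator in createIterator(), and that is where the state filter gets applied. In the real code the filter would be attached to the datastore query itself (so the datastore does the filtering), not to an in-memory map as simulated here:

```java
import java.util.*;

/**
 * Hypothetical stand-in for a copied appengine-mapreduce input reader.
 * The one modification is inside createIterator(): iterate only the
 * entities matching the filter instead of every User entity.
 */
public class FilteredUserReader {
    private final Map<String, String> datastore; // stand-in: user email -> state
    private final String stateFilter;

    public FilteredUserReader(Map<String, String> datastore, String stateFilter) {
        this.datastore = datastore;
        this.stateFilter = stateFilter;
    }

    /** Originally iterated every User entity; modified to apply the filter. */
    public Iterator<String> createIterator() {
        List<String> matching = new ArrayList<>();
        for (Map.Entry<String, String> e : datastore.entrySet()) {
            if (stateFilter.equals(e.getValue())) matching.add(e.getKey());
        }
        return matching.iterator();
    }
}
```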
|Re: What's the best way to process a large query result list in Java||Ronoaldo José de Lana Pereira||3/13/12 4:11 PM|
In your experience, does it make sense to use the "__scatter__" property in a special index to try sharding the query (even poor sharding)? I just created a test index that includes the reserved __scatter__ property, and it seems to return the right results when performing a query, but I got stuck on where I should change the code to use this query and perform the sharding...
On Tuesday, March 13, 2012 at 7:52:01 PM UTC-3, Mike Aizatsky wrote:
|Re: What's the best way to process a large query result list in Java||Mike Aizatsky||3/13/12 6:07 PM|
Yes, it does. You should modify line 100 in
The only caveat is that you need a special index which includes both
the necessary properties and __scatter__ (to generate the splits).
But I might be wrong here; I've never actually tried it.
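The split-point selection that __scatter__ enables can be sketched as follows: sample keys from the (filtered) index, sort them, and take evenly spaced samples as shard boundaries. This is plain Java with long keys standing in for datastore keys; on App Engine the samples would come from a keys-only query ordered by __scatter__:

```java
import java.util.*;

/**
 * Sketch of scatter-based sharding: given a random sample of keys,
 * pick evenly spaced boundaries so each shard covers a roughly equal
 * slice of the key space.
 */
public class ScatterSplitter {
    /** Returns shardCount - 1 boundary keys dividing the sampled key space into shards. */
    static List<Long> splitPoints(List<Long> sampledKeys, int shardCount) {
        List<Long> sorted = new ArrayList<>(sampledKeys);
        Collections.sort(sorted);
        List<Long> boundaries = new ArrayList<>();
        for (int i = 1; i < shardCount; i++) {
            boundaries.add(sorted.get(i * sorted.size() / shardCount));
        }
        return boundaries;
    }
}
```

Each shard then runs a range query between consecutive boundaries, which is what makes the sharding "poor but workable": quality depends only on how representative the sample is.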
|Re: What's the best way to process a large query result list in Java||Ronoaldo José de Lana Pereira||3/14/12 5:01 AM|
Great! Thanks for your tips, I'll give it a try. The needed indexes are probably cheaper than iterating over the whole datastore every time we need to map only portions of it.
On Tuesday, March 13, 2012 at 10:07:18 PM UTC-3, Mike Aizatsky wrote:
|Re: What's the best way to process a large query result list in Java||Ronoaldo Pereira||3/24/12 6:27 PM|
I'm almost there! After a bit of hacking I have some code that allows testing the required indexes. I'll test in production and report back when finished.
I uploaded a code review; do you have some time to look into it? http://codereview.appspot.com/5905049/ Just to make sure I'm on the right track... I still need to write a good end-to-end test, but it looks like I haven't broken anything yet.
|Re: What's the best way to process a large query result list in Java||Mike Aizatsky||4/2/12 1:48 PM|
Yes, I think this would work.
|Re: What's the best way to process a large query result list in Java||Amit Sangani||9/12/13 10:28 AM|
We have the exact same requirement - to filter before feeding the Mapper. What's the simplest way to get this accomplished?
|Re: What's the best way to process a large query result list in Java||Ronoaldo Pereira||9/16/13 7:39 AM|
I posted the patch against the old version of Map/Reduce, so it may not work with the new (recommended!) version. Anyway, the main requirement for good sharding is to create the custom index as explained by Mike. After that, you can take a look at this implementation, where the sharded query is built, and try to extend that Job class. You can adapt my patch to override the method that creates the split points, so it adds parameters to the query.
Another option is to use only the Pipelines API to run your processing. You can start iterating over the results or key ranges in a Pipeline, and then pass the results to the child jobs of that Pipeline. The rationale is that if you are already querying a subset of your data, you may not need map/reduce itself to iterate over it: you can implement a generator job that splits the data into, say, 100 child jobs, each one processing 1,000 records, giving you reasonable performance. Obviously, you can tweak the numbers or calculate them based on the expected result set size.
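The generator-plus-child-jobs idea can be sketched like this (plain Java; FanOutJob and the inline loop are stand-ins for a real Pipeline generator spawning child jobs, and the 100,000 / 1,000 numbers mirror the example above):

```java
import java.util.*;

/**
 * Sketch of the fan-out pattern: a generator slices the result set into
 * fixed-size ranges and hands each range to a child job. Here the child
 * jobs run inline; in a Pipeline each range would be a separate job.
 */
public class FanOutJob {
    /** Splits [0, total) into consecutive ranges of at most chunk items. */
    static List<int[]> ranges(int total, int chunk) {
        List<int[]> out = new ArrayList<>();
        for (int start = 0; start < total; start += chunk) {
            out.add(new int[] { start, Math.min(start + chunk, total) });
        }
        return out;
    }

    public static void main(String[] args) {
        int processed = 0;
        for (int[] r : ranges(100_000, 1000)) {   // 100 child jobs of 1,000 records each
            processed += r[1] - r[0];             // stand-in for the child job's work
        }
        System.out.println(processed); // 100000
    }
}
```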
Let me know if you want to discuss more particular cases by sending a message on G+ or posting here in the groups.
On Thursday, September 12, 2013 at 2:28:05 PM UTC-3, Amit Sangani wrote: