On Thu, Sep 17, 2015 at 2:07 AM, Erik Cederstrand
<erik+...@cederstrand.dk> wrote:
>> On 16/09/2015 at 16.45, Mike Dewhirst <mi...@dewhirst.com.au> wrote:
>>
>> On 16/09/2015 9:53 AM, Erik Cederstrand wrote:
>>> issues because the prefetch query does something along the lines of
>>> "SELECT ... FROM lesson_subjects WHERE lesson_id IN
>>> [insane_list_of_lesson_ids]".
>>
>> I'm no expert so I'm wondering if len([insane_list_of_lesson_ids]) == "hundreds of thousands"?
>
> In my case, yes. I think the backend might process the query in chunks if the length of the SQL exceeds the max SQL query size.
Just curious, which RDBMS are you using? I remember that on MySQL the
advice used to be to populate a temporary table and do a JOIN instead
of very big `xxx IN (yyyy)` statements. I'm not sure if there's a hard
limit there, but anything over a few hundred items tends to do better
with a JOIN than with IN (...).
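Something along these lines (just a sketch; tmp_lesson_ids is a
made-up name, and I'm borrowing the lesson_subjects table from Erik's
example):

    -- Session-scoped temporary table holding the ids.
    CREATE TEMPORARY TABLE tmp_lesson_ids (lesson_id BIGINT PRIMARY KEY);
    INSERT INTO tmp_lesson_ids (lesson_id) VALUES (1), (2), (3) /* , ... */;

    -- JOIN against it instead of a huge IN (...) list.
    SELECT ls.*
    FROM lesson_subjects AS ls
    JOIN tmp_lesson_ids AS t ON t.lesson_id = ls.lesson_id;

MySQL temporary tables are visible only to the current connection and
are dropped automatically when it closes, so there's no cleanup to do.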
With Oracle, there _is_ a hard limit, and not a very high one: 1,000
expressions in a single IN list (ORA-01795). I've hit it frequently
when doing exploratory SQL in SQL Developer. For anything over a
thousand items, it's safer to go the temporary table route.
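On Oracle the same trick uses a global temporary table (again a
sketch, with the same made-up names):

    -- Created once as part of the schema; each session sees only its
    -- own rows, which vanish at commit with ON COMMIT DELETE ROWS.
    CREATE GLOBAL TEMPORARY TABLE tmp_lesson_ids (
        lesson_id NUMBER PRIMARY KEY
    ) ON COMMIT DELETE ROWS;

    INSERT INTO tmp_lesson_ids (lesson_id) VALUES (1);
    -- ... bulk-insert the remaining ids ...

    SELECT ls.*
    FROM lesson_subjects ls
    JOIN tmp_lesson_ids t ON t.lesson_id = ls.lesson_id;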
The only one where I haven't been able to hit any limit is PostgreSQL.
There I routinely do many thousands of items and it works beautifully.
I haven't personally tried "hundreds of thousands", but I guess that
as long as the statement fits within the maximum query-string size, it
will handle it.
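For what it's worth, on PostgreSQL you can also sidestep the giant
literal list entirely by shipping the ids as a single array value,
e.g. (same made-up lesson_subjects table as above):

    -- Equivalent to IN (...), but the ids form one array value that a
    -- driver can bind as a single parameter.
    SELECT ls.*
    FROM lesson_subjects ls
    WHERE ls.lesson_id = ANY (ARRAY[1, 2, 3 /* , ... */]);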
--
Javier