but I cannot find more info in the manual about this cursor load.
I'm concerned about loading 200 GB; with such a large amount of data,
is there any limitation?
I'm also not sure, from a performance point of view, whether there is
any difference between loading from a cursor and loading from flat
files on the same box?
Is there any memory limitation for a cursor load?
If anybody could share some experience with it, I would greatly
appreciate it.
Thanks.
Make sure you use the +c option when running the script, or otherwise
ensure that you don't commit between declaring and opening the cursor
and starting the load, or the load will fail to start.
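For reference, +c turns off the CLP's autocommit, so the declared cursor is still open when LOAD starts. A minimal sketch (database, schema, and table names are made up):

```sql
-- Invoke with:  db2 +c -tvf load_from_cursor.sql   (+c = autocommit off)
-- With autocommit on, the CLP commits after each statement, which
-- closes the declared cursor before LOAD can use it.
CONNECT TO mydb;
DECLARE c1 CURSOR FOR SELECT * FROM staging.orders;
-- no COMMIT here: a commit at this point would close c1
LOAD FROM c1 OF CURSOR INSERT INTO prod.orders;
COMMIT;
```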
Maybe you're missing something. Loading from a cursor lets you
declare a cursor for a SELECT statement (i.e., the data is already in
the database) and then load from that cursor directly into another
table. This saves you the step of exporting the data to a flat file,
and it is MUCH more efficient than 'insert into ... select ...' when
loading large amounts of data.
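Concretely, the cursor load replaces both the export step and the fully logged insert. A sketch with made-up table names (NONRECOVERABLE is optional and shown only to skip backup-pending handling; check whether it suits your recovery requirements):

```sql
-- Instead of the fully logged:
--   INSERT INTO prod.sales SELECT * FROM staging.sales;
-- a cursor load moves the rows through the load utility:
DECLARE sales_cur CURSOR FOR
    SELECT sale_id, sale_date, amount FROM staging.sales;
LOAD FROM sales_cur OF CURSOR INSERT INTO prod.sales NONRECOVERABLE;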
If your data is in a flat file, you do not have the option of loading
from a cursor.
There are no memory restrictions on this -- load simply reads from the
cursor as it traverses the result set.
About the '+c' option, I'm wondering what that's for?
I did not find it in the manual...
I tried the declare and load without '+c' and it ran successfully,
but only with a small amount of data...
Could you give me some more explanation of that?
Thanks.
I agree with you, there probably aren't memory restrictions on this...
Also, comparing loading from a file with loading from a cursor: a load
from a file must incur physical read I/O, but a load from a cursor
incurs no physical read I/O just to read the cursor. So I would
suppose a load from a file should be slower than a load from a cursor
from an I/O point of view...
You said a load from a file is faster than a load from a cursor; do
you have any idea why?
Hi, we use load from cursor quite often in a DPF context, because it is
easier to alter big fact tables this way: we first load into a table in a
temporary schema, recreate the fact table to fit our needs (e.g., converting
it to MDC), and then load back from the temp schema. This way the database
holds the duplicated data - we cannot be sure an export would fit in the
filesystem, and doing two loads from cursor is faster than exporting plus
loading from a file.
You might consider increasing UTIL_HEAP_SZ, especially if you load into
MDC tables.
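The workflow above might look roughly like this (all names, columns, and the heap size are illustrative, not our actual DDL):

```sql
-- 1. Copy the fact table aside into a temp schema
DECLARE fact_cur CURSOR FOR SELECT * FROM dw.fact_sales;
LOAD FROM fact_cur OF CURSOR INSERT INTO tmp.fact_sales;

-- 2. Recreate the fact table with the new layout, e.g. as MDC
DROP TABLE dw.fact_sales;
CREATE TABLE dw.fact_sales (
    sale_id    BIGINT,
    sale_month INT,
    region     INT,
    amount     DECIMAL(12,2)
) ORGANIZE BY DIMENSIONS (sale_month, region);

-- 3. Load it back from the temp copy
DECLARE back_cur CURSOR FOR SELECT * FROM tmp.fact_sales;
LOAD FROM back_cur OF CURSOR INSERT INTO dw.fact_sales;

-- Give the load utility more memory (4000 pages is just an example):
--   db2 UPDATE DB CFG FOR mydb USING UTIL_HEAP_SZ 4000
```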
HTH, Jean-Marc