I'm doing a 9.40 upgrade to 11.5 on a smallish database, around 100g. I
want to unload and reload the data too, as the dbspace layout is dreadful.
timings so far:
dbexport 2h 20m
dbimport 10h
So I thought I'd try onunload.
onunload -t filename -s 204800000 -b 512 databasename took 0h 42m
The onload is still going, 15 hours later!! By the looks of it (the
onunload file was 62g), it still has about 6g to go ...
onstat -D ....
address   chunk/dbs  offset  page Rd    page Wr    pathname
67bf1028  9  9       0       24275898   24274725   /opt/informix/dbspaces3/appdbs_1
67bf1218  10 9       0       4784051    4700042    /opt/informix/dbspaces3/appdbs_2
Is onload supposed to be this slow? Was my blocksize choice (512) wrong?
Why is it doing so many page *reads*? (Just given myself an idea here:
perhaps it's a free-page-list thing, and if I reinitialise the db server it
wouldn't do the reads ...?)
15 hours and counting for a 62g unload is crazy. This is all on fairly
slow internal disk, but even the dbimport was much faster than this.
Any ideas? (myexport is on my list for sure, but I'd like to know what's
wrong with onload).
Thanks
Neil
| Hi Neil,

AFAIK you cannot use onunload/onload across different IFX versions... The natural difference in the structure of the data pages probably causes this slowness.

My suggestion: go to HPL ... much faster than dbimport/dbexport, but the creation of objects must be controlled manually. Check the command:

onpladm create project <prj> -D <database>

The easy/dummy way to do this is:

On the source server:
- Generate a full dbschema
- Export with HPL (using the project, or table by table)

On the target server:
- Create the dbschema on the target (all objects)
- Disable all FK constraints and triggers
- Load all data with HPL
- Enable all FKs and triggers
- Update statistics

And there are several ways to tune this process:
- use gzip on the unloaded data (avoiding write I/O or network I/O)
- write each table's data to several files in parallel
- HPL parameters (plconfig.std)
- PDQ configuration (for index creation)

And for the load, alternatively you can use "external tables" if you don't want to use HPL.

Copied from the v11.70 Information Center:
http://publib.boulder.ibm.com/infocenter/idshelp/v117/topic/com.ibm.mig.doc/ids_mig_192.htm
"You can use onunload and onload only if your answer to each of the following questions is yes. If your answer is no, you cannot use onunload and onload."
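The gzip tip above can be tried end to end with ordinary shell tools. This is a minimal sketch, not an HPL invocation: the printf stands in for HPL (or UNLOAD) writing rows to the pipe, and all the file names are made up. With HPL you would point the device array at the FIFO instead of a flat file, so only compressed data ever touches disk or the network.

```shell
# Unload through a FIFO into gzip; the writer is a stand-in for the real unload.
D=$(mktemp -d)
mkfifo "$D/customer.pipe"
gzip -c < "$D/customer.pipe" > "$D/customer.unl.gz" &   # drain the pipe, compressing
GZPID=$!
printf '1|Alice|\n2|Bob|\n' > "$D/customer.pipe"        # stand-in for the unloader
wait "$GZPID"
gunzip -c "$D/customer.unl.gz"                          # rows come back intact
```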
--- On Tue, 15/3/11, Neil Truby <neil....@ardenta.com> wrote: |
|
External tables?
--
Cheers,
Obnoxio The Clown
my 2 cents:
First: do not use onload/onunload unless there is really no
alternative. In the past you could not put tables in different dbspaces, etc.
I would go for scripting using HPL etc.
A decade ago I helped migrate a database using HPL and simple SQL
which was 1 TB in size.
The largest table was 100 GB in size and took an hour to load (express
mode in parallel, using FIFOs from engine A to engine B).
-- Art's stuff:
Myexport/myimport using EXTERNAL TABLES in express mode in parallel
will FLY! It should be faster than HPLoader even.
--
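A sketch of what the external-table load Art mentions might look like, assuming a hypothetical database 'appdb', table 'customer', and an unload file at /tmp/customer.unl. This is not runnable without an Informix instance, and external tables only arrived around 11.50.xC6, so check your version first.

```shell
# Hypothetical names throughout; EXPRESS mode loads without logging,
# much like loading into a RAW table.
dbaccess appdb - <<'EOF'
CREATE EXTERNAL TABLE customer_ext
    SAMEAS customer
    USING (DATAFILES ("DISK:/tmp/customer.unl"), EXPRESS);
INSERT INTO customer SELECT * FROM customer_ext;
EOF
```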
Art, I do believe you; however I would do the comparison and tune HPL;
I really think you will have a real hard time beating HPL!!!!
We also generated our own checkpoints using:
create procedure generatechkpt()
define dirty decimal (4,3);
while (1=1)
select ( sum(lru_nmod) / sum ( lru_nfree + lru_nmod ))
into dirty
from syslrus ;
if (dirty < 0.75 ) then
system "sleep 1";
else
system "onmode -c";
end if
end while ;
end procedure;
execute procedure generatechkpt();
!
NOTE: this needs to be adapted if you have more than one bufferpool.
OK, for your issue:
Does the physical log get hammered? This can happen if you load
stuff into a dbspace which contains initialized pages.
Check onstat -l for the physical log stats.
This can really slow down the onload; I have been there in the past,
doing a reload of a Baan database.
Hmmm, onload was needed there, since dbexport/dbimport with 100,000 tables is no
fun at all.
(They mucked up overwriting the physical log with something else, so it
had to be hacked...)
I missed the fact that for each partition the engine places a lock, so I
had to have about 300,000 locks (the onconfig had 100,000 configured...);
with V7.3 onload crashed with "no more locks".
After adjusting that and restarting, the onload realllllly slowwwwed down,
caused by physical-logging of the initialized pages.
So you could create a brand new dbspace and load into it; make sure that
the chunk you add and use is binary-zeroed before
you add it.
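A minimal sketch of zeroing a chunk file before adding it with onspaces; the path and the tiny 100 KB size are made up for illustration (real chunks are of course far larger):

```shell
# Create a file of binary zeros to use as a fresh chunk, then verify it.
CHUNK=$(mktemp -d)/appdbs_3.chk
dd if=/dev/zero of="$CHUNK" bs=1024 count=100 2>/dev/null
chmod 660 "$CHUNK"                       # informix:informix ownership is also needed
head -c 102400 /dev/zero > /tmp/zeros.ref
if cmp -s "$CHUNK" /tmp/zeros.ref; then echo "chunk is zeroed"; fi
```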
Superboer.
onstat -g ath
onstat -g stk for the onload threads
??
If my memory is correct: to minimize I/O and translation (Informix
internal format to ASCII), I like to SELECT from the source server's table and
INSERT into the target server's table, all in one SQL statement. Plus I try to get the
best network connection between the 2 servers (even if I have to run a
new cable directly between them). On the source server set up
LIGHT_SCANS, etc. On the target server initially make the table RAW
(then STANDARD when done), etc. I would insert tables in parallel (if
the lesser computer has 4 CPUs I would do 3 tables in parallel).
I did this years ago with more data than you have, in less time.
Good luck,
LS
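The single-statement copy LS describes could be sketched like this, assuming hypothetical server names src_tcp/tgt_tcp, database appdb and table customer; it is not runnable without two trusted Informix instances. You would run one such job per table, a few in parallel.

```shell
# RAW skips logging during the load; switch back to STANDARD when done.
dbaccess - - <<'EOF'
DATABASE appdb@tgt_tcp;
ALTER TABLE customer TYPE (RAW);
INSERT INTO customer SELECT * FROM appdb@src_tcp:customer;
ALTER TABLE customer TYPE (STANDARD);
EOF
```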
Thanks. It looks good. However, I got 14 of this message:
SQL -23103: Code-set conversion function failed due to illegal sequence or
invalid value
.. which I assume means that the unloads of the relevant tables have failed.
Back to the original question, does anyone know why onload is so slow? It
eventually loaded the 100g-odd in SEVENTEEN hours!
Thanks
Did you check if stuff is being physically logged?
onstat -l during the load, and maybe onstat -g iof etc.
Superboer.
On 17 Mar, 21:23, "Neil Truby" <neil.tr...@ardenta.com> wrote:
> "Art Kagel" <art.ka...@gmail.com> wrote in message
"AFAIK you cannot use onunload/onload for different IFX versions..."
-- that is from Cesar, and he is right!!!
You can not mix versions.
Superboer.
Thanks
"Superboer" <super...@t-online.de> wrote in message
news:8c2c42f6-bb47-43ae...@x1g2000yqb.googlegroups.com...