This is really a general question, although it concerns a specific
database.
We have a physical standby database for a production database (8.1.7.4.0) at
another location; the managed recovery and automatic archiving take place
without problems.
However, we noticed that recovery of a single archived log is rather slow,
even though the machines (8-CPU HP-UX) and storage boxes (EMCs) are the same.
During periods of peak activity, the production database takes about a
minute to generate a single 1 GB redo log; after it is transferred to the
standby machine, the standby database needs 3-5 minutes on average to apply
that log.
We tried parallel managed recovery (8 processes), but gained only a 10-20%
improvement.
This is not yet a serious problem, as there is still enough time during the
day to apply all the logs. However, if activity on the production database
increases, we fear that the standby would not be capable of keeping up with
recovery - i.e. that production would generate more logs during the day
than the standby is capable of applying.
I have to emphasize that this is not a matter of network transport (which
is satisfactory), but of the speed of applying logs during recovery.
So is there a way to speed up the recovery process? Any suggestions
welcome.
Regards,
Goran Dokmanovic
Oracle DBA
VIPNet d.o.o
So if you've already parallelized the recovery process, that's about as far
as you can take it.
On the other hand, if you've got 8 CPUs, then I'd suggest you increase the
degree of parallelism to (maybe) 16, though whether that helps at all
really depends on the number and layout of the data files.
But I'd certainly give it a go. The usual advice, as far as I remember, is
two or three recovery processes per datafile.
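For what it's worth, the suggestion above would look something like the
following in SQL*Plus - a sketch from memory of the 8i syntax, so check it
against the docs (the degree can reportedly also be set database-wide via
the RECOVERY_PARALLELISM init parameter):

```sql
-- Sketch, assuming 8.1.7 syntax: cancel the current managed recovery,
-- then restart it with a higher degree of parallelism.
RECOVER MANAGED STANDBY DATABASE CANCEL;
RECOVER MANAGED STANDBY DATABASE PARALLEL 16;
```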
Regards
HJR
"Goran D." <goran9...@yahoo.com> wrote in message
news:bj20ac$db2$1...@fegnews.vip.hr...
attempting to start a parallel recovery with 252 processes
parallel recovery start successful, got 8 processes
Should we really allow for 252 recovery processes, or is that nonsense on
an 8-CPU machine?
Thanks!
"Howard J. Rogers" <howard...@yahoo.com.au> wrote in message
news:3f5487a7$0$6524$afc3...@news.optusnet.com.au...
Recovery is I/O intensive, not CPU intensive, so recovery "scalability"
depends a lot on your I/O subsystem. If you are using async I/O, then the
gain from parallel recovery will probably be lower than with sync I/O.
Btw, in a normal recovery scenario you could open several sessions and
issue serial recovery on a different set of datafiles in each; that way the
redo logs are read by each session and the redo is applied serially,
bypassing the parallel execution messaging mechanism. This can help on
platforms where PX doesn't work well - Linux, in my experience.
(I'm not sure whether it's possible to do this in standby recovery mode.)
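The idea above, sketched roughly (untested, and as noted it may not apply
in standby mode; the second datafile path is hypothetical, only
system01.dbf appears elsewhere in this thread):

```sql
-- Session 1: serial recovery of one set of datafiles...
RECOVER DATAFILE '/oradata/SXXXX/system01.dbf';

-- ...while session 2, in a separate SQL*Plus connection,
-- serially recovers a different set at the same time.
RECOVER DATAFILE '/oradata/SXXXX/users01.dbf';  -- hypothetical path
```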
Tanel.
"Goran D." <goran9...@yahoo.com> wrote in message
news:bj20ac$db2$1...@fegnews.vip.hr...
/home2/oracle$ sqlplus /nolog
SQL*Plus: Release 8.1.7.0.0 - Production on Tue Sep 2 16:34:00 2003
(c) Copyright 2000 Oracle Corporation. All rights reserved.
SQL> connect internal/soyathinkidunnoaboutinternal
Connected.
SQL> recover managed standby database;
ORA-00283: recovery session canceled due to errors
ORA-01124: cannot recover data file 1 - file is in use or recovery
ORA-01110: data file 1: '/oradata/SXXXX/system01.dbf'
SQL>
And in alertlog:
Media Recovery Waiting for thread 1 seq# 42
Tue Sep 2 16:34:17 2003
ALTER DATABASE RECOVER managed standby database
Tue Sep 2 16:34:17 2003
Media Recovery Start: Managed Standby Recovery
Media Recovery failed with error 1124
ORA-283 signalled during: ALTER DATABASE RECOVER managed standby
database ...
Which of course doesn't quite show whether you have blown off the
original managed recovery and have to fix it manually. It takes up to
5 minutes after switching logs on the source box for the next log to be
archived on the standby box, which shows it is still working (fast
ethernet network; the standby db is the only thing on a 4-processor
HP-UX box, and the faster source box is barely used at this time of day).
Blame it all on Autoraid, I guess.
jg
--
@home.com is bogus.
http://www.pbs.org/cringely/pulpit/pulpit20030828.html