So you are in the middle of a recover database and forward recovering
with archive logs ... then you see this:
ORA-00279: change 568074198 generated at 03/13/2009 10:47:27 needed
for thread 1
ORA-00289: suggestion :
/u02prod/archivelogs/archivelog_prod_1_2484_655753279.log
ORA-00280: change 568074198 for thread 1 is in sequence #2484
ORA-00278: log file '/u02prod/archivelogs/
archivelog_prod_1_2483_655753279.log'
no longer needed for this recovery
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [krr_init_lbufs_1], [74],
[66],
[43], [], [], [], [], [], [], [], []
ORA-01112: media recovery not started
Only one hit on MetaLink, but fortunately there's a documented bypass (from bug
7373196, evidently).
The bypass was documented as: set this parameter to 4194304 (i.e. 4 MB) in the
init.ora/spfile; that should be good enough to function as a workaround and
allow recovery.
_max_io_size=4194304
*** It did work, but man, what a mess. Set _max_io_size before doing the
forward recovery, then you'd better get rid of it before running normal
workloads.
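For what it's worth, the bypass boils down to something like this in SQL*Plus. This is a sketch only: it assumes you're running from an spfile (with a plain init.ora you'd just add and later remove the line by hand), and note that underscore parameters have to be double-quoted in ALTER SYSTEM. The exact recovery commands will of course depend on your scenario.

```sql
-- Sketch of the bug 7373196 bypass; assumes an spfile is in use.
-- Underscore parameters must be double-quoted in ALTER SYSTEM.
ALTER SYSTEM SET "_max_io_size" = 4194304 SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP MOUNT

-- Forward-recover, applying archive logs as prompted.
RECOVER DATABASE;

-- Afterwards, remove the parameter again before running normal workloads:
ALTER SYSTEM RESET "_max_io_size" SCOPE=SPFILE SID='*';
SHUTDOWN IMMEDIATE
STARTUP
```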
Supposed to be fixed in 11.2 ... (and perhaps the next 11.1 patchset?).
Wow. Horrible. I hope you were testing, rather than doing it in anger.
Palooka
snip
> > So you are in the middle of a recover database and forward recovering
> > with archive logs ... then you see this:
snip
> > ORA-00283: recovery session canceled due to errors
> > ORA-00600: internal error code, arguments: [krr_init_lbufs_1], [74],
> > [66],
> > [43], [], [], [], [], [], [], [], []
> > ORA-01112: media recovery not started
snip
> > Only 1 hit in metalink but fortunately a documented bypass ( from bug
> > 7373196 evidently ).
>
> > Bypass was documented as:
>
> > Set this parameter to 4194304 (i.e. 4 MB) in the
> > init.ora/spfile; that should be good enough to function as a workaround and
> > allow recovery.
> > _max_io_size=4194304
>
> > *** It did work, but man, what a mess. Set _max_io_size before doing the
> > forward recovery, then you'd better get rid of it before running normal
> > workloads.
>
> > Supposed to be fixed in 11.2 ... (and perhaps the next 11.1 patchset?).
>
> Wow. Horrible. I hope you were testing, rather than doing it in anger.
>
> Palooka
Yup ... testing ... hard to believe Oracle let this loose on the
world.
What has happened to the quality assurance process at Oracle?
To be (a little) fair, it doesn't happen for every recover database command,
and may have some relationship with how many logs you process ... how big they
are ... and perhaps the size of your online log buffer.
Still ... pretty deadly ... imagine being in a real recover database crisis
and hitting this!
> What has happened to the quality assurance process at Oracle?
It has been outsourced to Elbonia.
Exactamundo. One of my favourite phrases is "Well, one thing I can say
is that I have never lost data with Oracle".
Were this to happen to me I'd be pretty shaken.
Palooka
snip
> > Yup ... testing ... hard to believe Oracle let this loose on the
> > world.
>
> > What has happened to the quality assurance process at Oracle?
>
> > To be (a little) fair, it doesn't happen for every recover database
> > command, and may have some relationship with how many logs you process ...
> > how big they are ... and perhaps the size of your online log buffer.
>
> > Still ... pretty deadly ... imagine being in a real recover database
> > crisis and hitting this!
...
> Exactamundo. One of my favourite phrases is "Well, one thing I can say
> is that I have never lost data with Oracle".
>
> Were this to happen to me I'd be pretty shaken.
>
> Palooka
For the record, the Doc ID is 751787.1.