we are creating standby databases (physical) for several
databases (9i). All in all it works perfectly except one thing:
for one of them archivelog generation is totally out of control.
We have multiplexed archivelog destinations sized at 8Gb (which,
based on our estimations, was unlikely to be filled in less than
one full production day). Today, these 8Gb are filled in *one*
hour; to be more precise, this happens for at least one
program: a purge.
Today, this is a no-go for our whole Data Guard platform since
every hour we must manually delete archive logs to permit the
purge to finish correctly (thus breaking our standby database).
What I am trying to figure out is this:
- why do we hit such archive log production?
- what exactly is stored in an archived redo log?
- how can we drastically reduce this archive log generation?
- what could be done in order not to break our standby database?
- is there a "best practice" our developers should follow to code
  their purge system (# of commits, commit frequency, DDL to avoid
  using, ...)?
I googled hard but found nothing. Any help would be greatly
appreciated here!
Thank you in advance.
Xavier
At Mon, 3 Aug 2009 22:29:39 -0700 (PDT),
ca111026 wrote:
>
> Suggestions:
> - Partition the table so instead of purging data you will be able to
> drop partition
This is a good idea but I am not sure our developers will be able
to modify their procs before D-Day.
> - Instead of purging once per month purge every day (or even more
> often)
This could be a good idea provided we had control over it (which
we do not).
I will explore the partition idea but I still want to understand
why I am in this "chaos".
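If I understand the suggestion correctly, it amounts to something
like this (a rough sketch only; the table, column and partition names
are placeholders, and I am assuming the purge criterion is a date
column):

  CREATE TABLE orders_hist (
    order_id    NUMBER,
    order_date  DATE,
    payload     VARCHAR2(4000)
  )
  PARTITION BY RANGE (order_date) (
    PARTITION p_2009_06 VALUES LESS THAN (TO_DATE('2009-07-01','YYYY-MM-DD')),
    PARTITION p_2009_07 VALUES LESS THAN (TO_DATE('2009-08-01','YYYY-MM-DD')),
    PARTITION p_2009_08 VALUES LESS THAN (TO_DATE('2009-09-01','YYYY-MM-DD'))
  );

  -- The monthly purge then becomes a dictionary-only operation that
  -- generates almost no redo, instead of millions of DELETEs:
  ALTER TABLE orders_hist DROP PARTITION p_2009_06;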
Thank you.
How did you come up with such a brilliant solution? Partitioning? I'm
positively stupefied!
Archive logs contain the records of block changes. The best way to
slow down archive log generation is to slow down changing blocks.
Now, I will reveal to you a great secret: if you stop doing DML
(insert, update, delete) and DDL (truncate, create, alter), your
archive log generation will stop, scout's honor!
Hopefully, you're not a DBA? Instead of suggesting a comprehensive
analysis which would establish who changes what, why, and how
frequently, you suggest partitioning? Yeah, maybe, with just a tad
of snake oil.
On Aug 4, 12:11 am, Xavier Maillard <x...@gnu.org> wrote:
> Hi,
>
> we are creating standby databases (physical) for several
> databases (9i). All in all it works perfectly except one thing:
> for one of them archivelog generation is totally out of control.
>
I think it's not out of control, it's simply far more than you
expected.
> We have multiplexed archivelog destinations sized at 8Gb (which,
> based on our estimations, was unlikely to be filled in less than
> one full production day). Today, these 8Gb are filled in *one*
> hour; to be more precise, this happens for at least one
> program: a purge.
>
One should expect that, depending upon the volume of data 'purged'.
> Today, this is a no-go for our whole Data Guard platform since
> every hour we must manually delete archive logs to permit the
> purge to finish correctly (thus breaking our standby database).
>
> What I am trying to figure out is this:
>
> - why do we hit such archive log production?
You're deleting data (including index entries) and every such change
is recorded in the redo logs.
> - what exactly is stored in an archived redo log?
Redo records, composed of change vectors describing changes made to
database blocks. Each change vector describes a single transactional
change to a database block. Change vectors also are stored for
changes to the rollback segment data blocks and the transaction table
of the rollback segments.
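If you want to see for yourself what is inside one, LogMiner will
read an archived log back as SQL. A rough sketch (the file name is
only an example; run it as a suitably privileged user):

  BEGIN
    DBMS_LOGMNR.ADD_LOGFILE(
      logfilename => '/arch/dest1/arch_1_12345.arc',  -- example path
      options     => DBMS_LOGMNR.NEW);
    DBMS_LOGMNR.START_LOGMNR(
      options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
  END;
  /

  -- which segments and operations dominate this log
  SELECT seg_name, operation, COUNT(*)
    FROM v$logmnr_contents
   GROUP BY seg_name, operation
   ORDER BY 3 DESC;

  EXECUTE DBMS_LOGMNR.END_LOGMNR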
> - how can we drastically reduce this archive log generation?
Stop purging data?
> - what could be done in order not to break our standby database?
Increase the size of your standby archivelog destinations to handle
this increased workload. Frankly, 8 GB is small for the volume of
work you're doing.
> - is there a "best practice" our developers should follow to code
> their purge system (# of commits, commit frequency, DDL to avoid
> using, ...)?
Deletes generate redo, period, regardless of how many per second you
execute. Commit frequency has nothing to do with how full your redo
logs are. Archive logs are generated when an online redo log fills
and a log switch occurs. The issue is how quickly you're filling the
redo logs, and a large purge can do that very rapidly.
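You can see how fast the logs are switching with a quick query
against V$LOG_HISTORY (a sketch; multiply the hourly switch count by
your online log size to estimate the archive volume per hour):

  SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
         COUNT(*)                               AS log_switches
    FROM v$log_history
   GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
   ORDER BY 1;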
>
> I googled hard but found nothing. Any help would be greatly
> appreciated here!
>
> Thank you in advance.
>
> Xavier
David Fitzjarrell
snip
Truncate table may work better ... an 8 GB destination for archive
logs is tiny ...
If I were you I would start by reading Tom Kyte's latest architecture
book, at least the first 10 chapters. It will explain in detail what
is in the redo logs ...
The answer to this question is extremely basic. The archive logs are
well documented in the Concepts and DBA Administration manuals.
Basically, all data changes that the database processes are copied to
the redo logs, which in turn are archived. (Pretty much all production
databases should always be in archivelog mode; otherwise forward
recovery from disk failure is not possible.)
The proper way to handle a large quantity of data being archived is
to: 1) correctly size the online redo logs so that they do not switch
excessively (see the sketch below), 2) make sure the online redo log
files are spread across multiple physical disks, 3) make sure the
archive log destination has plenty of free space, and 4) make sure
the archive log destination is spread across multiple physical disk
units.
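As an illustration of point 1, the usual approach is to add larger
online log groups and drop the small ones once they are inactive
(group numbers, file paths and sizes here are only examples):

  ALTER DATABASE ADD LOGFILE GROUP 4
    ('/u01/oradata/PROD/redo04a.log',
     '/u02/oradata/PROD/redo04b.log') SIZE 512M;

  -- repeat for as many new groups as needed, then, once an old
  -- small group shows STATUS = 'INACTIVE' in V$LOG:
  ALTER SYSTEM SWITCH LOGFILE;
  ALTER DATABASE DROP LOGFILE GROUP 1;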
If you think the amount of redo being generated is excessive for the
workload being performed then you need to discuss the application
design with the developers and look for sections of the application
where the processing logic could be improved. If you do not use a
physical Data Guard setup, the nologging feature of direct path loads
might be one tool you can use, since it cuts down on the amount of
undo generated, and since undo is logged to redo you may be able to
use this option to reduce the amount of redo generated. But only if
you have large insert jobs that are responsible for much of the redo
activity.
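For completeness, the pattern looks roughly like this (a sketch;
table and column names are examples, and remember that NOLOGGING
operations leave unrecoverable blocks on a physical standby, so on a
Data Guard database it is normally ruled out or defeated by FORCE
LOGGING):

  ALTER TABLE staging_data NOLOGGING;

  INSERT /*+ APPEND */ INTO staging_data (id, created, payload)  -- direct path
  SELECT id, created, payload
    FROM source_table
   WHERE created >= TRUNC(SYSDATE);
  COMMIT;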
If the developers used a lot of work tables (permanent "temporary"
tables or DDL in job streams) then you can have them rewrite the code
to just perform 4 and 5 table joins and rely on the Oracle read
consistency model instead of work tables, substitute true (global)
temporary tables for the permanent work tables where a table has to
be used, substitute truncate for delete to clear out persistent work
tables where they have to be used, and use similar methods to reduce
the amount of work being done in the processes.
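A sketch of the global temporary table substitution (names are
examples); rows in a global temporary table generate no redo for the
table data itself, only for the undo:

  CREATE GLOBAL TEMPORARY TABLE work_orders (
    order_id   NUMBER,
    order_date DATE
  ) ON COMMIT PRESERVE ROWS;

  -- and where a permanent work table really has to stay:
  TRUNCATE TABLE perm_work_table;   -- instead of DELETE FROM perm_work_table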
In Oracle you should normally create objects once and use them many
times. If the developer-designed processes use DDL then improvement
is possible. Create the objects once and re-use them.
Someone mentioned the extra-cost partitioning option. While
partitioning may be useful for management and performance reasons, I
am not sure it will help much in reducing the amount of overall redo
generated.
Running a statspack or AWR report (10g+ with the Performance Pack
license) and monitoring the SGA/shared pool should give you some idea
of which processes generate the large amounts of redo. It then
becomes a matter of looking at the work being done to see if the same
results can be obtained doing less work.
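Short of a full statspack report, a quick way to see which sessions
are producing the redo right now is something like this (a sketch):

  SELECT s.sid, s.username, s.program, st.value AS redo_bytes
    FROM v$sesstat  st,
         v$statname sn,
         v$session  s
   WHERE st.statistic# = sn.statistic#
     AND sn.name       = 'redo size'
     AND s.sid         = st.sid
     AND st.value      > 0
   ORDER BY st.value DESC;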
HTH -- Mark D Powell --
What percentage of the data in the tables is being purged? If all of
it, truncate. If most of it, select what you want to save into
another table with nologging, truncate the original, reload from the
other table, then truncate that table. You will generate much less
redo with nologging (if you do it right; there are rules you must
follow), but the data won't get to the standby until you reload, so
if you crash you need to restore to a point in time before the
truncate.
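Roughly like this (a sketch only; the names are examples, check the
nologging rules, and take a backup afterwards since the nologging
steps cannot be recovered from the archived redo):

  -- keep only the rows you want, with minimal redo
  CREATE TABLE big_table_keep NOLOGGING AS
    SELECT * FROM big_table WHERE purge_flag = 'N';

  TRUNCATE TABLE big_table;

  -- the reload is logged, so this is when the data reaches the standby
  INSERT /*+ APPEND */ INTO big_table
    SELECT * FROM big_table_keep;
  COMMIT;

  TRUNCATE TABLE big_table_keep;   -- or drop it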
If you are 50/50, suck it up and buy more hardware. (Actually, you
should do more analysis, but if you don't know the basics, you need
the basics first. Most performance problems come from poor
application design).
Why are you breaking the standby? You ought to be able to keep it
working, just slowing down because the db has to wait while you
delete archived logs. How is your standby configured?
jg
--
@home.com is bogus.
http://www.cnn.com/2009/US/08/03/new.york.jobless.graduate/