
Help me tune this wait event: log file sync


chao_ping

Jul 12, 2002, 12:54:16 AM
hi, dbas: One of the databases I manage faces a serious wait event: log
file sync. This is a big, busy OLTP system, using a Sun T3 disk array
with RAID-5. We are using Veritas Quick I/O for the datafiles and redo
log files. The load on the database is growing fast, and more and more
the redo log is becoming the bottleneck. Here is some data showing the
trend:

2002/05/03 Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                    Wait     % Total
Event                                   Waits    Time (cs)    Wt Time
-------------------------------- ------------ ------------ ----------
log file sync                          82,244       92,442      33.68
db file sequential read               298,301       80,719      29.41
log file parallel write                81,849       36,989      13.48
db file parallel write                  5,427       33,615      12.25
control file parallel write             4,673        6,104       2.22


2002/06/07 Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                    Wait     % Total
Event                                   Waits    Time (cs)    Wt Time
-------------------------------- ------------ ------------ ----------
log file sync                       6,352,383   15,785,313      40.09
db file sequential read            26,862,699   12,538,922      31.85
log file parallel write             5,971,229    3,990,066      10.13
db file parallel write                290,479    3,164,391       8.04
db file scattered read              1,749,137      814,981       2.07

2002/07/21 Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                    Wait     % Total
Event                                   Waits    Time (cs)    Wt Time
-------------------------------- ------------ ------------ ----------
log file sync                       2,207,609    6,688,751      59.47
log file parallel write             2,044,977    1,385,379      12.32
db file parallel write                103,155    1,203,077      10.70
db file sequential read             8,772,908    1,088,922       9.68
log buffer space                        3,284      222,604       1.98

2002/08/28 Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                    Wait     % Total
Event                                   Waits    Time (cs)    Wt Time
-------------------------------- ------------ ------------ ----------
log file sync                       2,247,585   20,529,779      63.71
db file parallel write                441,052    4,377,899      13.59
log file parallel write             1,724,089    3,806,535      11.81
db file sequential read             8,854,525    2,044,020       6.34
enqueue                                78,759      592,411       1.84

2002/09/04 Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                    Wait     % Total
Event                                   Waits    Time (cs)    Wt Time
-------------------------------- ------------ ------------ ----------
log file sync                       3,838,694   13,158,371      63.28
db file sequential read             2,189,863    2,401,275      11.55
log file parallel write             3,401,035    2,098,018      10.09
db file parallel write                 97,086    1,503,608       7.23
enqueue                                71,251      432,706       2.08

2002/10/11 Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                    Wait     % Total
Event                                   Waits    Time (cs)    Wt Time
-------------------------------- ------------ ------------ ----------
log file sync                         453,862    9,679,513      68.11
db file sequential read               500,157    1,660,982      11.69
db file parallel write                 67,235    1,137,017       8.00
log file parallel write               163,180    1,075,201       7.57
log file sequential read               32,012      254,327       1.79
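One way to read these tables is to divide wait time by waits to get the average "log file sync" latency; statspack times here are in centiseconds (1 cs = 10 ms). A small sketch, using the first and last snapshots above:

```python
# Average "log file sync" latency from the statspack Top 5 figures above.
# Wait times are in centiseconds; 1 cs = 10 ms.
def avg_latency_ms(waits, wait_time_cs):
    """Average time per wait event, converted to milliseconds."""
    return wait_time_cs / waits * 10

# 2002/05/03: 82,244 waits, 92,442 cs total
early = avg_latency_ms(82_244, 92_442)
# 2002/10/11: 453,862 waits, 9,679,513 cs total
late = avg_latency_ms(453_862, 9_679_513)

print(f"early: {early:.0f} ms/sync, late: {late:.0f} ms/sync")
```

The average sync has gone from roughly 11 ms to over 200 ms between the two reports, which is the degradation the poster is describing.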


As you can see, log file sync is a big bottleneck, and I have to solve
this problem. I think the best solution may be to move to dedicated
RAID-1 or single disks with Oracle multiplexing, but that would mean
rebuilding the whole disk array, which needs too much downtime and
carries some risk. I tried disabling Cached Quick I/O on the redo log
files, but did not see any performance gain. Veritas says Quick I/O is
as fast as raw, but I have not tested that myself; I wonder whether
moving the redo logs to raw devices would help. It is difficult to
modify the application to reduce the commit frequency, because every
transaction has to commit when it finishes.

Another strange thing: from the iostat and sar -d results, there is not
much being written to the disk array, maybe 2 MB/second, yet it causes
so much wait (on average 10% of time in wait during normal workload).
Maybe there is some misconfiguration in the disk array? Thanks to
everyone reading and answering; please give me some suggestions.
--
an oracle fan,
an oracle beginner

Posted via dBforums
http://dbforums.com

Howard J. Rogers

Jul 12, 2002, 3:46:55 AM
How big are your redo logs and how big is your log buffer?

Regards
HJR


"chao_ping" <chao...@163.com> wrote in message
news:3d2e60f8$1...@usenetgateway.com...

Jonathan Lewis

Jul 12, 2002, 5:50:10 AM

Log file sync can be very misleading.

I note that log file WRITES do not appear
as a problem. Remember that a log file write
wait will cause a log file sync for every user
who commits for the duration of that wait.

So if you have two hundred very busy users
committing very frequently, you could easily
end up with log file sync showing up with 200
times the impact of log file parallel write.

Look at the session level - is any one session
being impacted significantly by log file sync ?
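Jonathan's multiplication effect can be sketched numerically. The numbers below are hypothetical (one 10 ms LGWR write, 200 sessions waiting on it), chosen only to show how the system-wide totals diverge:

```python
# Sketch of how one slow log write inflates "log file sync" system-wide.
# Hypothetical: one LGWR write takes 10 ms, and 200 committing sessions
# are all waiting on that same write to complete.
write_time_ms = 10
committers_waiting = 200

# "log file parallel write" records the write once (it's LGWR's wait)...
log_file_parallel_write_ms = write_time_ms

# ...but every waiting session clocks its own "log file sync" wait,
# so the system-wide total is multiplied by the number of committers.
log_file_sync_ms = write_time_ms * committers_waiting

print(log_file_parallel_write_ms, log_file_sync_ms)
```

This is why "log file sync" can dominate v$system_event even when the underlying writes are not, individually, a problem.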


--
Jonathan Lewis
http://www.jlcomp.demon.co.uk

Next Seminars
UK July / Sept
Australia July / August
Malaysia September
USA (MI) November
http://www.jlcomp.demon.co.uk/seminar.html

chao_ping wrote in message <3d2e60f8$1...@usenetgateway.com>...

Yong Huang

Jul 12, 2002, 10:38:58 AM
[Sorry if you see my post several times. Google stops working and
Mailgate keeps asking for login]

If you can't control the frequency of commits, reduce log_buffer.
How big is it?

Your post's formatting is messed up. In the 2002/08/28 data,

log file sync 2,247,585 20,529,779 63.71

do you mean the total wait time is only 63.71 centiseconds? That's
nothing unless it was measured right after the database started.

Yong Huang


--
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG

chao_ping

Jul 13, 2002, 3:15:45 AM
Jonathan Lewis wrote:
> Log file sync can be very misleading.
> I note that log file WRITES do not appear as a problem. Remember that a
> log file write wait will cause a log file sync for every user who
> commits for the duration of that wait.
> So if you have two hundred very busy users committing very frequently,
> you could easily end up with log file sync showing up with 200 times the
> impact of log file parallel write.
> Look at the session level - is any one session being impacted
> significantly by log file sync ?


Thanks, it seems there are about 40 commits per second. I get this
information from statspack:

Load Profile
~~~~~~~~~~~~                Per Second    Per Transaction
Redo size:                   76,336.86           2,208.94
Logical reads:               12,555.73             363.32

Some snapshots of v$session_wait:

SQL> select sid, event from v$session_wait
  2  where event not like 'rdbms%' and event not like 'SQL*N%' and sid > 8;

       SID EVENT
---------- ----------------------------------------------------------------
       198 log file sync
       199 log file sync
       200 log file sync
       201 log file sync
       222 log file sync
       364 log file sync

6 rows selected.

SQL> /

       SID EVENT
---------- ----------------------------------------------------------------
        48 log file sync
       160 log file sync
       347 log file sync

SQL> /

       SID EVENT
---------- ----------------------------------------------------------------
       136 log file sync
       190 log file sync
       199 log file sync
       200 log file sync
       325 log file sync
       347 log file sync
       355 db file sequential read

7 rows selected.

SQL> /

       SID EVENT
---------- ----------------------------------------------------------------
       200 log file sync
       325 log file sync

I don't quite understand your words "Look at the session level - is any
one session being impacted significantly by log file sync?". We are
using Tuxedo middleware, so users do not connect to the database
directly, and it is hard to measure the impact on end users (web
browsers). I think for any one user the wait time should not be very
long, but will this wait event hurt the scalability of the database?
Maybe because of this, no more transactions can be processed?

And because of the poor business logic, every user who performs a
transaction must commit, so it may be hard to reduce the commit
frequency.

Another thing I wonder about: only about 2 MB/second is written to the
disk array, yet 10% of CPU time is spent in I/O wait (from top, iostat
and sar). How can this be explained? Would moving to raw devices help?

Thanks very much. :)

chao_ping

Jul 14, 2002, 3:13:13 AM
Yong Huang wrote:
> [Sorry if you see my post several times. Google stops working and
> Mailgate keeps asking for login]
> If you can't control the frequency of commits, reduce log_buffer. How
> big is it?

The log buffer is now 2M. Earlier it was 1M, and I saw a lot of wait
time on log buffer space (at one point it was the 4th top wait event),
so I enlarged the log buffer from 1M to 2M.

> Your post is messed up in format. In 2002/08/28 post,
> log file sync 2,247,585 20,529,779 63.71
> you mean the total wait time is only 63.71 centiseconds? That's nothing
> unless this is measured right after the database is started.
> Yong Huang


The wait times differ because the time between snapshots is not
identical. The last snapshot:

                Snap Id     Snap Time        Sessions
                -------  ------------------  --------
Begin Snap:         693  11-Jul-02 13:45:10       336
  End Snap:         694  11-Jul-02 17:03:51       336
   Elapsed:               198.68 (mins)

Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                    Wait     % Total
Event                                   Waits    Time (cs)    Wt Time
-------------------------------- ------------ ------------ ----------
log file sync                         453,862    9,679,513      68.11
db file sequential read               500,157    1,660,982      11.69
db file parallel write                 67,235    1,137,017       8.00
log file parallel write               163,180    1,075,201       7.57
log file sequential read               32,012      254,327       1.79


That is about 3 hours, and the log file sync wait time is 9,679,513 cs;
that is 96,795 seconds? (I think the value is accumulated since the
database was started, which was 2002/6/29, and the report time is
2002/7/11 - only 12 days, right?) The 63.71 is the percent of time
spent on the log file sync wait event, and now the time spent on log
file sync is 68.11%. It seems terrible.
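The centisecond arithmetic can be checked directly. Note that the Top 5 figures come from a snapshot interval of ~199 minutes, and wait time is summed across all sessions, so a total far larger than the elapsed time is possible (a sketch, using the report's numbers):

```python
# Converting the statspack figures above: wait time is in centiseconds
# and is summed across all waiting sessions during the interval.
wait_time_cs = 9_679_513
elapsed_min = 198.68

wait_time_s = wait_time_cs / 100    # total seconds spent waiting
elapsed_s = elapsed_min * 60        # wall-clock seconds in the interval

# Average number of sessions simultaneously stuck in "log file sync"
avg_concurrent_waiters = wait_time_s / elapsed_s

print(f"{wait_time_s:.0f} s waited over {elapsed_s:.0f} s -> "
      f"~{avg_concurrent_waiters:.1f} sessions waiting at any instant")
```

So on average roughly eight sessions were waiting on "log file sync" at any moment of the interval, which is consistent with the v$session_wait snapshots earlier in the thread.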

Maybe reducing the log buffer size would reduce this wait event, but I
am not sure whether "log buffer space" waits will occur again.

Sigh. Performance tuning...

chao_ping

Jul 14, 2002, 3:13:13 AM
Some new information: I checked the statspack reports and found there
were about 20 commits/second in May, and about 40 commits/second these
days. I do not know whether this is too much. This is a busy OLTP
system, and the trend is more and more transactions per second -
painful.

And from statspack, the average redo size for one transaction is about
2K:

Load Profile
~~~~~~~~~~~~            Per Second    Per Transaction
Redo size:               82,547.28           2,247.16

But from the result of sar:

oracle@main-db1$ sar -d 2 2

SunOS main-db1 5.7 Generic_106541-14 sun4u 07/13/02

22:07:33   device  %busy  avque  r+w/s  blks/s  avwait  avserv
22:08:35   ssd0,e    100   20.7    199    3189     0.0   104.4

This is the busiest time of day for the system, and this device
(ssd0,e) is the redo log volume. 3189 blocks/s divided by about 200
writes/s is roughly 16 blocks per write, which means about 8 KB per
write. These are small writes, but still much larger than the 2K per
transaction. Why? Redo wastage because of the too-large log_buffer?

(There should be no reads on the redo logs unless they are being
archived, so these are all writes, right?)
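The sar arithmetic above works out as follows (assuming the standard 512-byte sar block on Solaris):

```python
# Redo write size derived from the sar -d line above (device ssd0,e).
# sar reports blks/s in 512-byte blocks on Solaris.
blks_per_s = 3189
writes_per_s = 199          # r+w/s; reads on a redo volume should be ~0

blocks_per_write = blks_per_s / writes_per_s   # ~16 blocks per write
bytes_per_write = blocks_per_write * 512       # ~8 KB per write

print(f"{blocks_per_write:.0f} blocks -> "
      f"{bytes_per_write / 1024:.1f} KB per write")
```

Each write being ~8 KB while each transaction generates ~2 KB of redo suggests LGWR is batching several commits per write (group commit), plus some redo wastage.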

Thanks for your help:)

Yong Huang

Jul 15, 2002, 6:02:34 PM
chao_ping <chao...@163.com> wrote in message news:<3d312489$1...@usenetgateway.com>...

> Yong Huang wrote:
> > If you can't control the frequency of commits, reduce log_buffer. How
> > big is it?
> now log buffer is 2M, In early time, it was 1M, and i see much wait time
> in log buffer space,(once it was the 4th top wait event), so i enlarged
> the log buffer size from 1m to 2m.

Never use a log_buffer larger than 1M. It's useless over that limit.

Do you have multiple log members per group? Use 1 member.

If you get 40 commits per second, the only solution may be to rewrite
the application. If you can't, try a competitor's product.

Yong Huang

Howard J. Rogers

Jul 15, 2002, 6:30:32 PM

"Yong Huang" <yon...@yahoo.com> wrote in message
news:b3cb12d6.02071...@posting.google.com...

> chao_ping <chao...@163.com> wrote in message
news:<3d312489$1...@usenetgateway.com>...
> > Yong Huang wrote:
> > > If you can't control the frequency of commits, reduce log_buffer.
How
> > > big is it?
> > now log buffer is 2M, In early time, it was 1M, and i see much wait time
> > in log buffer space,(once it was the 4th top wait event), so i enlarged
> > the log buffer size from 1m to 2m.
>
> Never use a log_buffer larger than 1M. It's useless over that limit.
>

Not quite true. On very heavy-transactional-load systems, a buffer up to 5
or 6Mb *may* (or may not!) be appropriate. Had Oracle ever decided that
anything bigger than 1Mb was utterly "useless", they would never have
introduced the rule about LGWR flushing every 1Mb of uncommitted redo
(because that rule doesn't get invoked until the buffer is bigger than 3Mb,
because of the 'flush when 1/3rd full' rule).
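The two flush triggers Howard mentions interact like this. A sketch of the documented rules (LGWR background-flushes at 1/3 full or at 1 MB of redo, whichever comes first):

```python
# LGWR background flush threshold as a function of log_buffer size:
# it flushes when the buffer is 1/3 full, or when 1 MB of redo has
# accumulated, whichever threshold is hit first.
MB = 1024 * 1024

def flush_threshold(log_buffer_bytes):
    """Redo volume at which LGWR starts a background flush."""
    return min(log_buffer_bytes // 3, MB)

# Below a 3 MB buffer the 1/3-full rule fires first; above 3 MB the
# 1 MB rule takes over - which is Howard's point about why the 1 MB
# rule only matters for buffers bigger than 3 MB.
for size_mb in (1, 2, 3, 6):
    print(size_mb, "MB buffer ->", flush_threshold(size_mb * MB), "bytes")
```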

> Do you have multiple log members per group? Use 1 member.

I definitely can't agree with this last suggestion. 1 member groups? A
recipe for data loss.

You're right that it might help eliminate some performance woes, of course.
Oracle has to do less work, so yeah, of course things work faster. But in
the eternal trade-off between security and performance, you've just plumped
squarely on the performance side of things. Which may or may not be
acceptable to the original poster, but it's certainly not what I would want
to recommend.

>
> If you got 40 commits per second, the only solution may be rewrite the
> application. If you can't, try a competitor's product.

If there are 40 commits per second, I might well try a 6Mb log buffer.

Regards
HJR


>
> Yong Huang


zhuchao

Jul 15, 2002, 10:51:57 PM
"Howard J. Rogers" <d...@hjrdba.com> wrote in message news:<agm1g5$hsj$1...@lust.ihug.co.nz>...

> How big are your redo logs and how big is your log buffer?
>
> Regards
> HJR
>
hi, Howard:
Currently my log_buffer is 2M and the redo log files are 256M; they
switch every 30-60 minutes during normal busy time (9:00 am - 11:00 pm).
The application commits too frequently (now about 40 commits/second),
and it is hard for me, as the DBA, to tune around that. Now log file
sync is 70% of total wait time. Sigh.
Thanks for your suggestions.
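As a cross-check, that switch rate is consistent with the redo rate in the Load Profile earlier in the thread (~76 KB/s); a rough sketch:

```python
# Cross-check: 256 MB redo logs switching every 30-60 minutes imply a
# sustained redo rate bracketing the ~76 KB/s seen in the Load Profile.
log_size_kb = 256 * 1024

for switch_min in (30, 60):
    rate_kb_s = log_size_kb / (switch_min * 60)
    print(f"switch every {switch_min} min -> {rate_kb_s:.0f} KB/s of redo")
```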

Howard J. Rogers

Jul 15, 2002, 11:17:29 PM

"zhuchao" <chao...@163.com> wrote in message
news:d8e45edf.02071...@posting.google.com...

> "Howard J. Rogers" <d...@hjrdba.com> wrote in message
news:<agm1g5$hsj$1...@lust.ihug.co.nz>...
> > How big are your redo logs and how big is your log buffer?
> >
> > Regards
> > HJR
> >
> hi, Howard:
> Currently my log_buffer is 2M and redo log file is 256M, it switches
> every 30-60 minutes during normal busy time(9:00 am - 11:00 pm).

That's a respectable switching rate, so your log file sizes shouldn't be an
issue.

There's a possibility that increasing the size of your redo log buffer
might help. Otherwise, make sure you aren't inducing extraneous
checkpoints (in other words, log_checkpoint_timeout should be zero and
log_checkpoint_interval should be bigger than your largest redo log).
Also make sure fast_start_io_target isn't set, and (if this is 9i) make
sure fast_start_mttr_target isn't set.

Otherwise, given the application issues you mention below, there's not a lot
I can suggest.

Have you got a report.txt or a statspack report handy??

Regards
HJR

Herman de Boer

Jul 16, 2002, 4:19:01 AM
Howard J. Rogers wrote:

>
>"Yong Huang" <yon...@yahoo.com> wrote in message
>news:b3cb12d6.02071...@posting.google.com...
>> chao_ping <chao...@163.com> wrote in message
>news:<3d312489$1...@usenetgateway.com>...
>> > Yong Huang wrote:
>> > > If you can't control the frequency of commits, reduce log_buffer.
>How
>> > > big is it?
>> > now log buffer is 2M, In early time, it was 1M, and i see much wait time
>> > in log buffer space,(once it was the 4th top wait event), so i enlarged
>> > the log buffer size from 1m to 2m.
>>
>> Never use a log_buffer larger than 1M. It's useless over that limit.
>>
>
>Not quite true. On very heavy-transactional-load systems, a buffer up to 5
>or 6Mb *may* (or may not!) be appropriate. Had Oracle ever decided that
>anything bigger than 1Mb was utterly "useless", they would never have
>introduced the rule about LGWR flushing every 1Mb of uncommitted redo
>(because that rule doesn't get invoked until the buffer is bigger than 3Mb,
>because of the 'flush when 1/3rd full' rule).
>

I agree with Howard.
See also the full disclosure report of the TPC-H benchmark, 9iR2, on an
HP-9000 Superdome, first edition dated 24-Jun-2002.

log_buffer has been set to 32 Mbyte.

Kind Regards,

Herman de Boer
sr. consultant
IT Consultancy Group bv

Jonathan Lewis

Jul 16, 2002, 4:21:55 PM

There is a package on my web site which takes
snapshots of various v$ objects, much as statspack
does, but for near real-time feedback.

One of the procedures it contains handles v$session_event.

Take a session snapshot over (say) a ten minute period
and see how significant "log file sync" is to each
session.


Alternatively:

break on sid skip 1

select sid, event, time_waited/100
from v$session_event
order by sid, time_waited desc;


Log file syncs are highly exaggerated at the
v$system_event level. Your largest problem
is an I/O problem: look at the average I/O
wait time (on reads and writes). Because
you are doing 20-40 commits per second,
your "log file sync" time looks 20-40 times
as bad as it really is.

Next Seminars
UK July / Sept
Australia August
Malaysia September
USA (MI) November

http://www.jlcomp.demon.co.uk/seminar.html

chao_ping wrote in message <3d2fd3a1$2...@usenetgateway.com>...

Yong Huang

Jul 16, 2002, 7:16:08 PM
"Howard J. Rogers" <d...@hjrdba.com> wrote in message news:<agvicv$s65$1...@lust.ihug.co.nz>...

> "Yong Huang" <yon...@yahoo.com> wrote in message
> news:b3cb12d6.02071...@posting.google.com...
> > Never use a log_buffer larger than 1M. It's useless over that limit.
> >
>
> > Not quite true. On very heavy-transactional-load systems, a buffer up to 5
> or 6Mb *may* (or may not!) be appropriate. Had Oracle ever decided that
> anything bigger than 1Mb was utterly "useless", they would never have
> introduced the rule about LGWR flushing every 1Mb of uncommitted redo
> (because that rule doesn't get invoked until the buffer is bigger than 3Mb,
> because of the 'flush when 1/3rd full' rule).

OK. Maybe setting log_buffer up to 3 MB is appropriate. Above that,
LGWR has to write when the uncommitted redo reaches 1 MB anyway. It is
very unlikely for foreground sessions to generate enough redo to fill
more than 2 MB of log buffer while LGWR is writing to disk. (Otherwise
hitting the 1 MB mark would also be much more frequent.) Not sure if I
am expressing myself clearly.

In fact, this discussion is not relevant to the original poster's
problem. His commit rate is so high that adjusting log_buffer shouldn't
have much effect, because LGWR writing is triggered by the application,
not by the (1) 3-second, (2) 1/3-buffer-full, or (3) 1 MB rule. Steve
Adams once had a newsletter "condemning" most developers' bad habit of
committing too frequently and not using PL/SQL stored code. I still
propose that his application be overhauled to solve the problem.

>
> > Do you have multiple log members per group? Use 1 member.
>
> I definitely can't agree with this last suggestion. 1 member groups? A
> recipe for data loss.
>
> You're right that it might help eliminate some performance woes, of course.
> Oracle has to do less work, so yeah, of course things work faster. But in
> the eternal trade-off between security and performance, you've just plumped
> squarely on the performance side of things. Which may or may not be
> acceptable to the original poster, but it's certainly not what I would want
> to recommend.

The original poster says he's using Veritas Quick I/O. We should
assume data redundancy is already done at the lower level. I always
propose 1 log member at the company I work for, because we either
have Veritas or (on non-production servers) Solstice DiskSuite. Using
2 or more log members used to be prevalent.

Yong Huang

Yong Huang

Jul 16, 2002, 7:18:18 PM
Herman de Boer <h.de...@itcg.nl> wrote in message news:<ah0kvu$e5o$1...@news1.xs4all.nl>...

> I Agree with Howard.
> See also the full disclosure report of the TCP-H benchmark, 9iR2, on a
> HP-9000 superdome, First edition dated on 24-jun-2002.
>
> log_buffer has been set to 32 Mbyte.

Regardless of what we say in discussion, I always trust benchmarks.
Where can I find the result (I mean the URL)? Does it show 32MB doing
much better than a small log buffer?

Yong Huang

Howard J. Rogers

Jul 16, 2002, 7:24:04 PM


But disk mirroring (if that's what you're referring to) is a disaster just
waiting to happen if that's all you've got. LGWR makes an error writing to
your one member, and the hardware immediately replicates that corruption
onto your mirror(s). You've now got two identical copies of useless redo
logs. You've no protection against corruption at all, merely against the
physical loss of the redo log (which is better than nothing, I suppose, but
it's not an adequate level of protection).

For protection against those sorts of errors you *have* to have multiple
members. Mirror the multiple members, by all means, but it's only by getting
LGWR to write more than once that you have protection against corruption of
redo.

Multiple members ought to be considered compulsory, whatever hardware you
happen to be running on.

HJR


> Yong Huang


Howard J. Rogers

Jul 16, 2002, 7:37:04 PM

"Yong Huang" <yon...@yahoo.com> wrote in message
news:b3cb12d6.02071...@posting.google.com...
> "Howard J. Rogers" <d...@hjrdba.com> wrote in message
news:<agvicv$s65$1...@lust.ihug.co.nz>...
> > "Yong Huang" <yon...@yahoo.com> wrote in message
> > news:b3cb12d6.02071...@posting.google.com...
> > > Never use a log_buffer larger than 1M. It's useless over that limit.
> > >
> >
> > Not quite true. On very heavy-transactional-load systems, a buffer uo to
5
> > or 6Mb *may* (or may not!) be appropriate. Had Oracle ever decided that
> > anything bigger than 1Mb was utterly "useless", they would never have
> > introduced the rule about LGWR flushing every 1Mb of uncommitted redo
> > (because that rule doesn't get invoked until the buffer is bigger than
3Mb,
> > because of the 'flush when 1/3rd full' rule).
>
> OK. Maybe setting log_buffer to up to 3 MB is appropriate. Above that,
> LGWR has to write when the uncommitted redo reaches 1MB anyway. It'll
> be very unlikely for foreground sessions to generate redo to fill more
> than 2 MB log buffer when LGWR is writing to disk. (Otherwise reaching
> to the 1MB mark would also be much more frequent). Not sure if I
> express myself clearly.

Well, I think you're wrong on this one too. Entries into the log buffer are
being made entirely in memory by server processes. Yet LGWR only clears the
buffer by writing its contents down to slow spinning chunks of metal called
hard disks. It is therefore perfectly possible for the log buffer to fill
much more quickly than LGWR can clear it. Additional space in the buffer
gives LGWR a chance to clear the first 1Mb before you come back wanting to
over-write its contents.

Incidentally, reaching the 1Mb mark is meaningless if LGWR is still busy
flushing the first 1Mb. It doesn't mean LGWR just stops flushing that first
1Mb and moves on to the second. That's the point: on a busy system, you can
easily get a backlog of flushing building up, and fill up your second (and
third and fourth etc) 1Mb sections whilst the first 1Mb is still being
flushed.

Therefore, 3Mb is a complete red herring, too. The buffer needs to be as big
as it needs to be to prevent redo allocation retries. That is *usually*
about 1Mb or so, but on heavily-loaded systems, that could well be 6Mb, 24Mb
or any other megabytage you care to mention.

>
> In fact, this discussion is not relevant to the original poster's
> problem.

I didn't say it necessarily was relevant, but you made the sweeping
statement about anything over 1Mb being useless, and that comment really
doesn't deserve to be left in the air without challenge. Likewise, any
blanket statement that anything over 3Mb is a waste of time is equally
questionable.

HJR

Sean M

Jul 16, 2002, 7:46:06 PM
Yong Huang wrote:
>
> The original poster says he's using Veritas Quick I/O. We should
> assume data redundancy is already done at the lower level. I always
> propose 1 log member at the company I'm working at, because we either
> have Veritas or (on non-production servers) Solaris Disk Suite. Using
> 2 or more log members used to be prevalent.

It still is. Online redos are so important that most business critical
systems I know of mirror at the Oracle level (multiple members per
group) and at the disk level (multiple mirrors per member). They each
protect against different failures - the former against more
logical-type errors (e.g. corruption caused by the OS/disk-subsytem,
etc.), the latter against hardware failure. Your disk mirroring
mechanism (whether hardware- or software-based), like anything else in
the computer world, is not immune to bugs.

If I had to pick one method though, it'd be mirroring at the Oracle
level to members on non-mirrored, but physically separate disks. That
way when someone goes crazy with the rm command, your database doesn't
crash. Not that that's ever happened, of course. ;)

Regards,
Sean

srivenu

Jul 17, 2002, 12:29:36 AM
I'm not sure in which version, but I read somewhere (and even tested it)
that if you commit inside PL/SQL program units, the session will not
wait until the log is flushed.
Can you try that?
What application are you using?
Does it use anonymous PL/SQL blocks?
How about converting them to stored program units?

srivenu

Jul 17, 2002, 1:59:00 AM
Please note this as well:

The Veritas documentation states that Oracle performs best
with Quick I/O disabled and Oracle Disk Manager turned on.

QIO should only help much if your I/O is very random and cannot be
contained in cache. Sequential I/O will not benefit and may well be
dramatically worse under QIO.

chao_ping

Jul 17, 2002, 3:18:13 AM
Herman De Boer wrote:
> Howard J. Rogers wrote:
> [ Agree with Howard. See also the full disclosure report of the TPC-H

> benchmark, 9iR2, on a HP-9000 superdome, First edition dated on
> 24-jun-2002.
> log_buffer has been set to 32 Mbyte.
> Kind Regards,
> Herman de Boer sr. consultant IT Consultancy Group bv


Thanks, can you give me the link to the page you mentioned? Is it an
OLTP system or a DSS system? Thanks.

chao_ping

Jul 17, 2002, 3:18:13 AM


Thanks to Yong, Howard, Lewis and the others who read and answered. You
are all experts I admire. Since this database supports an internet
application, and all other services depend on it, I cannot restart it
as I wish, so I have to think again and again before making any
modification and bouncing it. As for the commits/second, I calculate it
from the statspack report:

Load Profile
~~~~~~~~~~~~            Per Second    Per Transaction
Redo size:               76,336.86           2,208.94
Logical reads:           12,555.73             363.32

I get the commits per second by dividing (value per second) / (value
per transaction). Is that the right way to calculate the commits
(transactions per second, since there are few rollbacks)?
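That derivation, sketched with the Load Profile numbers above:

```python
# Transactions (commits) per second derived from the Load Profile:
# (redo bytes per second) / (redo bytes per transaction).
redo_per_second = 76_336.86
redo_per_transaction = 2_208.94

commits_per_second = redo_per_second / redo_per_transaction
print(f"~{commits_per_second:.1f} commits/second")
```

This gives roughly 35 commits/second, close to the ~40 the poster quotes elsewhere in the thread (the Load Profile numbers come from different snapshots).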

I carefully checked the I/O on the redo log volume (all redo log files
are on a separate volume with Veritas Quick I/O) and got the following
data (via vxstat): there are about 10-12 writes to the redo volume per
second, and each write is about 10 blocks, so each write is about 5K of
data. About 60-80K of data is written to the redo log per second, and
around 250M is written per log (a log switch happens about every 40-60
minutes and my redo log size is 256M).

So it is clear that it is not the amount of data written to the redo
logfile and the disk array, but the frequency of the writes, that
causes the log file sync wait event.

The painful thing is that I cannot change the slow RAID-5 disk array;
the CTO said it needs too much downtime and some risk. As for the
log_buffer size:

SQL> select * from v$sysstat where name like '%redo%';

STATISTIC# NAME                                CLASS      VALUE
---------- ------------------------------ ---------- ----------
        61 redo synch writes                       8   38142734
        62 redo synch time                         8  322428926
       100 redo entries                            2  304270016
       101 redo size                               2 8.2670E+10
       102 redo buffer allocation retries          2      50007
       103 redo wastage                            2  600523436
       104 redo writer latching time               2       2660
       105 redo writes                             2   29240900
       106 redo blocks written                     2  185244483
       107 redo write time                         2   93445220
       108 redo log space requests                 2       4816
       109 redo log space wait time                2     245683
       110 redo log switch interrupts              2          0
       111 redo ordering marks                     2        213

I also read some threads on Google and on the Ixora site. At
http://www.ixora.com.au/q+a/redo.htm#25_09_1999, Steve Adams says that
120-160K should be a proper log buffer size, but I wonder whether that
value suits my system, as there are still "redo log space requests" and
"redo log space wait time" even with the buffer set to 2M. Neither my
former DBA nor I changed the checkpoint parameters; they are at the
default values, log_checkpoint_interval = 10000 and
log_checkpoint_timeout = 1800. So I think I had better set the interval
to some larger value and disable the timeout checkpoint. As for the
redo log members, I have just one member. We are on RAID-5, so if a
single disk fails it is still OK; having two members puts more pressure
on the RAID, and I am willing to take this risk since we have a standby
database. :) So the concern here is the log_buffer size. My main aim is
to reduce the log file sync wait event. I want to run some tests on
another machine with similar hardware and software. Can someone give me
suggestions on how to simulate the workload? I gather from Lewis's post
that a loop with commits in PL/SQL does not cause log file sync, so how
can I test whether a large log buffer is better or a smaller one is OK?
Thanks!!!

Herman de Boer

Jul 17, 2002, 5:24:41 AM
chao_ping wrote:

>Herman De Boer wrote:
> > Howard J. Rogers wrote:
> > [ Agree with Howard. See also the full disclosure report of the TPC-H
> > benchmark, 9iR2, on a HP-9000 superdome, First edition dated on
> > 24-jun-2002.
> > log_buffer has been set to 32 Mbyte.
> > Kind Regards,
> > Herman de Boer sr. consultant IT Consultancy Group bv
>
>
>Thanks, can you give me the link to the page you said? Is it an OLTP
>system or DSS system? Thanks.
>
>
>

http://www.tpc.org/tpch/results/tpch_perf_results.asp
Look for the 1000Gb results.
"The TPC Benchmark H (TPC-H) is a decision support benchmark."

Kind Regards,

Herman de Boer
sr consultant
IT Consultancy Group bv

Yong Huang

Jul 17, 2002, 12:42:39 PM
Hi, Howard,

Writing log entries is parallelized (wait event: log file parallel
write). Only log file headers are written sequentially. This means that
the same log entries are written to all members of a log group. If the
written data is wrong, it's wrong in all members. Think carefully when
you say "LGWR makes an error". It can mean Oracle's error or the error
of anything underneath (OS, RAID hardware). The underlined OS or
hardware only replicates Oracle's error. Are you saying LGWR can write
correct logs to one member but corrupt logs to another?

Yong Huang

you wrote:

waiting to happen if that's all you've got. LGWR makes an error writing
to
your one member -and the hardware immediately replicates that corruption
onto your mirror(s). You've now got two identical copies of useless redo

Howard J. Rogers

Jul 17, 2002, 4:09:08 PM

"Yong Huang" <yon...@yahoo.com> wrote in message
news:31c7ecb9ecd7247035a...@mygate.mailgate.org...

> Hi, Howard,
>
> Writing log entries is parallelized (wait event: log file parallel
> write). Only log file headers are written sequentially. This means that
> the same log entries are written to all members of a log group. If the
> written data is wrong, it's wrong in all members. Think carefully when
> you say "LGWR makes an error".

I do so wish that people would stop telling me to "think carefully" as if I
don't.

>It can mean Oracle's error or the error
> of anything underneath (OS, RAID hardware). The underlined OS or
> hardware only replicates Oracle's error. Are you saying LGWR can write
> correct logs to one member but corrupt logs to another?

Yes, that's exactly what I'm saying.

HJR

chao_ping

Jul 18, 2002, 6:13:49 AM
Thanks to everyone who replied to my question. Finally I asked the
developer to modify the application to reduce the commits to the
database. No matter how I tune the system, it is of little help. Thanks.

chao_ping

Jul 23, 2002, 3:26:24 AM
Thanks again to everyone who replied. We solved this problem by
modifying the application, batching 100 commits into one commit for the
most used DML statement. The log file sync wait has dropped from 75% of
total wait time (just before putting the modified app online) to 38%,
and there are now about 6-9 commits/second. The most frequently used
service (from Tuxedo) dropped from 0.27s to 0.00??s (less than 0.01s).
It was very successful. :)