
lms flush message acks


alek

Jun 20, 2006, 8:03:05 AM
Hi,

I have a 10gR2 (10.2.0.2) RAC database and I'm trying to figure out
what the "lms flush message acks" wait event means. In the WAIT
EVENT section of the AWR report this event is constantly in first place,
and I want to know more about it, but unfortunately I cannot find any
valuable information in the official Oracle documentation or on Metalink.
How can I diagnose this wait event? What does it mean, and what can be
done to remove this bottleneck?
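
For what it is worth, outside AWR the same numbers can be seen with a
query along these lines (just a sketch; the event name is copied from
the AWR report, so adjust it if yours is spelled differently):

  select inst_id, event, total_waits, time_waited, average_wait
  from   gv$system_event
  where  event = 'lms flush message acks'
  order  by inst_id;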

Many thanks.

Alec.

joel garry

Jun 20, 2006, 4:39:57 PM

LMS - Global Cache Service Process

I don't know anything about this stuff, but from perusing the docs I
wonder if there is any actual problem. It might just show that you are
waiting on block flushes to disk because everything is just peachy and
you are hitting your I/O limits (or you are seeing the result of the
fix for log sync bug 4755405). Or it might mean you are using the
wrong interconnect. See
http://download-west.oracle.com/docs/cd/B19306_01/rac.102/b14197/monitor.htm
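
If it is the interconnect, one quick sanity check (a sketch; both views
should exist on 10gR2, but verify on your release) is to ask each
instance which network it is actually using for cluster traffic:

  -- which interconnect each instance picked, and where that choice came from
  select inst_id, name, ip_address, is_public, source
  from   gv$cluster_interconnects
  order  by inst_id;

  -- every interconnect the instances know about
  select inst_id, name, ip_address, is_public
  from   gv$configured_interconnects
  order  by inst_id;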

Might be worthwhile asking support.

jg
--
@home.com is bogus.
"You got to work fast when you're working for free." - Wyland

alek

Jun 21, 2006, 2:47:14 AM
Thanks a lot.

alec.

K Gopalakrishnan

Jul 1, 2006, 9:54:45 PM
Alec,

We would need some more details (like what your application is doing,
any specific DDLs on specific objects) to find the RCA. There were
some bugs in the dynamic remastering area which cause excessive LMS
flush waits, and support involvement would be required to confirm that.
Alternatively, you can disable dynamic remastering and see whether
this problem goes away.
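
A heavily hedged sketch of how that is usually done on 10gR2 (these are
hidden parameters whose names I am quoting from memory, not from this
thread; confirm them with Oracle Support before changing anything):

  -- disable object-affinity based dynamic remastering; needs an instance restart
  alter system set "_gc_affinity_time"=0 scope=spfile sid='*';
  -- optionally also disable undo-affinity remastering
  alter system set "_gc_undo_affinity"=false scope=spfile sid='*';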

-Gopal

yon...@yahoo.com

Jul 1, 2006, 10:08:35 PM
> alek wrote:
> > Hi,
> >
> > I have a 10gR2 (10.2.0.2) RAC database and I'm trying to figure out
> > what the "lms flush message acks" wait event means. In the WAIT
>
> K Gopalakrishnan wrote:
> Alec,
>
> We would need some more details (like what your application is doing,
> any specific DDLs on specific objects) to find the RCA. There were
> some bugs in the dynamic remastering area which cause excessive LMS
> flush waits, and support involvement would be required to confirm that.
> Alternatively, you can disable dynamic remastering and see whether
> this problem goes away.
>
> -Gopal

Hi, Gopal,

Are you referring to the parameter _lm_dynamic_remastering? It looks
like for that version it's already false.

Yong Huang

yon...@yahoo.com

Jul 1, 2006, 10:49:30 PM
yon...@yahoo.com wrote:
>
> Hi, Gopal,
>
> Are you referring to the parameter _lm_dynamic_remastering? It looks
> like for that version it's already false.
>
> Yong Huang

An addition: x$ksppsv.ksppstdf (which, judging by the column name, is
the 'is default' flag) is actually 'TRUE' for this parameter, but my
spfile did not set it.
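
A query along these lines shows the system value and that flag side by
side (a sketch; the x$ tables need a SYS connection, and the column
meanings are as I understand them):

  select i.ksppinm  as name,
         v.ksppstvl as value,
         v.ksppstdf as is_default
  from   x$ksppi i, x$ksppsv v
  where  i.indx = v.indx
  and    i.ksppinm like '\_lm_dynamic%' escape '\';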

Yong Huang

K Gopalakrishnan

Jul 2, 2006, 3:21:28 AM
Yong,

Dynamic remastering has undergone tremendous changes in recent
versions. It is controlled by multiple _gc_affinity parameters.
Since 10gR2 does remastering at the object level, I suspect that could
be the issue for the original poster. However, he can confirm that with
a simple 10046 trace, where the lms flush waits will show the object ids.
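
A sketch of how to collect such a trace (10046 level 8 includes wait
events; the sid/serial# values below are placeholders):

  -- trace your own session, waits included
  alter session set events '10046 trace name context forever, level 8';
  -- ... reproduce the workload ...
  alter session set events '10046 trace name context off';

  -- or trace another session on 10g via DBMS_MONITOR
  exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 45, waits => TRUE);
  exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 45);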


-Gopal

Anand Rao

Jul 3, 2006, 1:18:52 AM
Alek,

Firstly, what makes you think that 'lms flush message acks' is a
problem? On what basis did you decide that it is an issue?

What is the exact nature of the problem you are facing in your RAC
database? Or is it that you are just bothered about this particular
wait event?

Could you send us the top 5 wait events from your AWR or Statspack
report?

Assuming you have not changed any of the default values for the _gc*
parameters, LMS-related log flushes are generally caused by a high
number of requests for current mode blocks from remote instances.
Over-committing in the application can also contribute to the problem.

I am not going into internal Oracle causes for this wait event here,
only application/user-created causes.

Frequent log flushes (and hence redo log writes) can be the result of a
high number of current block transfers across the interconnect (because
remote instances repeatedly ask for current blocks). Most (if not all)
current block requests by remote instances require that the holder flush
its redo before sending across the dirty block. LMS does this job.

Another indirect cause is a slow LGWR (due to slow disks where the redo
logs are placed). Are they on raw devices or ASM?

What are the values for the following statistics on all instances?
(A query for them is sketched after the list below.)

"gc current block flush time"
"Avg global cache current block flush time (ms)"
"Global cache log flushes for current blocks served %"

What is the value of fast_start_mttr_target in your instances?

There is some useful information you can dig up in
V$CURRENT_BLOCK_SERVER.
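
A sketch of the remaining two checks (the FLUSH* columns in
V$CURRENT_BLOCK_SERVER are, as far as I recall, a histogram of how long
log flushes took before current blocks were served):

  -- per-instance pin/flush/write histograms for current block serving
  select * from gv$current_block_server;

  -- and the parameter asked about above
  show parameter fast_start_mttr_target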

cheers
anand
