
AIO and Buffer Waits


Habichtsberg, Reinhard

Jul 13, 2011, 2:49:07 AM
to inform...@iiug.org

We see the following wait states on one of our Informix 11.50.FC7W3 servers. It is running OLTP but also decision-support workloads.

 

reason       (sum)
aio          81125424549,00
buffer       62105207794,00
mt yield n   61713678094,10
mt yield     61561288458,60
mt ready     29339327941,19
running      15649570039,20
log i/o      3340893922,100
mt yield 0   1550362515,000
checkpoint   1439244418,300
lock         162272243,0000
mutex        157356745,3000
full Q       21639138,60000
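
For reference, per-reason wait sums like these are aggregated thread wait statistics; onstat -g wst prints the underlying per-thread wait counters (assuming that, or an SMI query over the same data, is where these figures came from):

   onstat -g wst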

 

What do you suggest to reduce the AIO and buffer waits? My idea is to increase the buffers.

 

TIA,

Reinhard.

Art Kagel

Jul 13, 2011, 6:23:57 AM
to Habichtsberg, Reinhard, inform...@iiug.org
Most bufwaits are caused by contention for the LRU queues, so increase the LRU queues for your buffer caches to the maximum first.  Use the BTR calculation, and observe activity in onstat -P over time, to determine whether you may also need more buffers.
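
A quick way to watch both sides of that over time, using standard onstat options (onstat -R is not something Art names here, but it shows the LRU queues directly):

   onstat -R    # LRU queues: length and dirty-buffer counts per queue
   onstat -P    # what is occupying the buffer pool, partition by partition
   onstat -p    # the pagreads/bufwrits/bufwaits counters that feed the BTR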

Art

Art S. Kagel
Advanced DataTools (www.advancedatatools.com)
Blog: http://informix-myview.blogspot.com/

Disclaimer: Please keep in mind that my own opinions are my own opinions and do not reflect on my employer, Advanced DataTools, the IIUG, nor any other organization with which I am associated either explicitly, implicitly, or by inference.  Neither do those opinions reflect those of other individuals affiliated with any entity with which I am affiliated nor those of the entities themselves.



_______________________________________________
Informix-list mailing list
Inform...@iiug.org
http://www.iiug.org/mailman/listinfo/informix-list


Habichtsberg, Reinhard

Jul 13, 2011, 8:29:13 AM
to inform...@iiug.org

Hi Art,

 

BTR is 48.57 per hour. According to Lester Knutsen it should be < 6 per hour.

 

BUFFERPOOL looks like this, and AUTO_LRU_TUNING is turned on:

 

BUFFERPOOL      size=2K,buffers=2400000,lrus=60,lru_min_dirty=0.200000,lru_max_dirty=0.400000

AUTO_LRU_TUNING 1

 

onstat -p (last onstat -z 14 hours ago)

 

IBM Informix Dynamic Server Version 11.50.FC7W3   -- On-Line -- Up 54 days 01:54:20 -- 8984576 Kbytes

 

Profile

dskreads   pagreads   bufreads   %cached dskwrits   pagwrits   bufwrits   %cached

142604256  1410570080 59284431722 99.76   61872253   98433717   227436232  72.80

 

isamtot    open       start      read       write      rewrite    delete     commit     rollbk

52046254582 216281491  457585123  48750470513 17730429   30711047   19268677   18995943   1446067

 

gp_read    gp_write   gp_rewrt   gp_del     gp_alloc   gp_free    gp_curs

14         0          9          0          0          0          0

 

ovlock     ovuserthread ovbuff     usercpu  syscpu   numckpts   flushes

0          0            0          317120.96 38441.46 134        163

 

bufwaits   lokwaits   lockreqs   deadlks    dltouts    ckpwaits   compress   seqscans

20157325   126775     1927754927 0          0          5149       6933881    1205132

 

ixda-RA    idx-RA     da-RA      RA-pgsused lchwaits

66150442   1636318    30850997   97300297   20233621

 

What would be a reasonable value for lrus?

 

TIA, Reinhard.

Art Kagel

Jul 13, 2011, 8:34:44 AM
to Habichtsberg, Reinhard, inform...@iiug.org
OK, your bufwaits ratio is less than one, and given the BTR as well, you are correct: you need more buffers.  Note that on a system with lots of inserts relative to selects, the BTRs tend to be higher, but 48/hr is high even so.  As you, and Lester, note, BTR should be single digits per hour; an ideal for a typical OLTP system is around 6.0, but every system is different.  Once you get the performance problem licked, keep tracking it so you know what's normal for your system.
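
For reference, the onstat -p numbers above reproduce that BTR figure if BTR is read as buffer-pool turnovers per hour, i.e. (pagreads + bufwrits) / buffers / elapsed hours (that formula is my reading of the calculation, not a quote):

   (1410570080 + 227436232) / 2400000 buffers = ~682.5 turnovers
   682.5 turnovers / ~14 hours since onstat -z = ~48.8 per hour

which lines up with the 48.57/hr reported; the small gap is just the exact elapsed time.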


Art

Art S. Kagel
Advanced DataTools (www.advancedatatools.com)
Blog: http://informix-myview.blogspot.com/

Disclaimer: Please keep in mind that my own opinions are my own opinions and do not reflect on my employer, Advanced DataTools, the IIUG, nor any other organization with which I am associated either explicitly, implicitly, or by inference.  Neither do those opinions reflect those of other individuals affiliated with any entity with which I am affiliated nor those of the entities themselves.



Habichtsberg, Reinhard

Jul 13, 2011, 9:23:21 AM
to Alberto Romeu Pessonio Filho, inform...@iiug.org

Hi Alberto,

 

If I calculate the LRU queues as you suggested, the value would be 2400 LRUs, not 240. Are you sure that is correct?
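
Spelled out with the 750 to 1000 buffers per queue you suggested:

   2400000 buffers / 1000 per queue = 2400 queues
   2400000 buffers /  750 per queue = 3200 queues

so lrus=240 would only follow from 10000 buffers per queue.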

 

The values

AUTO_CKPTS 1

RTO_SERVER_RESTART 600

are set.

 

I cannot increase the physical log because my largest chunks are 2 GB, and you cannot use more than one chunk for the physical log.

 

The values of LRU_MAX_DIRTY and LRU_MIN_DIRTY are so low because we have 4.8 GB of BUFFERS. During checkpoints these buffers have to be written to disk. I'm afraid that the checkpoints would take too long if I increased those values and that this would cause blocking.
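
One way to check that fear against real numbers, since 11.50 keeps a checkpoint history:

   onstat -g ckp    # recent checkpoints with their duration and trigger

so the effect of raising the watermarks could be measured rather than guessed.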

 

Thanks,

Reinhard.

 

From: Alberto Romeu Pessonio Filho [mailto:arf...@orizonbrasil.com.br]
Sent: Wednesday, July 13, 2011 3:01 PM
To: Habichtsberg, Reinhard
Subject: RES: AIO and Buffer Waits

 

Reinhard:

 

Using the old-school rule of thumb, I suggest you calculate between 750 and 1000 buffers per LRU queue.

 

Also, try enabling AUTO_CKPTS 1 and RTO_SERVER_RESTART 1800 in your onconfig so that your instance works with RTO_SERVER_RESTART. It makes a difference to the performance of your environment. Check the size of your physical log dbspace after those changes using onstat -g ckp. If, after the changes, you have checkpoints triggered by the physical log, you must add space to the physical log dbspace.

 

I noticed that the values of lru_min_dirty and lru_max_dirty are too low. Try setting LRU_MAX_DIRTY=85 and LRU_MIN_DIRTY=75, and let your instance adjust these watermarks according to its workload. Also increase the LOCKS value of your instance (if you have free memory available), because I noticed that you had lokwaits. Each lock takes 44 bytes.
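
As a rough back-of-the-envelope with that 44-bytes-per-lock figure (the extra-lock count below is only an example, not a recommendation):

   1000000 additional LOCKS x 44 bytes = ~42 MB of extra shared memory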

 

                Try this:

 

                Configuration Parameters:

 

                AUTO_CKPTS 1

                RTO_SERVER_RESTART 1800

 

                BUFFERPOOL      size=2K,buffers=2400000,lrus=240,lru_min_dirty=75.00000,lru_max_dirty=85.00000

AUTO_LRU_TUNING 1

 

 

                I think that if you don't have storage bottlenecks (storage I/O of at least 500 MB/sec read/write), the performance gains will be good.

 

                Regards,

                Alberto.
