
memory resident tables


awal...@eb.com

Dec 5, 1998
I am trying to use the not well documented command:
set table <tablename> memory_resident;
to force tables to stay in main memory.
The command executes ok, but I don't see the expected performance
improvement.
Has anyone successfully used this and obtained significantly better
response times ?
Alfred Wallner

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own

andy lennard

Dec 7, 1998
In article <74c6st$ukq$1...@nnrp1.dejanews.com> awal...@eb.com writes:

> I am trying to use the not well documented command:
> set table <tablename> memory_resident;
> to force tables to stay in main memory.
> The command executes ok, but I don't see the expected performance
> improvement.
> Has anyone successfully used this and obtained significantly better
> response times ?
> Alfred Wallner

I'm not sure how I'd quantify performance improvement for this..
If a table sees 'heavy' use then it will effectively be 'memory resident'
because the info will all be present in the LRUs, dic etc. Forcing a table
to be memory resident will not help in this case.

For a lightly used table, forcing it to be resident may help in those
'rare' occasions when the table is accessed, and give snappier
results for the 'once a day' user. (but presumably to the detriment of
other users whose tables are 'coming and going' normally).

Real improvement would also depend on the number of rows to be returned
from this table. Sure, the first row may come back faster, but if there
are another 20,000 following it then that may be the major factor.

Perhaps the poor documentation reflects the poor gains to be achieved....

Andy.

--
Andy Lennard an...@kontron.demon.co.uk


awal...@eb.com

Dec 7, 1998
In article <913020...@kontron.demon.co.uk>,
an...@kontron.demon.co.uk wrote:
> In article <74c6st$ukq$1...@nnrp1.dejanews.com> awal...@eb.com writes:
>
> > I am trying to use the not well documented command:
> > set table <tablename> memory_resident;
> > to force tables to stay in main memory.
> > The command executes ok, but I don't see the expected performance
> > improvement.
> > Has anyone successfully used this and obtained significantly better
> > response times ?
> > Alfred Wallner
>
> I'm not sure how I'd quantify performance improvement for this..
> If a table sees 'heavy' use then it will effectively be 'memory resident'
> because the info will all be present in the LRUs, dic etc. Forcing a table
> to be memory resident will not help in this case.

yes, it's a heavily used table which we access from our web server.
I would like to avoid any disk access when querying this table. We have tons
of main memory and given that this table is getting accessed constantly, I
feel there is potential for significant response time improvement if the
entire table and its indices were cached in memory. But when I do load
testing and look at vmstat outputs, I still see the same number of disk
accesses and measure the same average response time. If this command worked
the way I think it should, there should be no more disk reads and therefore
improved response times, right, since going to disk is the costly operation ?
Or am I overlooking some other important factor ?

>
> For a lightly used table, forcing it to be resident may help in those
> 'rare' occasions when the table is accessed, and give snappier
> results for the 'once a day' user. (but presumably to the detriment of
> other users whose tables are 'coming and going' normally).
>
> Real improvement would also depend on the number of rows to be returned
> from this table. Sure, the first row may come back faster, but if there
> are another 20,000 following it then that may be the major factor.
>
> Perhaps the poor documentation reflects the poor gains to be achieved....
>

that is the impression that I got as well. I have spent much time talking to
informix tech support, but I haven't found a single support person who is
actually familiar with this option.

> Andy.
>
> --
> Andy Lennard an...@kontron.demon.co.uk
>
>


Art S. Kagel

Dec 7, 1998
andy lennard wrote:
>
> In article <74c6st$ukq$1...@nnrp1.dejanews.com> awal...@eb.com writes:
>
> > I am trying to use the not well documented command:
> > set table <tablename> memory_resident;
> > to force tables to stay in main memory.
> > The command executes ok, but I don't see the expected performance
> > improvement.
> > Has anyone successfully used this and obtained significantly better
> > response times ?
> > Alfred Wallner
>
> I'm not sure how I'd quantify performance improvement for this..
> If a table sees 'heavy' use then it will effectively be 'memory resident'
> because the info will all be present in the LRUs, dic etc. Forcing a table
> to be memory resident will not help in this case.
>
> For a lightly used table, forcing it to be resident may help in those
> 'rare' occasions when the table is accessed, and give snappier
> results for the 'once a day' user. (but presumably to the detriment of
> other users whose tables are 'coming and going' normally).
>
> Real improvement would also depend on the number of rows to be returned
> from this table. Sure, the first row may come back faster, but if there
> are another 20,000 following it then that may be the major factor.
>
> Perhaps the poor documentation reflects the poor gains to be achieved....

Where memory resident tables can REALLY help is on a VERY busy system
where several tables may be busy but a few are accessed only
periodically, yet performance on them is critical. In that case, without
residence, such tables will be swapped out in favor of more recently
accessed data. With residence set on the table, only activity on
another resident table can cause a resident table's pages to be reused.
This can speed up programs that need instant access to certain data
but are not run constantly, and so do not force the LRU algorithms to
keep their active pages resident.
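The effect described above can be sketched with a toy LRU buffer pool (a deliberate simplification I'm inventing for illustration; the engine's real LRU queues and buffer manager are far more involved). Pages flagged resident are simply never chosen as eviction victims:

```python
from collections import OrderedDict

class BufferPool:
    """Toy LRU buffer pool. Pages in `resident` are never chosen as
    eviction victims, loosely mimicking SET TABLE ... MEMORY_RESIDENT.
    (Illustration only, not how the real engine is implemented.)"""

    def __init__(self, capacity, resident=()):
        self.capacity = capacity
        self.resident = set(resident)
        self.pages = OrderedDict()  # recency order, oldest first
        self.disk_reads = 0

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)  # cache hit: mark most recent
            return
        self.disk_reads += 1              # cache miss: "read from disk"
        if len(self.pages) >= self.capacity:
            for victim in self.pages:     # evict oldest non-resident page
                if victim not in self.resident:
                    del self.pages[victim]
                    break
        self.pages[page] = None

def misses_for_A(resident):
    """Page 'A' is touched once every 10 ticks, while pages from busier
    tables churn through the pool in between."""
    pool = BufferPool(capacity=4, resident=resident)
    misses = 0
    for t in range(100):
        if t % 10 == 0:
            before = pool.disk_reads
            pool.access('A')
            misses += pool.disk_reads - before
        pool.access('B%d' % (t % 8))  # steady churn from busier tables
    return misses
```

In this run the periodically-touched page misses on every one of its 10 visits without residence, but only on the first visit with `resident={'A'}`, which is exactly the "instant access for rarely-run programs" case.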

Art S. Kagel

June Tong

Dec 10, 1998
awal...@eb.com wrote:

> In article <913020...@kontron.demon.co.uk>,
> an...@kontron.demon.co.uk wrote:
> > In article <74c6st$ukq$1...@nnrp1.dejanews.com> awal...@eb.com writes:
> >
> > > I am trying to use the not well documented command:
> > > set table <tablename> memory_resident;
> > > to force tables to stay in main memory.
> > > The command executes ok, but I don't see the expected performance
> > > improvement.
> > > Has anyone successfully used this and obtained significantly better
> > > response times ?
> > > Alfred Wallner
> >
> > I'm not sure how I'd quantify performance improvement for this..
> > If a table sees 'heavy' use then it will effectively be 'memory resident'
> > because the info will all be present in the LRUs, dic etc. Forcing a table
> > to be memory resident will not help in this case.
>

> yes, it's a heavily used table which we access from our web server.
> I would like to avoid any disk access when querying this table. We have tons
> of main memory and given that this table is getting accessed constantly, I
> feel there is potential for significant response time improvement if the
> entire table and its indices were cached in memory. But when I do load
> testing and look at vmstat outputs, I still see the same number of disk
> accesses and measure the same average response time. If this command worked
> the way I think it should, there should be no more disk reads and therefore
> improved response times, right, since going to disk is the costly operation ?
> Or am I overlooking some other important factor ?

How do you know that the table wasn't cached in memory before you set it
memory_resident? As has been pointed out, because of the way the LRU queues
work, heavily used tables tend to stay in memory, regardless of the
memory_resident setting. Therefore, there would be no difference between the
performance before and after setting it.


> that is the impression that I got as well. I have spent much time talking to
> informix tech support, but I haven't found a single support person who is
> actually familiar with this option.

Probably because it's new and isn't that useful. My impression is that it's one
of those things that was added because enough people who were used to "another
database product" requested it. I was hard-pressed to come up with any example
where it would actually help (actually, I never did come up with one). Maybe I
should save Art's post to remind me, but even that, I think, is a bit contrived.

June
--
jun...@hotmail.com
Grounded in Palo Alto, living on chocolate chip cookies

Art S. Kagel

Dec 11, 1998 (to June Tong)
June Tong wrote:
>
> awal...@eb.com wrote:
>
> > In article <913020...@kontron.demon.co.uk>,
> > an...@kontron.demon.co.uk wrote:
> > > In article <74c6st$ukq$1...@nnrp1.dejanews.com> awal...@eb.com writes:
[SNIP]

> database product" requested it. I was hard-pressed to come up with any example
> where it would actually help (actually, I never did come up with one). Maybe I
> should save Art's post to remind me, but even that, I think, is a bit contrived.

Yeah, I'm really good at contriving. :-) You should see my email to Cem
about how some users are going to get themselves in trouble overusing
resident tables, and how Informix should add an option to limit the
number of buffers allocatable to HIGH priority pages. No one in Menlo
could imagine it happening, but 7.31 has buffer priority aging to solve
the problem instead.

Art S. Kagel

awal...@eb.com

Dec 12, 1998
In article <36707D9B...@hotmail.com>,
June Tong <jun...@hotmail.com> wrote:
> awal...@eb.com wrote:
>
> > In article <913020...@kontron.demon.co.uk>,
> > an...@kontron.demon.co.uk wrote:
> > > In article <74c6st$ukq$1...@nnrp1.dejanews.com> awal...@eb.com writes:
> > >

ok, here is the sequence of commands I executed to test this:
1) onstat -z //zero out stats
2) onstat -p //check if all stats are zero
3) dbaccess db < cmd // cmd contains "set table A memory_resident;"
4) dbaccess db < cmd2 // cmd2 contains "select * from A;"
5) onstat -p // look at number of disk reads
6) onstat -z // zero out stats again
7) onstat -p // check if all stats are zero
8) dbaccess db < cmd2
9) onstat -p

At this point I expected to see close to 0 disk reads, because the
entire table should be in memory by now, but the number of disk reads is
about the same as in step 5.
Table A is about 40 MB, while we have 4 GB of main memory. Here are my
onconfig parameters for shared memory:

LOCKS         500000    # Maximum number of locks
BUFFERS       1000000   # Maximum number of shared buffers
NUMAIOVPS               # Number of IO vps
PHYSBUFF      32        # Physical log buffer size (Kbytes)
LOGBUFF       32        # Logical log buffer size (Kbytes)
LOGSMAX       6         # Maximum number of logical log files
CLEANERS      2         # Number of buffer cleaner processes
SHMBASE       0xa000000 # Shared memory base address
SHMVIRTSIZE   100000    # initial virtual shared memory segment size
##SHMADD      8192      # Size of new shared memory segments (Kbytes)
SHMADD        65536     # Size of new shared memory segments (Kbytes)
SHMTOTAL      0         # Total shared memory (Kbytes). 0=>unlimited
CKPTINTVL     3000      # Check point interval (in sec)
LRUS          8         # Number of LRU queues
LRU_MAX_DIRTY 60        # LRU percent dirty begin cleaning limit
LRU_MIN_DIRTY 50        # LRU percent dirty end cleaning limit
LTXHWM        50        # Long transaction high water mark percentage
LTXEHWM       60        # Long transaction high water mark (exclusive)
TXTIMEOUT     0x12c     # Transaction timeout (in sec)
STACKSIZE     32        # Stack size (Kbytes)


If anyone could explain why the Informix engine (BTW, we use IDS 7.3)
keeps going back to disk to read that 40 MB table, that would be great.
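For what it's worth, the read-cache rate that onstat -p itself reports (the %cached column) is derived from the bufreads and dskreads counters. A sketch of that arithmetic as a helper of my own (not an Informix tool), useful for sanity-checking the numbers from steps 5 and 9:

```python
def read_cache_pct(bufreads, dskreads):
    """Read-cache rate as a percentage: the share of buffer reads that
    did NOT require a disk read. This mirrors the %cached figure that
    onstat -p derives from its bufreads/dskreads counters."""
    if bufreads <= 0:
        return 100.0
    return 100.0 * (bufreads - dskreads) / bufreads

# A 40 MB table on 2 KB pages is roughly 20,000 pages. If a rescan of a
# supposedly resident table still shows ~20,000 dskreads against ~20,000
# bufreads, its read-cache rate for that run is:
#   read_cache_pct(20000, 20000) -> 0.0
# whereas a fully cached rescan would show:
#   read_cache_pct(20000, 0) -> 100.0
```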


>
> > that is the impression that I got as well. I have spent much time talking to
> > informix tech support, but I haven't found a single support person who is
> > actually familiar with this option.
>
> Probably because it's new and isn't that useful. My impression is that it's one
> of those things that was added because enough people who were used to "another
> database product" requested it. I was hard-pressed to come up with any example

My example is where the database is hooked up to a web server with lots of
requests coming in. In that case, you want the time spent accessing data
from the database to be as short as possible in order to increase throughput.
If you know all the requests are going against only a few tables, then you
would want to keep those tables in memory at all times.
Alfred

> where it would actually help (actually, I never did come up with one). Maybe I
> should save Art's post to remind me, but even that, I think, is a bit contrived.
>

> June
> --
> jun...@hotmail.com
> Grounded in Palo Alto, living on chocolate chip cookies
>
>


David Williams

Dec 30, 1998
In article <36707D9B...@hotmail.com>, June Tong
<jun...@hotmail.com> writes
>> >
<Someone else writes>

>> > I'm not sure how I'd quantify performance improvement for this..
>> > If a table sees 'heavy' use then it will effectively be 'memory resident'
>> > because the info will all be present in the LRUs, dic etc. Forcing a table
>> > to be memory resident will not help in this case.

>> that is the impression that I got as well. I have spent much time talking to
>> informix tech support, but I haven't found a single support person who is
>> actually familiar with this option.
>

>Probably because it's new and isn't that useful. My impression is that it's one
>of those things that was added because enough people who were used to "another
>database product" requested it. I was hard-pressed to come up with any example
>where it would actually help (actually, I never did come up with one). Maybe I
>should save Art's post to remind me, but even that, I think, is a bit contrived.
>

Many users all accessing small 'reference data' tables every few tens
of seconds.

One user doing a sequential scan of several large tables keeps pushing
the pages from the small tables out of the buffer cache, e.g. a large
report which runs for several hours.

The one user will not revisit the rows once they have been scanned,
hence caching them makes no sense, but OnLine will still keep them in
the cache.

The many users will reuse the pages, but not immediately, hence they are
not the most recently used and get pushed out of the cache.

i.e. a case where LRU is not the best caching policy...
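This scenario is easy to reproduce with a toy LRU cache (again, a deliberate simplification of the real buffer pool, made up purely for illustration): a handful of reference pages re-read every round, interleaved with a report's sequential scan that is much larger than the cache.

```python
from collections import OrderedDict

def simulate(capacity, events):
    """Run a plain LRU cache over a stream of page ids like 'table:page';
    return per-table miss ('disk read') counts."""
    cache = OrderedDict()
    misses = {}
    for page in events:
        table = page.split(':')[0]
        if page in cache:
            cache.move_to_end(page)          # hit: mark most recent
        else:
            misses[table] = misses.get(table, 0) + 1
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict least recently used
            cache[page] = None
    return misses

# Many users re-reading a 5-page reference table, interleaved with a
# report sequentially scanning 1,000 distinct pages of a large table:
events = []
for _ in range(10):
    events += ['ref:%d' % p for p in range(5)]      # reference lookups
    events += ['big:%d' % p for p in range(1000)]   # report's seq scan
misses = simulate(capacity=100, events=events)
# Every round the big scan flushes the reference pages out of the cache,
# so they miss on every revisit, even though they are the pages the many
# users actually care about.
```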

>June
>--
>jun...@hotmail.com
>Grounded in Palo Alto, living on chocolate chip cookies
>
>

--
David Williams
