
Plans for solving the VACUUM problem


Tom Lane

May 17, 2001, 7:09:11 PM
I have been thinking about the problem of VACUUM and how we might fix it
for 7.2. Vadim has suggested that we should attack this by implementing
an overwriting storage manager and transaction UNDO, but I'm not totally
comfortable with that approach: it seems to me that it's an awfully large
change in the way Postgres works. Instead, here is a sketch of an attack
that I think fits better into the existing system structure.

First point: I don't think we need to get rid of VACUUM, exactly. What
we want for 24x7 operation is to be able to do whatever housekeeping we
need without locking out normal transaction processing for long intervals.
We could live with routine VACUUMs if they could run in parallel with
reads and writes of the table being vacuumed. They don't even have to run
in parallel with schema updates of the target table (CREATE/DROP INDEX,
ALTER TABLE, etc). Schema updates aren't things you do lightly for big
tables anyhow. So what we want is more of a "background VACUUM" than a
"no VACUUM" solution.

Second: if VACUUM can run in the background, then there's no reason not
to run it fairly frequently. In fact, it could become an automatically
scheduled activity like CHECKPOINT is now, or perhaps even a continuously
running daemon (which was the original conception of it at Berkeley, BTW).
This is important because it means that VACUUM doesn't have to be perfect.
The existing VACUUM code goes to huge lengths to ensure that it compacts
the table as much as possible. We don't need that; if we miss some free
space this time around, but we can expect to get it the next time (or
eventually), we can be happy. This leads to thinking of space management
in terms of steady-state behavior, rather than the periodic "big bang"
approach that VACUUM represents now.

But having said that, there's no reason to remove the existing VACUUM
code: we can keep it around for situations where you need to crunch a
table as much as possible and you can afford to lock the table while
you do it. The new code would be a new command, maybe "VACUUM LAZY"
(or some other name entirely).

Enough handwaving, what about specifics?

1. Forget moving tuples from one page to another. Doing that in a
transaction-safe way is hugely expensive and complicated. Lazy VACUUM
will only delete dead tuples and coalesce the free space thus made
available within each page of a relation.

2. This does no good unless there's a provision to re-use that free space.
To do that, I propose a free space map (FSM) kept in shared memory, which
will tell backends which pages of a relation have free space. Only if the
FSM shows no free space available will the relation be extended to insert
a new or updated tuple.

3. Lazy VACUUM processes a table in five stages:
A. Scan relation looking for dead tuples; accumulate a list of their
TIDs, as well as info about existing free space. (This pass is
completely read-only and so incurs no WAL traffic.)
B. Remove index entries for the dead tuples. (See below for details.)
C. Physically delete dead tuples and compact free space on their pages.
D. Truncate any completely-empty pages at relation's end. (Optional,
see below.)
E. Create/update FSM entry for the table.
Note that this is crash-safe as long as the individual update operations
are atomic (which can be guaranteed by WAL entries for them). If a tuple
is dead, we care not whether its index entries are still around or not;
so there's no risk to logical consistency.

4. Observe that lazy VACUUM need not really be a transaction at all, since
there's nothing it does that needs to be cancelled or undone if it is
aborted. This means that its WAL entries do not have to hang around past
the next checkpoint, which solves the huge-WAL-space-usage problem that
people have noticed while VACUUMing large tables under 7.1.

5. Also note that there's nothing saying that lazy VACUUM must do the
entire table in one go; once it has accumulated a big enough batch of dead
tuples, it can proceed through steps B, C, D, and E even though it hasn't
scanned the whole table. This avoids a rather nasty problem that VACUUM has
always had with running out of memory on huge tables.
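
Pulling points 3 and 5 together, here is a toy skeleton in C of the control
flow I have in mind. Every function in it is a trivial stand-in invented
purely for illustration, not a real backend routine:

    #include <stdbool.h>
    #include <stdio.h>

    #define BATCH_SIZE 4            /* tiny, just to show the batching */

    typedef struct { unsigned block; unsigned short item; } TupleId;

    static int pass = 0;

    static int scan_for_dead_tuples(TupleId *dead, int max)
    {   /* stage A stand-in: a full batch first, then a partial one */
        (void) dead;
        return (++pass == 1) ? max : max / 2;
    }
    static void index_bulk_delete(const TupleId *dead, int n)
    {   /* stage B stand-in: called once per index in reality */
        (void) dead;
        printf("  B: drop %d index entries\n", n);
    }
    static void delete_and_compact(const TupleId *dead, int n)
    {   /* stage C stand-in */
        (void) dead;
        printf("  C: reclaim %d heap tuples, compact their pages\n", n);
    }
    static bool conditional_lock_exclusive(void) { return false; }
    static void truncate_empty_end_pages(void)   { printf("  D: truncate\n"); }
    static void update_free_space_map(void)      { printf("  E: update FSM\n"); }

    int main(void)
    {
        TupleId dead[BATCH_SIZE];
        int     ndead;

        /* A VacuumLock on the table is assumed held for the duration. */
        do
        {
            ndead = scan_for_dead_tuples(dead, BATCH_SIZE);   /* stage A */
            printf("A: collected %d dead tuple TIDs\n", ndead);
            index_bulk_delete(dead, ndead);                   /* stage B */
            delete_and_compact(dead, ndead);                  /* stage C */
        } while (ndead == BATCH_SIZE);  /* full batch: table not finished yet */

        if (conditional_lock_exclusive())   /* stage D: only if no one waits */
            truncate_empty_end_pages();
        update_free_space_map();            /* stage E */
        return 0;
    }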


Free space map details
----------------------

I envision the FSM as a shared hash table keyed by table ID, with each
entry containing a list of page numbers and free space in each such page.
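
As a very rough illustration (plain arrays standing in for the shared hash
table, and every name below made up for the purpose), an FSM entry and the
lookup a backend would do might look something like this in C:

    #include <stdint.h>

    #define FSM_MAX_RELS   64       /* budgeted number of relations */
    #define FSM_MAX_PAGES  32       /* budgeted pages per relation  */

    /* One page believed to have useful free space. */
    typedef struct
    {
        uint32_t page;              /* block number within the relation  */
        uint32_t free_bytes;        /* believed free space: a hint only  */
    } FsmPage;

    typedef struct
    {
        uint32_t rel_id;            /* table identifier (0 = unused slot) */
        int      npages;
        FsmPage  pages[FSM_MAX_PAGES];
    } FsmRel;

    static FsmRel fsm[FSM_MAX_RELS];    /* would live in shared memory */

    /*
     * Return a block number believed to have at least `needed` free bytes,
     * or -1 if the map has nothing to offer (the caller then extends the
     * relation instead).
     */
    long fsm_find_page(uint32_t rel_id, uint32_t needed)
    {
        int r, p;

        for (r = 0; r < FSM_MAX_RELS; r++)
        {
            if (fsm[r].rel_id != rel_id)
                continue;
            for (p = 0; p < fsm[r].npages; p++)
                if (fsm[r].pages[p].free_bytes >= needed)
                    return (long) fsm[r].pages[p].page;
            break;
        }
        return -1;
    }

Whatever the map says, the inserting backend must still check the page once
it actually has it in hand; as noted just below, the map is only a hint.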

The FSM is empty at system startup and is filled by lazy VACUUM as it
processes each table. Backends then decrement/remove page entries as they
use free space.

Critical point: the FSM is only a hint and does not have to be perfectly
accurate. It can omit space that's actually available without harm, and
if it claims there's more space available on a page than there actually
is, we haven't lost much except a wasted ReadBuffer cycle. This allows
us to take shortcuts in maintaining it. In particular, we can constrain
the FSM to a prespecified size, which is critical for keeping it in shared
memory. We just discard entries (pages or whole relations) as necessary
to keep it under budget. Obviously, we'd not bother to make entries in
the first place for pages with only a little free space. Relation entries
might be discarded on a least-recently-used basis.

Accesses to the FSM could create contention problems if we're not careful.
I think this can be dealt with by having each backend remember (in its
relcache entry for a table) the page number of the last page it chose from
the FSM to insert into. That backend will keep inserting new tuples into
that same page, without touching the FSM, as long as there's room there.
Only then does it go back to the FSM, update or remove that page entry,
and choose another page to start inserting on. This reduces the access
load on the FSM from once per tuple to once per page. (Moreover, we can
arrange that successive backends consulting the FSM pick different pages
if possible. Then, concurrent inserts will tend to go to different pages,
reducing contention for shared buffers; yet any single backend does
sequential inserts in one page, so that a bulk load doesn't cause
disk traffic scattered all over the table.)
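
In code form, that per-backend caching might look roughly like this (the two
helpers declared extern are hypothetical stand-ins, one of them being the
fsm_find_page sketched earlier; a fuller version would also update or drop
the page's FSM entry when it moves on):

    #include <stdint.h>

    /* Hypothetical stand-ins for the FSM lookup and for checking actual
     * free space on a page once we have it pinned. */
    extern long     fsm_find_page(uint32_t rel_id, uint32_t needed);
    extern uint32_t page_free_space(uint32_t rel_id, long page);

    /* Remembered in the backend's relcache entry for the table. */
    typedef struct
    {
        long cur_insert_page;       /* -1 until we've asked the FSM once */
    } RelCacheInsertHint;

    /*
     * Pick a page for inserting a tuple of `len` bytes.  We keep hitting
     * the remembered page until it fills up, and only then go back to the
     * shared FSM, so FSM traffic is about once per page, not once per
     * tuple.  A return of -1 means "extend the relation".
     */
    long choose_insert_page(RelCacheInsertHint *hint, uint32_t rel_id,
                            uint32_t len)
    {
        if (hint->cur_insert_page >= 0 &&
            page_free_space(rel_id, hint->cur_insert_page) >= len)
            return hint->cur_insert_page;   /* no FSM access needed */

        /* Remembered page is full (or we never had one): ask the FSM. */
        hint->cur_insert_page = fsm_find_page(rel_id, len);
        return hint->cur_insert_page;
    }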

The FSM can also cache the overall relation size, saving an lseek kernel
call whenever we do have to extend the relation for lack of internal free
space. This will help pay for the locking cost of accessing the FSM.


Locking issues
--------------

We will need two extensions to the lock manager:

1. A new lock type that allows concurrent reads and writes
(AccessShareLock, RowShareLock, RowExclusiveLock) but not anything else.
Lazy VACUUM will grab this type of table lock to ensure the table schema
doesn't change under it. Call it a VacuumLock until we think of a better
name.

2. A "conditional lock" operation that acquires a lock if available, but
doesn't block if not.

The conditional lock will be used by lazy VACUUM to try to upgrade its
VacuumLock to an AccessExclusiveLock at step D (truncate table). If it's
able to get exclusive lock, it's safe to truncate any unused end pages.
Without exclusive lock, it's not, since there might be concurrent
transactions scanning or inserting into the empty pages. We do not want
lazy VACUUM to block waiting to do this, since if it does that it will
create a lockout situation (reader/writer transactions will stack up
behind it in the lock queue while everyone waits for the existing
reader/writer transactions to finish). Better to not do the truncation.
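
As an analogy only (a pthread mutex standing in for the lock manager; the
real thing would be a new lock-manager call), the try-but-never-wait
behavior of step D amounts to:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t table_exclusive = PTHREAD_MUTEX_INITIALIZER;

    static void maybe_truncate(void)
    {
        if (pthread_mutex_trylock(&table_exclusive) == 0)
        {
            printf("got exclusive lock: truncate empty end pages\n");
            /* ... ftruncate() the relation here ... */
            pthread_mutex_unlock(&table_exclusive);
        }
        else
        {
            /* Someone is using the table.  Do NOT queue up behind them,
             * or every later reader/writer queues up behind us in turn.
             * Skip the truncation; the next VACUUM can try again.      */
            printf("lock busy: skip truncation this time\n");
        }
    }

    int main(void)
    {
        maybe_truncate();
        return 0;
    }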

Another place where lazy VACUUM may be unable to do its job completely
is in compaction of space on individual disk pages. It can physically
move tuples to perform compaction only if there are not currently any
other backends with pointers into that page (which can be tested by
looking to see if the buffer reference count is one). Again, we punt
and leave the space to be compacted next time if we can't do it right
away.
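
For concreteness, the within-page part of step C might look roughly like the
sketch below; the types and names are placeholders, not the real page-layout
code, and the real thing would of course first make the reference-count
check just described:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    /* Placeholder line pointer; not the real ItemIdData layout. */
    typedef struct
    {
        size_t offset;              /* start of tuple data within the page */
        size_t length;              /* length of tuple data, 0 if unused   */
        bool   dead;                /* marked dead by the vacuum scan      */
    } LinePointer;

    static LinePointer *cmp_base;   /* for the qsort comparator */

    static int by_offset_desc(const void *a, const void *b)
    {
        size_t oa = cmp_base[*(const int *) a].offset;
        size_t ob = cmp_base[*(const int *) b].offset;

        return (oa < ob) - (oa > ob);
    }

    /*
     * Slide live tuples toward the end of the page so all free space is
     * coalesced into one contiguous hole; dead tuples simply disappear.
     * No tuple ever leaves its page.  Returns the new start of tuple
     * data; the hole runs from the end of the line pointers to there.
     */
    size_t compact_page(char *page, size_t page_size,
                        LinePointer *lp, int nitems)
    {
        int    *order = malloc(nitems * sizeof(int));
        size_t  insert_at = page_size;  /* tuple data grows downward */
        int     i;

        for (i = 0; i < nitems; i++)
            order[i] = i;
        cmp_base = lp;
        qsort(order, nitems, sizeof(int), by_offset_desc);

        /* Walk tuples from highest offset to lowest, so a move never
         * overwrites data that has not been relocated yet. */
        for (i = 0; i < nitems; i++)
        {
            LinePointer *it = &lp[order[i]];

            if (it->dead || it->length == 0)
            {
                it->length = 0;     /* line pointer becomes reusable */
                continue;
            }
            insert_at -= it->length;
            memmove(page + insert_at, page + it->offset, it->length);
            it->offset = insert_at;
        }
        free(order);
        return insert_at;
    }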

The fact that inserted/updated tuples might wind up anywhere in the table,
not only at the end, creates no headaches except for heap_update. That
routine needs buffer locks on both the page containing the old tuple and
the page that will contain the new. To avoid possible deadlocks between
different backends locking the same two pages in opposite orders, we need
to constrain the lock ordering used by heap_update. This is doable but
will require slightly more code than is there now.
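
The ordering rule itself is simple. In miniature, with pthread mutexes
standing in for buffer content locks (this is not the actual heap_update
code, just the shape of the fix):

    #include <pthread.h>
    #include <stdio.h>

    #define NPAGES 8

    static pthread_mutex_t page_lock[NPAGES];   /* one per page buffer */

    /*
     * heap_update needs the page holding the old tuple and the page that
     * will hold the new version.  Taking them in a fixed order (ascending
     * block number) means two backends can never each hold one of the pair
     * while waiting for the other, so the two-lock deadlock cannot happen.
     */
    static void lock_two_pages(int oldblk, int newblk)
    {
        int lo = (oldblk < newblk) ? oldblk : newblk;
        int hi = (oldblk < newblk) ? newblk : oldblk;

        pthread_mutex_lock(&page_lock[lo]);
        if (hi != lo)                   /* same page: lock it only once */
            pthread_mutex_lock(&page_lock[hi]);
    }

    static void unlock_two_pages(int oldblk, int newblk)
    {
        pthread_mutex_unlock(&page_lock[oldblk]);
        if (newblk != oldblk)
            pthread_mutex_unlock(&page_lock[newblk]);
    }

    int main(void)
    {
        int i;

        for (i = 0; i < NPAGES; i++)
            pthread_mutex_init(&page_lock[i], NULL);

        lock_two_pages(5, 2);           /* takes page 2 before page 5 */
        printf("both pages locked; the update itself goes here\n");
        unlock_two_pages(5, 2);
        return 0;
    }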


Index access method improvements
--------------------------------

Presently, VACUUM deletes index tuples by doing a standard index scan
and checking each returned index tuple to see if it points at any of
the tuples to be deleted. If so, the index AM is called back to delete
the tested index tuple. This is horribly inefficient: it means one trip
into the index AM (with associated buffer lock/unlock and search overhead)
for each tuple in the index, plus another such trip for each tuple actually
deleted.

This is mainly a problem of a poorly chosen API. The index AMs should
offer a "bulk delete" call, which is passed a sorted array of main-table
TIDs. The loop over the index tuples should happen internally to the
index AM. At least in the case of btree, this could be done by a
sequential scan over the index pages, which avoids the random I/O of an
index-order scan and so should offer additional speedup.
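
In miniature, with plain arrays and made-up names (the real version would
live inside each index AM and would actually remove the entries it finds),
the bulk-delete idea is just:

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified tuple identifier: (block, line number). */
    typedef struct
    {
        unsigned       block;
        unsigned short item;
    } TupleId;

    static int tid_cmp(const void *a, const void *b)
    {
        const TupleId *x = a, *y = b;

        if (x->block != y->block)
            return (x->block > y->block) - (x->block < y->block);
        return (x->item > y->item) - (x->item < y->item);
    }

    /*
     * One sequential pass over the index's heap pointers, testing each one
     * against the sorted list of dead TIDs: O(log ndead) per index entry,
     * no key comparisons, no separate descent of the tree per deletion.
     */
    static int bulk_delete(const TupleId *index_heap_ptrs, int nindex,
                           const TupleId *dead, int ndead)
    {
        int i, deleted = 0;

        for (i = 0; i < nindex; i++)
            if (bsearch(&index_heap_ptrs[i], dead, ndead,
                        sizeof(TupleId), tid_cmp) != NULL)
                deleted++;          /* real code would remove the entry */
        return deleted;
    }

    int main(void)
    {
        TupleId entries[] = { {1, 3}, {1, 7}, {2, 1}, {4, 5}, {4, 9} };
        TupleId dead[]    = { {1, 7}, {4, 5} };     /* must stay sorted */

        printf("%d index entries would go\n",
               bulk_delete(entries, 5, dead, 2));
        return 0;
    }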

Further out (possibly not for 7.2), we should also look at making the
index AMs responsible for shrinking indexes during deletion, or perhaps
via a separate "vacuum index" API. This can be done without exclusive
locks on the index --- the original Lehman & Yao concurrent-btrees paper
didn't describe how, but more recent papers show how to do it. As with
the main tables, I think it's sufficient to recycle freed space within
the index, and not necessarily try to give it back to the OS.

We will also want to look at upgrading the non-btree index types to allow
concurrent operations. This may be a research problem; I don't expect to
touch that issue for 7.2. (Hence, lazy VACUUM on tables with non-btree
indexes will still create lockouts until this is addressed. But note that
the lockout only lasts through step B of the VACUUM, not the whole thing.)


There you have it. If people like this, I'm prepared to commit to
making it happen for 7.2. Comments, objections, better ideas?

regards, tom lane


Doug McNaught

May 17, 2001, 8:07:24 PM
Tom Lane <t...@sss.pgh.pa.us> writes:

[...]

> There you have it. If people like this, I'm prepared to commit to
> making it happen for 7.2. Comments, objections, better ideas?

I'm just commenting from the peanut gallery, but it looks really
well-thought-out to me. I like the general "fast, good enough, not
necessarily perfect" approach you've taken.

I'd be happy to help test and debug once things get going.

-Doug
--
The rain man gave me two cures; he said jump right in,
The first was Texas medicine--the second was just railroad gin,
And like a fool I mixed them, and it strangled up my mind,
Now people just get uglier, and I got no sense of time... --Dylan


Mike Mascari

May 17, 2001, 8:57:42 PM
Very neat. You mention that the truncation of both heap and index
relations is not necessarily mandatory. Under what conditions would
either of them be truncated?

Mike Mascari
mas...@mascari.com

-----Original Message-----
From: Tom Lane [SMTP:t...@sss.pgh.pa.us]

....

3. Lazy VACUUM processes a table in five stages:
A. Scan relation looking for dead tuples; accumulate a list of their
TIDs, as well as info about existing free space. (This pass is
completely read-only and so incurs no WAL traffic.)
B. Remove index entries for the dead tuples. (See below for details.)
C. Physically delete dead tuples and compact free space on their pages.
D. Truncate any completely-empty pages at relation's end. (Optional,
see below.)
E. Create/update FSM entry for the table.

...

Further out (possibly not for 7.2), we should also look at making the
index AMs responsible for shrinking indexes during deletion, or perhaps
via a separate "vacuum index" API. This can be done without exclusive
locks on the index --- the original Lehman & Yao concurrent-btrees paper
didn't describe how, but more recent papers show how to do it. As with
the main tables, I think it's sufficient to recycle freed space within
the index, and not necessarily try to give it back to the OS.

...

mlw

May 17, 2001, 9:34:26 PM
Tom Lane wrote:
>
I love it all.
I agree that vacuum should be an optional function that really packs tables.

I also like the idea of a vacuum that runs in the background and does not
affect operation too badly.

My only suggestion would be to store some information in the statistics about
whether or not, and how badly, a table needs to be vacuumed. In a scheduled
background environment, the tables that need it most should get it most often.
Oftentimes, many tables never need to be vacuumed.

Also, it would be good to be able to update the statistics without doing a
vacuum, i.e. rather than having to vacuum to analyze, being able to analyze
without a vacuum.

Bruce Momjian

May 17, 2001, 10:33:29 PM
> Free space map details
> ----------------------
>
> I envision the FSM as a shared hash table keyed by table ID, with each
> entry containing a list of page numbers and free space in each such page.
>
> The FSM is empty at system startup and is filled by lazy VACUUM as it
> processes each table. Backends then decrement/remove page entries as they
> use free space.
>
> Critical point: the FSM is only a hint and does not have to be perfectly
> accurate. It can omit space that's actually available without harm, and
> if it claims there's more space available on a page than there actually
> is, we haven't lost much except a wasted ReadBuffer cycle. This allows
> us to take shortcuts in maintaining it. In particular, we can constrain
> the FSM to a prespecified size, which is critical for keeping it in shared
> memory. We just discard entries (pages or whole relations) as necessary
> to keep it under budget. Obviously, we'd not bother to make entries in
> the first place for pages with only a little free space. Relation entries
> might be discarded on a least-recently-used basis.

The only question I have is about the Free Space Map. It would seem
better to me if we could get this map closer to the table itself, rather
than having every table of every database mixed into the same shared
memory area. I can just see random table access clearing out most of
the map cache and perhaps making it less useful.

It would be nice if we could store the map on the first page of the disk
table, or store it in a flat file per table. I know both of these ideas
will not work, but I am just throwing it out to see if someone has a
better idea.

I wonder if cache failures should be what drives the vacuum daemon to
vacuum a table? Sort of like, "Hey, someone is asking for free pages
for that table. Let's go find some!" That may work really well.
Another advantage of centralization is that we can record update/delete
counters per table, helping tell vacuum where to vacuum next. Vacuum
roaming around looking for old tuples seems wasteful.

Also, I suppose if we have the map act as a shared table cache (fseek
info), it may outweigh the disadvantage of having it all centralized.

I know I am throwing out the advantages and disadvantages of
centralization, but I thought I would give out the ideas.

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026

August Zajonc

May 17, 2001, 11:31:54 PM
Heck ya...

> I wonder if cache failures should be what drives the vacuum daemon to
> vacuum a table? Sort of like, "Hey, someone is asking for free pages
> for that table. Let's go find some!" That may work really well.
> Another advantage of centralization is that we can record update/delete
> counters per table, helping tell vacuum where to vacuum next. Vacuum
> roaming around looking for old tuples seems wasteful.

Counters seem like a nice addition. For example, access patterns to session
tables are almost pure UPDATE/DELETEs and a ton of them. On the other hand,
log-type tables see no UPDATE/DELETE but tend to be huge in comparison. I
suspect many applications outside ours will show large disparities in the
"Vacuumability" score for different tables.

Quick question:
Using lazy vacuum, if I have two identical (at the file level) copies of a
database, run the same queries against them for a few days, then shut them
down again, are the copies still identical? Is this different than the
current behavior (ie, queries, full vacuum)?


AZ


Matthew T. O'Connor

May 18, 2001, 12:35:17 AM
> Also, it would be good to be able to update the statistics without doing a
> vacuum, i.e. rather than having to vacuum to analyze, being able to analyze
> without a vacuum.
>
I was going to ask the same thing. In a lot of situations (insert
dominated) vacuum analyze is more important for the update of statistics
than for recovery of space. Could we roll that into this? Or should we
also have an Analyze daemon?



Matthew T. O'Connor

May 18, 2001, 12:43:12 AM
> Free space map details
> ----------------------
>
> Accesses to the FSM could create contention problems if we're not careful.

Another quick thought for handling FSM contention problems. A backend could
give up waiting for access to the FSM after a short period of time, and just
append its data to the end of the file the same way it's done now. Dunno
if that is feasible, but it seemed like an idea to me.

Other than that, I would just like to say this will be a great improvement
for pgsql. Tom, you and several others on this list continue to impress the
hell out of me.

Tom Lane

May 18, 2001, 12:55:34 AM
mlw <ma...@mohawksoft.com> writes:
> My only suggestion would be to store some information in the statistics about
> whether or not, and how bad, a table needs to be vacuumed.

I was toying with the notion of using the FSM to derive that info,
somewhat indirectly to be sure (since what the FSM could tell you would
be about tuples inserted not tuples deleted). Heavily used FSM entries
would be the vacuum daemon's cues for tables to hit more often.

ANALYZE stats don't seem like a productive way to attack this, since
there's no guarantee they'd be updated often enough. If the overall
data distribution of a table isn't changing much, there's no need to
analyze it often.

> Also, it would be good to be able to update the statistics without doing a
> vacuum, i.e. rather than having to vacuum to analyze, being able to analyze
> without a vacuum.

Irrelevant, not to mention already done ...

regards, tom lane

Tim Allen

May 18, 2001, 1:01:14 AM
On Thu, 17 May 2001, Tom Lane wrote:

> I have been thinking about the problem of VACUUM and how we might fix it
> for 7.2. Vadim has suggested that we should attack this by implementing
> an overwriting storage manager and transaction UNDO, but I'm not totally
> comfortable with that approach: it seems to me that it's an awfully large
> change in the way Postgres works. Instead, here is a sketch of an attack
> that I think fits better into the existing system structure.

<snip>

>

My AUD0.02, FWIW, is this sounds great. You said you were only planning to
concentrate on performance enhancements, not new features, Tom, but IMHO
this is a new feature and a good one :).

As several others have mentioned, automatic analyze would also be nice. I
gather the backend already has the ability to treat analyze as a separate
process, so presumably this is a completely separate issue from automatic
vacuum. Some sort of background daemon or whatever would be good. And
again, one could take the approach that it doesn't have to get it 100%
right, at least in the short term; as long as it is continually moving
in the direction of accurate statistics, that's much better than the
current situation. Presumably one could also retain
the option of doing an explicit analyze occasionally, if you have
processor cycles to burn and are really keen to get the stats correct in
a hurry.

Tim

--
-----------------------------------------------
Tim Allen t...@proximity.com.au
Proximity Pty Ltd http://www.proximity.com.au/
http://www4.tpg.com.au/users/rita_tim/



Tom Lane

May 18, 2001, 1:05:59 AM
Mike Mascari <mas...@mascari.com> writes:
> Very neat. You mention that the truncation of both heap and index
> relations is not necessarily mandatory. Under what conditions would
> either of them be truncated?

In the proposal as given, a heap file would be truncated if (a) it
has at least one totally empty block at the end, and (b) no other
transaction is touching the table at the instant that VACUUM is
ready to truncate it.

This would probably be fairly infrequently true, especially for
heavily used tables, but if you believe in a "steady state" analysis
then that's just fine. No point in handing blocks back to the OS
only to have to allocate them again soon.

We might want to try to tilt the FSM-driven reuse of freed space
to favor space near the start of the file and avoid end blocks.
Without that, you might never get totally free blocks at the end.

The same comments hold for index blocks, with the additional problem
that the index structure would make it almost impossible to drive usage
away from the physical end-of-file. For btrees I think it'd be
sufficient if we could recycle empty blocks for use elsewhere in the
btree structure. Actually shrinking the index probably won't happen
short of a REINDEX.

regards, tom lane


Hiroshi Inoue

May 18, 2001, 1:24:15 AM
Tom Lane wrote:
>
> I have been thinking about the problem of VACUUM and how we might fix it
> for 7.2. Vadim has suggested that we should attack this by implementing
> an overwriting storage manager and transaction UNDO, but I'm not totally
> comfortable with that approach:

IIRC, Vadim doesn't intend to implement an overwriting
smgr at least in the near future, though he seems to
have a plan to implement functionality to allow space
re-use without vacuum, as marked in TODO. IMHO, UNDO
functionality under a non-overwriting smgr (ISTM much
easier than under an overwriting one) has the highest
priority. Savepoints were planned for 7.0 but we don't
have them yet.

[snip]

>
> Locking issues
> --------------
>
> We will need two extensions to the lock manager:
>
> 1. A new lock type that allows concurrent reads and writes
> (AccessShareLock, RowShareLock, RowExclusiveLock) but not
> anything else.

What's different from RowExclusiveLock?
Does it conflict with itself?

>
> The conditional lock will be used by lazy VACUUM to try to upgrade its
> VacuumLock to an AccessExclusiveLock at step D (truncate table). If it's
> able to get exclusive lock, it's safe to truncate any unused end pages.
> Without exclusive lock, it's not, since there might be concurrent
> transactions scanning or inserting into the empty pages. We do not want
> lazy VACUUM to block waiting to do this, since if it does that it will
> create a lockout situation (reader/writer transactions will stack up
> behind it in the lock queue while everyone waits for the existing
> reader/writer transactions to finish). Better to not do the truncation.
>

And would the truncation occur that often in reality under
the scheme (without tuple movement)?

[snip]

>
> Index access method improvements
> --------------------------------
>
> Presently, VACUUM deletes index tuples by doing a standard index scan
> and checking each returned index tuple to see if it points at any of
> the tuples to be deleted. If so, the index AM is called back to delete
> the tested index tuple. This is horribly inefficient: it means one trip
> into the index AM (with associated buffer lock/unlock and search overhead)
> for each tuple in the index, plus another such trip for each tuple actually
> deleted.
>
> This is mainly a problem of a poorly chosen API. The index AMs should
> offer a "bulk delete" call, which is passed a sorted array of main-table
> TIDs. The loop over the index tuples should happen internally to the
> index AM. At least in the case of btree, this could be done by a
> sequential scan over the index pages, which avoids the random I/O of an
> index-order scan and so should offer additional speedup.
>

???? Isn't the current implementation "bulk delete"?
Fast access to individual index tuples also seems to be
needed in the case of only a few dead tuples.

> Further out (possibly not for 7.2), we should also look at making the
> index AMs responsible for shrinking indexes during deletion, or perhaps
> via a separate "vacuum index" API. This can be done without exclusive
> locks on the index --- the original Lehman & Yao concurrent-btrees paper
> didn't describe how, but more recent papers show how to do it. As with
> the main tables, I think it's sufficient to recycle freed space within
> the index, and not necessarily try to give it back to the OS.
>

Great. There would be few disadvantages in our btree
implementation.

regards,
Hiroshi Inoue

Tom Lane

May 18, 2001, 1:32:11 AM
"Matthew T. O'Connor" <mat...@zeut.net> writes:
> Another quick thought for handling FSM contention problems. A backend could
> give up waiting for access to the FSM after a short period of time, and just
> append its data to the end of the file the same way it's done now. Dunno
> if that is feasible, but it seemed like an idea to me.

Mmm ... maybe, but I doubt it'd help much. Appending a page to the file
requires grabbing some kind of lock anyway (since you can't have two
backends doing it at the same instant). With any luck, that locking can
be merged with the locking involved in accessing the FSM.

regards, tom lane

Tom Lane

May 18, 2001, 1:44:46 AM
Bruce Momjian <pg...@candle.pha.pa.us> writes:
> The only question I have is about the Free Space Map. It would seem
> better to me if we could get this map closer to the table itself, rather
> than having every table of every database mixed into the same shared
> memory area. I can just see random table access clearing out most of
> the map cache and perhaps making it less useful.

What random access? Read transactions will never touch the FSM at all.
As for writes, seems to me the places you are writing are exactly the
places you need info for.

You make a good point, which is that we don't want a schedule-driven
VACUUM to load FSM entries for unused tables into the map at the cost
of throwing out entries that *are* being used. But it seems to me that
that's easily dealt with if we recognize the risk.

> It would be nice if we could store the map on the first page of the disk
> table, or store it in a flat file per table. I know both of these ideas
> will not work,

You said it. What's wrong with shared memory? You can't get any closer
than shared memory: keeping maps in the files would mean you'd need to
chew up shared-buffer space to get at them. (And what was that about
random accesses causing your maps to get dropped? That would happen
for sure if they live in shared buffers.)

Another problem with keeping stuff in the first page: what happens when
the table gets big enough that 8k of map data isn't really enough?
With a shared-memory area, we can fairly easily allocate a variable
amount of space based on total size of a relation vs. total size of
relations under management.

It is true that a shared-memory map would be useless at system startup,
until VACUUM has run and filled in some info. But I don't see that as
a big drawback. People who aren't developers like us don't restart
their postmasters every five minutes.

> Another advantage of centralization is that we can record update/delete
> counters per table, helping tell vacuum where to vacuum next. Vacuum
> roaming around looking for old tuples seems wasteful.

Indeed. But I thought you were arguing against centralization?

Philip Warner

May 18, 2001, 1:56:08 AM
At 19:05 17/05/01 -0400, Tom Lane wrote:
>
>But having said that, there's no reason to remove the existing VACUUM
>code: we can keep it around for situations where you need to crunch a
>table as much as possible and you can afford to lock the table while
>you do it.

It would be great if this were the *only* reason to use the old-style
VACUUM, i.e. we should try to avoid a solution that has a VACUUM LAZY in the
background and a recommendation to 'VACUUM PROPERLY' once in a while.


>The new code would be a new command, maybe "VACUUM LAZY"
>(or some other name entirely).

Maybe a name that reflects its strength/purpose: 'VACUUM
ONLINE/BACKGROUND/NOLOCKS/CONCURRENT' etc.


>Enough handwaving, what about specifics?
>
>1. Forget moving tuples from one page to another. Doing that in a
>transaction-safe way is hugely expensive and complicated. Lazy VACUUM
>will only delete dead tuples and coalesce the free space thus made
>available within each page of a relation.

Could this be done opportunistically, meaning it builds up a list of
candidates to move (perhaps based on emptiness of page), then moves a
subset of these in each pass? It's only really useful in the case of a
table that has a high update load then becomes static. Which is not as
unusual as it sounds: people do archive tables by renaming them, then
create a new lean 'current' table. With the new vacuum, the static table
may end up with many half-empty pages that are never reused.


>2. This does no good unless there's a provision to re-use that free space.
>To do that, I propose a free space map (FSM) kept in shared memory, which
>will tell backends which pages of a relation have free space. Only if the
>FSM shows no free space available will the relation be extended to insert
>a new or updated tuple.

I assume that now is not a good time to bring up memory-mapped files? ;-}


>3. Lazy VACUUM processes a table in five stages:
> A. Scan relation looking for dead tuples; accumulate a list of their
> TIDs, as well as info about existing free space. (This pass is
> completely read-only and so incurs no WAL traffic.)

Were you planning on just a free byte count, or something smaller? Dec/RDB
uses a nasty system of DBA-defined thresholds for each storage area: 4 bits,
where 0=empty, and 1, 2 & 3 indicate above/below thresholds (3 is also
considered 'full'). The thresholds are usually set based on average record
sizes. In this day & age, I suspect a 1-byte percentage or 2-byte count is
OK unless space is really at a premium.


>5. Also note that there's nothing saying that lazy VACUUM must do the
>entire table in one go; once it's accumulated a big enough batch of dead
>tuples, it can proceed through steps B,C,D,E even though it's not scanned
>the whole table. This avoids a rather nasty problem that VACUUM has
>always had with running out of memory on huge tables.

This sounds great, especially if the same approach could be adopted when/if
moving records.


>Critical point: the FSM is only a hint and does not have to be perfectly
>accurate. It can omit space that's actually available without harm, and
>if it claims there's more space available on a page than there actually
>is, we haven't lost much except a wasted ReadBuffer cycle.

So long as you store the # of bytes (or %), that should be fine. One of the
horrors of the Dec/RDB system is that with badly set thresholds you can
cycle through many pages looking for one that *really* has enough free space.

Also, would the detecting process fix the bad entry?


> This allows
>us to take shortcuts in maintaining it. In particular, we can constrain
>the FSM to a prespecified size, which is critical for keeping it in shared
>memory. We just discard entries (pages or whole relations) as necessary
>to keep it under budget.

Presumably keeping the 'most empty' pages?


>Obviously, we'd not bother to make entries in
>the first place for pages with only a little free space. Relation entries
>might be discarded on a least-recently-used basis.

You also might want to record some 'average/min/max' record size for the
table to assess when a page's free space is insufficient for the
average/minimum record size.

While on the subject of record keeping, it would be great if it were coded
to collect statistics about its own operation for Jan's stats package.


>Accesses to the FSM could create contention problems if we're not careful.

>I think this can be dealt with by having each backend remember (in its
>relcache entry for a table) the page number of the last page it chose from
>the FSM to insert into. That backend will keep inserting new tuples into
>that same page, without touching the FSM, as long as there's room there.
>Only then does it go back to the FSM, update or remove that page entry,
>and choose another page to start inserting on. This reduces the access
>load on the FSM from once per tuple to once per page.

This seems to have the potential to create as many false FSM page entries
as there are backends. Is it really that expensive to lock the FSM table
entry, subtract a number, then unlock it, especially when compared to the
cost of adding/updating a whole record?


>(Moreover, we can
>arrange that successive backends consulting the FSM pick different pages
>if possible. Then, concurrent inserts will tend to go to different pages,
>reducing contention for shared buffers; yet any single backend does
>sequential inserts in one page, so that a bulk load doesn't cause
>disk traffic scattered all over the table.)

This will also increase the natural clustering of the database for SERIAL
fields even though the overall ordering will be all over the place (at
least for insert-intensive apps).


>
>Locking issues
>--------------
>
>We will need two extensions to the lock manager:
>
>1. A new lock type that allows concurrent reads and writes
>(AccessShareLock, RowShareLock, RowExclusiveLock) but not anything else.

>Lazy VACUUM will grab this type of table lock to ensure the table schema
>doesn't change under it. Call it a VacuumLock until we think of a better
>name.

Is it possible/worthwhile to add a 'blocking notification' to the lock manager?
Then VACUUM could choose to terminate/restart when someone wants to do a
schema change. This is really only relevant if the VACUUM will be prolonged.

----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/

Tom Lane

May 18, 2001, 2:11:36 AM
Hiroshi Inoue <In...@tpf.co.jp> writes:

> Tom Lane wrote:
>> 1. A new lock type that allows concurrent reads and writes
>> (AccessShareLock, RowShareLock, RowExclusiveLock) but not
>> anything else.

> What's different from RowExclusiveLock ?

I wanted something that *does* conflict with itself, thereby ensuring
that only one instance of VACUUM can be running on a table at a time.
This is not absolutely necessary, perhaps, but without it matters would
be more complex for little gain that I can see. (For example, the tuple
TIDs that lazy VACUUM gathers in step A might already be deleted and
compacted out of existence by another VACUUM, and then reused as new
tuples, before the first VACUUM gets back to the page to delete them.
There would need to be a defense against that if we allow concurrent
VACUUMs.)

> And would the truncation occur that often in reality under
> the scheme(without tuple movement) ?

Probably not, per my comments to someone else. I'm not very concerned
about that, as long as we are able to recycle freed space within the
relation.

We could in fact move tuples if we wanted to --- it's not fundamentally
different from an UPDATE --- but then VACUUM becomes a transaction and
we have the WAL-log-traffic problem back again. I'd like to try it
without that and see if it gets the job done.

> ???? Isn't current implementation "bulk delete" ?

No, the index AM is called separately for each index tuple to be
deleted; more to the point, the search for deletable index tuples
should be moved inside the index AM for performance reasons.

> Fast access to individual index tuples seems to be
> also needed in case a few dead tuples.

Yes, that is the approach Vadim used in the LAZY VACUUM code he did
last summer for Alfred Perlstein. I had a couple of reasons for not
duplicating that method:

1. Searching for individual index tuples requires hanging onto (or
refetching) the index key data for each target tuple. A bulk delete
based only on tuple TIDs is simpler and requires little memory.

2. The main reason for that form of lazy VACUUM is to minimize the time
spent holding the exclusive lock on a table. With this approach we
don't need to worry so much about that. A background task trawling
through an index won't really bother anyone, since it won't lock out
updates.

3. If we're concerned about the speed of lazy VACUUM when there are few
tuples to be recycled, there's a really easy answer: don't recycle 'em.
Leave 'em be until there are enough to make it worth going after 'em.
Remember this is a "fast and good enough" approach, not a "get every
possible byte every time" approach.

Using a key-driven delete when we have only a few deletable tuples might
be a useful improvement later on, but I don't think it's necessary to
bother with it in the first go-round. This is a big project already...

regards, tom lane


Hannu Krosing

May 18, 2001, 2:56:34 AM
Tom Lane wrote:

>
> mlw <ma...@mohawksoft.com> writes:

> > Also, it would be good to be able to update the statistics without doing a
> > vacuum, i.e. rather than having to vacuum to analyze, being able to analyze
> > without a vacuum.
>
> Irrelevant, not to mention already done ...

Do you mean that we can already do just analyze?

What is the syntax?

----------------
Hannu


Tom Lane

May 18, 2001, 3:01:58 AM
Philip Warner <p...@rhyme.com.au> writes:
> At 19:05 17/05/01 -0400, Tom Lane wrote:
>> 1. Forget moving tuples from one page to another.

> Could this be done opportunistically, meaning it builds up a list of
> candidates to move (perhaps based on emptiness of page), then moves a
> subset of these in each pass?

Well, if we move tuples at all then we have a considerably different
animal: to move tuples across pages you must be a transaction so that
you can have an atomic commit for both pages, and that brings back the
issue of how long the transaction runs for and how large its WAL trail
will grow before it can be dropped. Yeah, you could move a limited
number of tuples, commit, and start again ... but it's not so
lightweight anymore.

Perhaps we will eventually end up with three strengths of VACUUM:
the existing heavy-duty form, the "lazy" form that isn't transactional,
and an intermediate form that is willing to move tuples in simpler
cases (none of that tuple-chain-moving stuff please ;-)). But I'm
not buying into producing the intermediate form in this go-round.
Let's build the lazy form first and get some experience with it
before we decide if we need yet another kind of VACUUM.

>> To do that, I propose a free space map (FSM) kept in shared memory, which
>> will tell backends which pages of a relation have free space. Only if the
>> FSM shows no free space available will the relation be extended to insert
>> a new or updated tuple.

> I assume that now is not a good time to bring up memory-mapped files? ;-}

Don't see the relevance exactly ...

> Were you planning on just a free byte count, or something smaller? Dec/RDB
> uses a nasty system of DBA-defined thresholds for each storage area: 4 bits,
> where 0=empty, and 1, 2 & 3 indicate above/below thresholds (3 is also
> considered 'full'). The thresholds are usually set based on average record
> sizes. In this day & age, I suspect a 1-byte percentage or 2-byte count is
> OK unless space is really at a premium.

I had toyed with two different representations of the FSM:

1. Bitmap: one bit per page in the relation, set if there's an
"interesting" amount of free space in the page (exact threshold ???).
DEC's approach seems to be a generalization of this.

2. Page list: page number and number of free bytes. This is six bytes
per page represented; you could maybe compress it to 5 but I'm not sure
there's much point.

I went with #2 mainly because it adapts easily to being forced into a
limited amount of space (just forget the pages with least free space)
which is critical for a table to be kept in shared memory. A bitmap
would be less forgiving. #2 also gives you a better chance of going
to a page that actually has enough free space for your tuple, though
you'd still need to be prepared to find out that it no longer has
enough once you get to it. (Whereupon, you go back to the FSM, fix
the outdated entry, and keep searching.)

> While on the subject of record keeping, it would be great if it were coded
> to collect statistics about its own operation for Jan's stats package.

Seems like a good idea, but I've seen no details yet about that package...

> This seems to have the potential to create as many false FSM page entries
> as there are backends. Is it really that expensive to lock the FSM table
> entry, subtract a number, then unlock it?

Yes, if you are contending against N other backends to get that lock.
Remember the *whole* point of this design is to avoid locking as much
as possible. Too many trips to the FSM could throw away the performance
advantage.

> Is it possible/worth adding a 'blocking notification' to the lock manager.
> Then VACUUM could choose to terminate/restart when someone wants to do a
> schema change. This is really only relevant if the VACUUM will be prolonged.

Seems messier than it's worth ... the VACUUM might not be the only thing
holding off your schema update anyway, and regular transactions aren't
likely to pay any attention.

regards, tom lane


Hannu Krosing

May 18, 2001, 3:17:57 AM
Tom Lane wrote:
>
>>
> Enough handwaving, what about specifics?
>
> 1. Forget moving tuples from one page to another. Doing that in a
> transaction-safe way is hugely expensive and complicated. Lazy VACUUM
> will only delete dead tuples and coalesce the free space thus made
> available within each page of a relation.

If we really need to move a tuple, we can do it by a regular update that
SETs nothing and just copies the tuple to another page inside a separate
transaction.

-------------------
Hannu


Hiroshi Inoue

May 18, 2001, 3:28:51 AM
Tom Lane wrote:
>
> Hiroshi Inoue <In...@tpf.co.jp> writes:
> > Tom Lane wrote:
>
> > And would the truncation occur that often in reality under
> > the scheme(without tuple movement) ?
>
> Probably not, per my comments to someone else. I'm not very concerned
> about that, as long as we are able to recycle freed space within the
> relation.
>

Agreed.

> We could in fact move tuples if we wanted to --- it's not fundamentally
> different from an UPDATE --- but then VACUUM becomes a transaction and
> we have the WAL-log-traffic problem back again.

And it has always been the cause of bugs and inefficiency
in VACUUM, IMHO.

regards,
Hiroshi Inoue

Tom Lane

May 18, 2001, 4:04:00 AM
Hannu Krosing <ha...@tm.ee> writes:
>> Irrelevant, not to mention already done ...

> Do you mean that we already can do just analyze ?

In development sources, yes. See

http://www.ca.postgresql.org/devel-corner/docs/postgres/sql-analyze.html

Zeugswetter Andreas SB

May 18, 2001, 6:22:50 AM

> > ???? Isn't current implementation "bulk delete" ?
>
> No, the index AM is called separately for each index tuple to be
> deleted; more to the point, the search for deletable index tuples
> should be moved inside the index AM for performance reasons.

Wouldn't a sequential scan on the heap table be the fastest way to find
keys that need to be deleted?

    foreach tuple in heap that can be deleted do:
        foreach index
            call the current "index delete" with constructed key and xtid

The advantage would be that the current API would be sufficient and it
should be faster. The problem would be to create a correct key from the
heap tuple that you can pass to the index delete function.

Andreas

Tom Lane

May 18, 2001, 9:53:29 AM
Zeugswetter Andreas SB <Zeugsw...@wien.spardat.at> writes:
> foreach tuple in heap that can be deleted do:
> foreach index
> call the current "index delete" with constructed key and xtid

See discussion with Hiroshi. This is much more complex than TID-based
delete and would be faster only for small numbers of tuples. (Very
small numbers of tuples, is my gut feeling, though there's no way to
prove that without implementations of both in hand.)

A particular point worth making is that in the common case where you've
updated the same row N times (without changing its index key), the above
approach has O(N^2) runtime. The indexscan will find all N index tuples
matching the key ... only one of which is the one you are looking for on
this iteration of the outer loop.

regards, tom lane


Zeugswetter Andreas SB

May 18, 2001, 10:01:57 AM

> A particular point worth making is that in the common case where you've
> updated the same row N times (without changing its index key), the above
> approach has O(N^2) runtime. The indexscan will find all N index tuples
> matching the key ... only one of which is the one you are looking for on
> this iteration of the outer loop.

It was my understanding that the heap xtid is part of the key now, and thus
with a somewhat modified access it would find the one exact row directly.
And in the above case, the keys (identical except for xtid) will stick close
together, so caching will be good.

Andreas


Tom Lane

May 18, 2001, 11:59:19 AM
Zeugswetter Andreas SB <Zeugsw...@wien.spardat.at> writes:
> It was my understanding, that the heap xtid is part of the key now,

It is not.

There was some discussion of doing that, but it fell down on the little
problem that in normal index-search cases you *don't* know the heap tid
you are looking for.

> And in above case, the keys (since identical except xtid) will stick close
> together, thus caching will be good.

Even without key-collision problems, deleting N tuples out of a total of
M index entries will require search costs like this:

bulk delete in linear scan way:

    O(M)       I/O costs (read all the pages)
    O(M log N) CPU costs (lookup each TID in sorted list)

successive index probe way:

    O(N log M) I/O costs for probing index
    O(N log M) CPU costs for probing index (key comparisons)

For N << M, the latter looks like a win, but you have to keep in mind
that the constant factors hidden by the O() notation are a lot different
in the two cases. In particular, if there are T index entries per page,
the former I/O cost is really M/T * sequential read cost whereas the
latter is N log M * random read cost, yielding a difference in constant
factors of probably a thousand or two. You get some benefit in the
latter case from caching the upper btree levels, but that's by
definition not a large part of the index bulk. So where's the breakeven
point in reality? I don't know but I suspect that it's at pretty small
N. Certainly far less than one percent of the table, whereas I would
think that people would try to schedule VACUUMs at an interval where
they'd be reclaiming several percent of the table.
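
Plugging purely illustrative numbers into those formulas (nothing below is
measured; it is just the arithmetic):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double M       = 10e6;      /* index entries                      */
        double T       = 200.0;     /* index entries per page             */
        double seq_ms  = 0.1;       /* sequential page read, amortized    */
        double rand_ms = 10.0;      /* random page read (seek + rotation) */

        /* Bulk delete: read every index page once, sequentially. */
        double bulk_cost = (M / T) * seq_ms;

        /* Probe way: ~log2(M) page touches per dead tuple, mostly random
         * (caching the upper levels would shave some of this off).
         * Solve N * log2(M) * rand_ms = bulk_cost for the breakeven N.  */
        double breakeven = bulk_cost / (log2(M) * rand_ms);

        printf("bulk scan   : %.0f ms\n", bulk_cost);
        printf("breakeven N : about %.0f dead tuples (%g%% of the index)\n",
               breakeven, 100.0 * breakeven / M);
        return 0;
    }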

So, as I said to Hiroshi, this alternative looks to me like a possible
future refinement, not something we need to do in the first version.

regards, tom lane


Zeugswetter Andreas SB

May 18, 2001, 1:37:43 PM

> There was some discussion of doing that, but it fell down on the little
> problem that in normal index-search cases you *don't* know the heap tid
> you are looking for.

I cannot follow here. It does not matter if you don't know a trailing
part of the key when doing a btree search; it only helps to directly find the
leaf page.

>
> > And in above case, the keys (since identical except xtid) will stick close
> > together, thus caching will be good.
>
> Even without key-collision problems, deleting N tuples out of a total of
> M index entries will require search costs like this:
>
> bulk delete in linear scan way:
>
> O(M) I/O costs (read all the pages)
> O(M log N) CPU costs (lookup each TID in sorted list)
>
> successive index probe way:
>
> O(N log M) I/O costs for probing index
> O(N log M) CPU costs for probing index (key comparisons)

both O(N log (levels in index))

> So, as I said to Hiroshi, this alternative looks to me like a possible
> future refinement, not something we need to do in the first version.

Yes, I thought it might be easier to implement than the index scan :-)

Andreas


Oleg Bartunov

May 18, 2001, 2:09:42 PM
On Thu, 17 May 2001, Tom Lane wrote:

>
> We will also want to look at upgrading the non-btree index types to allow
> concurrent operations. This may be a research problem; I don't expect to
> touch that issue for 7.2. (Hence, lazy VACUUM on tables with non-btree
> indexes will still create lockouts until this is addressed. But note that
> the lockout only lasts through step B of the VACUUM, not the whole thing.)

Am I right that you plan to work with GiST indexes as well?
We read a paper "Concurrency and Recovery in Generalized Search Trees"
by Marcel Kornacker, C. Mohan, Joseph Hellerstein
(http://citeseer.nj.nec.com/kornacker97concurrency.html)
and probably we could go in this direction. Right now we're working
on adding multi-key support to GiST.

BTW, I have a question about the function gistPageAddItem in gist.c:
it just decompresses and recompresses the key and calls PageAddItem to
write the tuple. We don't understand why we need this function -
why not use the PageAddItem function? Adding multi-key support requires
a lot of work and we don't want to waste our efforts and time.
We have already done some tests (gistPageAddItem -> PageAddItem) and
everything is OK. Bruce, you're enthusiastic about removing unused code :-)

>
> regards, tom lane



Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

Tom Lane

May 18, 2001, 2:12:33 PM
Oleg Bartunov <ol...@sai.msu.su> writes:
> On Thu, 17 May 2001, Tom Lane wrote:
>> We will also want to look at upgrading the non-btree index types to allow
>> concurrent operations.

> am I right you plan to work with GiST indexes as well ?
> We read a paper "Concurrency and Recovery in Generalized Search Trees"
> by Marcel Kornacker, C. Mohan, Joseph Hellerstein
> (http://citeseer.nj.nec.com/kornacker97concurrency.html)
> and probably we could go in this direction. Right now we're working
> on adding of multi-key support to GiST.

Yes, GIST should be upgraded to do concurrency. But I have no objection
if you want to work on multi-key support first.

My feeling is that a few releases from now we will have btree and GIST
as the preferred/well-supported index types. Hash and rtree might go
away altogether --- AFAICS they don't do anything that's not done as
well or better by btree or GIST, so what's the point of maintaining
them?

> btw, I have a question about function gistPageAddItem in gist.c
> it just decompress - compress key and calls PageAddItem to
> write tuple. We don't understand why do we need this function -

The comment says

** Take a compressed entry, and install it on a page. Since we now know
** where the entry will live, we decompress it and recompress it using
** that knowledge (some compression routines may want to fish around
** on the page, for example, or do something special for leaf nodes.)

Are you prepared to say that you will no longer support the ability for
GIST compression routines to do those things? That seems shortsighted.

regards, tom lane


Bruce Momjian

May 18, 2001, 2:39:11 PM
> Oleg Bartunov <ol...@sai.msu.su> writes:
> > On Thu, 17 May 2001, Tom Lane wrote:
> >> We will also want to look at upgrading the non-btree index types to allow
> >> concurrent operations.
>
> > am I right you plan to work with GiST indexes as well ?
> > We read a paper "Concurrency and Recovery in Generalized Search Trees"
> > by Marcel Kornacker, C. Mohan, Joseph Hellerstein
> > (http://citeseer.nj.nec.com/kornacker97concurrency.html)
> > and probably we could go in this direction. Right now we're working
> > on adding of multi-key support to GiST.
>
> Yes, GIST should be upgraded to do concurrency. But I have no objection
> if you want to work on multi-key support first.
>
> My feeling is that a few releases from now we will have btree and GIST
> as the preferred/well-supported index types. Hash and rtree might go
> away altogether --- AFAICS they don't do anything that's not done as
> well or better by btree or GIST, so what's the point of maintaining
> them?

We clearly have too many index types, and we are actively telling people
not to use hash. And Oleg, don't you have a better version of GIST rtree
than our native rtree?

I certainly like streamlining our stuff.

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


Oleg Bartunov

May 18, 2001, 2:50:32 PM
On Fri, 18 May 2001, Tom Lane wrote:

> Oleg Bartunov <ol...@sai.msu.su> writes:
> > On Thu, 17 May 2001, Tom Lane wrote:
> >> We will also want to look at upgrading the non-btree index types to allow
> >> concurrent operations.
>
> > am I right you plan to work with GiST indexes as well ?
> > We read a paper "Concurrency and Recovery in Generalized Search Trees"
> > by Marcel Kornacker, C. Mohan, Joseph Hellerstein
> > (http://citeseer.nj.nec.com/kornacker97concurrency.html)
> > and probably we could go in this direction. Right now we're working
> > on adding of multi-key support to GiST.

Another paper to read:
"Efficient Concurrency Control in Multidimensional Access Methods"
by Kaushik Chakrabarti
http://www.ics.uci.edu/~kaushik/research/pubs.html

>
> Yes, GIST should be upgraded to do concurrency. But I have no objection
> if you want to work on multi-key support first.
>
> My feeling is that a few releases from now we will have btree and GIST
> as the preferred/well-supported index types. Hash and rtree might go
> away altogether --- AFAICS they don't do anything that's not done as
> well or better by btree or GIST, so what's the point of maintaining
> them?

Cool! We could write rtree (and btree) ops using GiST. We already have an
rtree implementation for box ops, and there is no problem writing
additional ops for points, polygons, etc.

>
> > btw, I have a question about the function gistPageAddItem in gist.c -
> > it just decompresses and recompresses the key and calls PageAddItem to
> > write the tuple. We don't understand why we need this function -
>
> The comment says
>
> ** Take a compressed entry, and install it on a page. Since we now know
> ** where the entry will live, we decompress it and recompress it using
> ** that knowledge (some compression routines may want to fish around
> ** on the page, for example, or do something special for leaf nodes.)
>
> Are you prepared to say that you will no longer support the ability for
> GIST compression routines to do those things? That seems shortsighted.
>

No-no !!! we don't intend to lose that (compression) functionality.

There are several reasons we want to eliminate gistPageAddItem:
1. It seems there are no examples where compress uses information about
   the page.
2. There is a discrepancy between the calculation of free space on a page
   and the size of the tuple actually saved there: gistNoSpace computes
   free space using the compressed tuple, but the tuple itself is saved
   after recompression, and its size could change in the process.
3. decompress/compress could slow down inserts because it happens
   for every tuple.
4. Currently gistPageAddItem is broken because it's not TOAST-safe
   (see the call to gist_tuple_replacekey in gistPageAddItem).

Right now we use #define GIST_PAGEADDITEM in gist.c and work with the
original PageAddItem. If people insist on gistPageAddItem we'll rewrite it
completely, but for now we have enough work to do.


> regards, tom lane
>

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

Oleg Bartunov

unread,
May 18, 2001, 2:50:31 PM5/18/01
to
On Fri, 18 May 2001, Tom Lane wrote:

> Oleg Bartunov <ol...@sai.msu.su> writes:
> >> The comment says
> >>
> >> ** Take a compressed entry, and install it on a page. Since we now know
> >> ** where the entry will live, we decompress it and recompress it using
> >> ** that knowledge (some compression routines may want to fish around
> >> ** on the page, for example, or do something special for leaf nodes.)
> >>
> >> Are you prepared to say that you will no longer support the ability for
> >> GIST compression routines to do those things? That seems shortsighted.
>
> > No-no !!! we don't intend to lose that (compression) functionality.
>
> > there are several reason we want to eliminate gistPageAddItem:
> > 1. It seems there are no examples where compress uses information about
> > the page.
>

> We have none now, perhaps, but the original GIST authors seemed to think
> it would be a useful capability. I'm hesitant to rip out functionality
> that they put in --- I don't think we understand GIST better than they
> did ;-)

OK, we'll keep the code for the future. Probably we could ask the original
author - btw, who is the original author (Hellerstein, Aoki)?

>
> > 2. There is a discrepancy between the calculation of free space on a page
> >    and the size of the tuple actually saved there: gistNoSpace computes
> >    free space using the compressed tuple, but the tuple itself is saved
> >    after recompression, and its size could change in the process.
>

> Yes, that's a serious problem with the idea. We'd have to suppose that
> recompression could not increase the size of the tuple --- or else be
> prepared to back up and find another page and do it again (ugh).
>

We have to keep this in mind when we return to gistPageAddItem.

> > 3. decompress/compress could slowdown insert because it happens
> > for every tuple.
>

> Seems like there should be a flag somewhere that tells whether the
> compression function actually cares about the page context or not.
> If not, you could skip the useless processing.
>

OK, see above.

> > 4. Currently gistPageAddItem is broken because it's not toast safe
> > (see call gist_tuple_replacekey in gistPageAddItem)
>

> Not sure I see the problem. gist_tuple_replacekey is kinda ugly, but
> what's it got to do with TOAST?
>

The tuple could be formed by something other than index_formtuple, which is
the correct way to build it.


> regards, tom lane
>

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Oleg Bartunov

unread,
May 18, 2001, 2:53:38 PM5/18/01
to
On Fri, 18 May 2001, Bruce Momjian wrote:

> > Oleg Bartunov <ol...@sai.msu.su> writes:
> > > On Thu, 17 May 2001, Tom Lane wrote:
> > >> We will also want to look at upgrading the non-btree index types to allow
> > >> concurrent operations.
> >
> > > am I right you plan to work with GiST indexes as well ?
> > > We read a paper "Concurrency and Recovery in Generalized Search Trees"
> > > by Marcel Kornacker, C. Mohan, Joseph Hellerstein
> > > (http://citeseer.nj.nec.com/kornacker97concurrency.html)
> > > and probably we could go in this direction. Right now we're working
> > > on adding of multi-key support to GiST.
> >

> > Yes, GIST should be upgraded to do concurrency. But I have no objection
> > if you want to work on multi-key support first.
> >
> > My feeling is that a few releases from now we will have btree and GIST
> > as the preferred/well-supported index types. Hash and rtree might go
> > away altogether --- AFAICS they don't do anything that's not done as
> > well or better by btree or GIST, so what's the point of maintaining
> > them?
>

> We clearly have too many index types, and we are actively telling people
> not to use hash. And Oleg, don't you have a better version of GIST rtree
> than our native rtree?

We have an rtree implementation using GiST - it's incomplete, just box ops;
look at http://www.sai.msu.su/~megera/postgres/gist/
We ported old code from the pg95 days, but it's not difficult to port the
remaining parts. I'm not sure whether our version is better - we didn't test
thoroughly - but it seems the memory requirements are lower and index
construction is much faster.

>
> I certainly like streamlining our stuff.
>

Regards,


Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Tom Lane

unread,
May 18, 2001, 3:00:44 PM5/18/01
to
Oleg Bartunov <ol...@sai.msu.su> writes:
>> The comment says
>>
>> ** Take a compressed entry, and install it on a page. Since we now know
>> ** where the entry will live, we decompress it and recompress it using
>> ** that knowledge (some compression routines may want to fish around
>> ** on the page, for example, or do something special for leaf nodes.)
>>
>> Are you prepared to say that you will no longer support the ability for
>> GIST compression routines to do those things? That seems shortsighted.

> No-no !!! we don't intend to lose that (compression) functionality.

> there are several reason we want to eliminate gistPageAddItem:
> 1. It seems there are no examples where compress uses information about
> the page.

We have none now, perhaps, but the original GIST authors seemed to think
it would be a useful capability. I'm hesitant to rip out functionality
that they put in --- I don't think we understand GIST better than they
did ;-)

> 2. There is a discrepancy between the calculation of free space on a page
>    and the size of the tuple actually saved there: gistNoSpace computes
>    free space using the compressed tuple, but the tuple itself is saved
>    after recompression, and its size could change in the process.

Yes, that's a serious problem with the idea. We'd have to suppose that
recompression could not increase the size of the tuple --- or else be
prepared to back up and find another page and do it again (ugh).

> 3. decompress/compress could slowdown insert because it happens
> for every tuple.

Seems like there should be a flag somewhere that tells whether the
compression function actually cares about the page context or not.
If not, you could skip the useless processing.
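
For what it's worth, here is a minimal standalone sketch of that flag idea,
combined with the size re-check discussed above. Everything in it (the
*Sketch types, recompress_for_page, page_add_item, the flag name) is a
hypothetical illustration, not the actual GIST code:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real GIST structures (illustration only). */
typedef struct { size_t size; }       GistTupleSketch;
typedef struct { size_t free_space; } GistPageSketch;

/* Hypothetical per-opclass capability flag, as suggested above. */
typedef struct { bool page_aware_compress; } GistOpClassSketch;

/* Stub: pretend page-aware recompression may change the tuple size. */
static GistTupleSketch
recompress_for_page(GistTupleSketch tup, const GistPageSketch *pg)
{
    (void) pg;              /* a real page-aware compressor would look here */
    tup.size += 4;          /* simulate a tuple that grows on recompression */
    return tup;
}

/* Stub standing in for PageAddItem(). */
static bool
page_add_item(GistPageSketch *pg, GistTupleSketch tup)
{
    if (tup.size > pg->free_space)
        return false;
    pg->free_space -= tup.size;
    return true;
}

/*
 * Only opclasses that declare page_aware_compress pay for the
 * decompress/recompress round trip; a tuple that grows past the page's
 * free space makes the caller retry on another page.
 */
static bool
gist_place_tuple(const GistOpClassSketch *oc, GistPageSketch *pg,
                 GistTupleSketch tup)
{
    if (oc->page_aware_compress)
    {
        tup = recompress_for_page(tup, pg);
        if (tup.size > pg->free_space)
            return false;   /* pick a different page and try again */
    }
    return page_add_item(pg, tup);
}

int
main(void)
{
    GistOpClassSketch oc = { .page_aware_compress = true };
    GistPageSketch    pg = { .free_space = 100 };
    GistTupleSketch   t  = { .size = 98 };

    printf("placed: %s\n", gist_place_tuple(&oc, &pg, t) ? "yes" : "no");
    return 0;
}

The point is just that opclasses which never look at the page pay nothing
extra, and a tuple that grows after recompression is retried elsewhere.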

> 4. Currently gistPageAddItem is broken because it's not TOAST-safe
>    (see the call to gist_tuple_replacekey in gistPageAddItem).

Not sure I see the problem. gist_tuple_replacekey is kinda ugly, but
what's it got to do with TOAST?

regards, tom lane

Mikheev, Vadim

unread,
May 18, 2001, 8:13:32 PM5/18/01
to
> I have been thinking about the problem of VACUUM and how we
> might fix it for 7.2. Vadim has suggested that we should
> attack this by implementing an overwriting storage manager
> and transaction UNDO, but I'm not totally comfortable with
> that approach: it seems to me that it's an awfully large
> change in the way Postgres works.

I'm not sure we should implement an overwriting smgr at all.
I was (and still am) going to solve the space-reuse problem with the
non-overwriting one, though I'm sure we'll have to reimplement it (> 1 table
per data file, an on-disk FSM, etc).

> Second: if VACUUM can run in the background, then there's no
> reason not to run it fairly frequently. In fact, it could become
> an automatically scheduled activity like CHECKPOINT is now,
> or perhaps even a continuously running daemon (which was the
> original conception of it at Berkeley, BTW).

And the original authors concluded that the daemon was very slow in
reclaiming dead space, BTW.

> 3. Lazy VACUUM processes a table in five stages:

> A. Scan relation looking for dead tuples;...
> B. Remove index entries for the dead tuples...
> C. Physically delete dead tuples and compact free space...
> D. Truncate any completely-empty pages at relation's end.
> E. Create/update FSM entry for the table.
...
> If a tuple is dead, we care not whether its index entries are still
> around or not; so there's no risk to logical consistency.

What does this sentence mean? We canNOT remove a dead heap tuple until
we know that there are no index tuples referencing it, and your A, B, C
stages reflect this, so...?

> Another place where lazy VACUUM may be unable to do its job completely
> is in compaction of space on individual disk pages. It can physically
> move tuples to perform compaction only if there are not currently any
> other backends with pointers into that page (which can be tested by
> looking to see if the buffer reference count is one). Again, we punt
> and leave the space to be compacted next time if we can't do it right
> away.

We could keep the shared buffer lock (or add some other kind of lock)
until the tuple is projected - after projection we no longer need to read
the fetched tuple's data from the shared buffer, and the time between
fetching a tuple and projecting it is very short, so keeping a lock on the
buffer would not impact concurrency significantly.

Or we could register a callback cleanup function with the buffer, so that
bufmgr would call it when the refcount drops to 0.
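
A rough sketch of what such a registration hook might look like, as a plain
C illustration; the structure and function names are assumptions, not the
real bufmgr API, and all of the real shared-memory locking is omitted:

#include <stddef.h>

/* Hypothetical buffer header: a pin count plus an optional cleanup hook.
 * The real bufmgr would of course need locking around all of this. */
typedef struct BufferSketch
{
    int    refcount;                        /* number of pins held */
    void (*cleanup)(struct BufferSketch *); /* run when the last pin drops */
} BufferSketch;

/* VACUUM (or any backend) registers the deferred work while it holds a pin. */
static void
register_cleanup(BufferSketch *buf, void (*fn)(BufferSketch *))
{
    buf->cleanup = fn;
}

/* Dropping the last pin fires the deferred cleanup exactly once. */
static void
release_buffer(BufferSketch *buf)
{
    if (--buf->refcount == 0 && buf->cleanup != NULL)
    {
        void (*fn)(BufferSketch *) = buf->cleanup;

        buf->cleanup = NULL;
        fn(buf);                /* e.g. compact the page's free space */
    }
}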

> Presently, VACUUM deletes index tuples by doing a standard index
> scan and checking each returned index tuple to see if it points
> at any of the tuples to be deleted. If so, the index AM is called
> back to delete the tested index tuple. This is horribly inefficient:

...

> This is mainly a problem of a poorly chosen API. The index AMs
> should offer a "bulk delete" call, which is passed a sorted array
> of main-table TIDs. The loop over the index tuples should happen
> internally to the index AM.

I agree with others who think that the main problem of index cleanup
is reading all the index data pages just to remove some index tuples. You
yourself mentioned partial heap scanning - for each scanned part of the
table you'll have to read all the index pages again and again - a very good
way to trash the buffer pool with big indices.
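
For concreteness, here is a standalone sketch of the kind of "bulk delete"
pass being proposed: one sequential sweep over the index entries, each
checked against a sorted array of dead-tuple TIDs. The types and helper
names are hypothetical, not an existing index-AM API:

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical TID: (block, offset) pair identifying a heap tuple. */
typedef struct { uint32_t blkno; uint16_t offnum; } TidSketch;

static int
tid_cmp(const void *a, const void *b)
{
    const TidSketch *x = a;
    const TidSketch *y = b;

    if (x->blkno != y->blkno)
        return (x->blkno < y->blkno) ? -1 : 1;
    return (int) x->offnum - (int) y->offnum;
}

/* Does this index entry point at one of the dead heap tuples? */
static bool
tid_is_dead(const TidSketch *tid, const TidSketch *dead, size_t ndead)
{
    return bsearch(tid, dead, ndead, sizeof(TidSketch), tid_cmp) != NULL;
}

/*
 * Sketch of a "bulk delete": one pass over the index entries (represented
 * here as a plain array of heap TIDs), each checked against a sorted array
 * of dead-tuple TIDs, instead of one full index scan per dead tuple.
 * Returns how many entries would be removed; kill[] marks which ones.
 */
static size_t
bulk_delete_sketch(const TidSketch *index_entries, size_t nentries,
                   const TidSketch *dead_sorted, size_t ndead,
                   bool *kill)
{
    size_t killed = 0;

    for (size_t i = 0; i < nentries; i++)
    {
        kill[i] = tid_is_dead(&index_entries[i], dead_sorted, ndead);
        if (kill[i])
            killed++;
    }
    return killed;
}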

Well, it's probably OK for a first implementation, and you'll win some CPU
with "bulk delete" - I'm not sure how much, though. But there is a more
significant issue with index cleanup if the table is not locked exclusively:
a concurrent index scan returns a tuple (and unlocks the index page),
heap_fetch reads the table row and finds that it's dead; now the index scan
*must* find its current index tuple to continue, but a background vacuum
could already have removed that index tuple =>
elog(FATAL, "_bt_restscan: my bits moved...");

Two ways: hold the index page lock until the heap tuple is checked, or
(rough scheme) store info in shmem (just IndexTupleData.t_tid and a flag)
recording that an index tuple is in use by some scan, so that the cleaner
could change the stored TID (taking one from the previous index tuple) and
set the flag to help the scan restore its current position when it returns.
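
A sketch of the kind of shared-memory entry the second approach implies;
the names and layout are illustrative assumptions only:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical shared-memory entry describing where one in-flight index
 * scan is currently parked, so a concurrent cleaner can repoint it. */
typedef struct ScanPositionSketch
{
    uint32_t blkno;     /* block of the index tuple the scan stopped on */
    uint16_t offnum;    /* line pointer within that block               */
    bool     moved;     /* set by the cleaner if it relocated the scan  */
} ScanPositionSketch;

/* Cleaner side: before removing the tuple a scan is parked on, repoint
 * the scan at the previous index tuple and flag it so that the scan can
 * resynchronize when it resumes. */
static void
cleaner_reroute(ScanPositionSketch *pos,
                uint32_t prev_blkno, uint16_t prev_offnum)
{
    pos->blkno  = prev_blkno;
    pos->offnum = prev_offnum;
    pos->moved  = true;
}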

I'm particularly interested in discussing this issue because it must be
resolved for UNDO, and the chosen way will affect to what extent we'll be
able to implement dirty reads (the first way doesn't allow implementing them
in full - i.e. selects with joins - but is good enough to resolve the RI
constraint concurrency issue).

> There you have it. If people like this, I'm prepared to commit to
> making it happen for 7.2. Comments, objections, better ideas?

Well, my current TODO looks like this (ORDER BY PRIORITY DESC):

1. UNDO;
2. New SMGR;
3. Space reusing.

and I cannot commit to anything about 3 at this point. So, why not refine
vacuum if you want it? I, personally, was never able to convince myself to
spend time on this.

Vadim

Bruce Momjian

unread,
May 18, 2001, 8:42:57 PM5/18/01
to
> Well, my current TODO looks as (ORDER BY PRIORITY DESC):
>
> 1. UNDO;
> 2. New SMGR;
> 3. Space reusing.
>
> and I cannot commit at this point anything about 3. So, why not to refine
> vacuum if you want it. I, personally, was never be able to convince myself
> to spend time for this.

Vadim, can you remind me what UNDO is used for?

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


Tom Lane

unread,
May 18, 2001, 9:00:52 PM5/18/01
to
"Mikheev, Vadim" <vmik...@SECTORBASE.COM> writes:
>> If a tuple is dead, we care not whether its index entries are still
>> around or not; so there's no risk to logical consistency.

> What does this sentence mean? We canNOT remove dead heap tuple untill
> we know that there are no index tuples referencing it and your A,B,C
> reflect this, so ..?

Sorry if it wasn't clear. I meant that if the vacuum process fails
after removing an index tuple but before removing the (dead) heap tuple
it points to, there's no need to try to undo. That state is OK, and
when we next get a chance to vacuum we'll still be able to finish
removing the heap tuple.

>> Another place where lazy VACUUM may be unable to do its job completely
>> is in compaction of space on individual disk pages. It can physically
>> move tuples to perform compaction only if there are not currently any
>> other backends with pointers into that page (which can be tested by
>> looking to see if the buffer reference count is one). Again, we punt
>> and leave the space to be compacted next time if we can't do it right
>> away.

> We could keep share buffer lock (or add some other kind of lock)
> untill tuple projected - after projection we need not to read data
> for fetched tuple from shared buffer and time between fetching
> tuple and projection is very short, so keeping lock on buffer will
> not impact concurrency significantly.

Or drop the pin on the buffer to show we no longer have a pointer to it.
I'm not sure that the time to do projection is short though --- what
if there are arbitrary user-defined functions in the quals or the
projection targetlist?

> Or we could register callback cleanup function with buffer so bufmgr
> would call it when refcnt drops to 0.

Hmm ... might work. There's no guarantee that the refcnt would drop to
zero before the current backend exits, however. Perhaps set a flag in
the shared buffer header, and the last guy to drop his pin is supposed
to do the cleanup? But then you'd be pushing VACUUM's work into
productive transactions, which is probably not the way to go.

>> This is mainly a problem of a poorly chosen API. The index AMs
>> should offer a "bulk delete" call, which is passed a sorted array
>> of main-table TIDs. The loop over the index tuples should happen
>> internally to the index AM.

> I agreed with others who think that the main problem of index cleanup
> is reading all index data pages to remove some index tuples.

For very small numbers of tuples that might be true. But I'm not
convinced it's worth worrying about. If there aren't many tuples to
be freed, perhaps VACUUM shouldn't do anything at all.

> Well, probably it's ok for first implementation and you'll win some CPU
> with "bulk delete" - I'm not sure how much, though, and there is more
> significant issue with index cleanup if table is not locked exclusively:
> concurrent index scan returns tuple (and unlock index page), heap_fetch
> reads table row and find that it's dead, now index scan *must* find
> current index tuple to continue, but background vacuum could already
> remove that index tuple => elog(FATAL, "_bt_restscan: my bits moved...");

Hm. Good point ...

> Two ways: hold index page lock untill heap tuple is checked or (rough
> schema)
> store info in shmem (just IndexTupleData.t_tid and flag) that an index tuple
> is used by some scan so cleaner could change stored TID (get one from prev
> index tuple) and set flag to help scan restore its current position on
> return.

Another way is to mark the index tuple "gone but not forgotten", so to
speak --- mark it dead without removing it. (We could know that we need
to do that if we see someone else has a buffer pin on the index page.)
In this state, the index scan coming back to work would still be allowed
to find the index tuple, but no other index scan would stop on the
tuple. Later passes of vacuum would eventually remove the index tuple,
whenever vacuum happened to pass through at an instant where no one has
a pin on that index page.
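
A tiny sketch of that "mark dead without removing" idea; the flag bits and
names are invented for illustration, not real line-pointer code:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag bits on an index line pointer (values invented). */
enum { ITEM_USED = 0x01, ITEM_DEAD = 0x02 };

typedef struct { uint16_t flags; } IndexItemSketch;

/* VACUUM, seeing someone else's pin on the page, only marks the entry:
 * it stays findable for a scan that is about to come back to it. */
static void
mark_dead_but_present(IndexItemSketch *it)
{
    it->flags |= ITEM_DEAD;
}

/* Other scans simply refuse to stop on a dead-marked entry; a later
 * VACUUM pass physically removes it once the page is unpinned. */
static bool
scan_should_stop_on(const IndexItemSketch *it)
{
    return (it->flags & ITEM_USED) && !(it->flags & ITEM_DEAD);
}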

None of these seem real clean though. Needs more thought.

> Well, my current TODO looks as (ORDER BY PRIORITY DESC):

> 1. UNDO;
> 2. New SMGR;
> 3. Space reusing.

> and I cannot commit at this point anything about 3. So, why not to refine
> vacuum if you want it. I, personally, was never be able to convince myself
> to spend time for this.

Okay, good. I was worried that this idea would conflict with what you
were doing, but it seems it won't.

regards, tom lane


Mikheev, Vadim

unread,
May 18, 2001, 9:14:28 PM5/18/01
to
> Vadim, can you remind me what UNDO is used for?

Ok, last reminder -:))

On transaction abort, read the WAL records and undo (roll back) the
changes made in storage. This would allow us to:

1. Reclaim space allocated by aborted transactions.
2. Implement SAVEPOINTs.
   Just to remind you -:) - in the event of an error discovered by the
   server - duplicate key, deadlock, command mistyping, etc. - the
   transaction will be rolled back to the nearest implicit savepoint set
   just before query execution; or the transaction can be rolled back to
   some explicit savepoint set by the user with the
   ROLLBACK TO <savepoint_name> command. A transaction rolled back to a
   savepoint may be continued.
3. Reuse transaction IDs on postmaster restart.
4. Split pg_log into small files with the ability to remove old ones (ones
   that do not hold statuses for any running transactions).

Vadim


Tom Lane

unread,
May 18, 2001, 9:40:52 PM5/18/01
to
"Mikheev, Vadim" <vmik...@SECTORBASE.COM> writes:
>> Vadim, can you remind me what UNDO is used for?
> Ok, last reminder -:))

> On transaction abort, read WAL records and undo (rollback)
> changes made in storage. Would allow:

> 1. Reclaim space allocated by aborted transactions.
> 2. Implement SAVEPOINTs.
> Just to remind -:) - in the event of error discovered by server
> - duplicate key, deadlock, command mistyping, etc, - transaction
> will be rolled back to the nearest implicit savepoint setted
> just before query execution; - or transaction can be aborted by
> ROLLBACK TO <savepoint_name> command to some explicit savepoint
> setted by user. Transaction rolled back to savepoint may be continued.
> 3. Reuse transaction IDs on postmaster restart.
> 4. Split pg_log into small files with ability to remove old ones (which
> do not hold statuses for any running transactions).

Hm. On the other hand, relying on WAL for undo means you cannot drop
old WAL segments that contain records for any open transaction. We've
already seen several complaints that the WAL logs grow unmanageably huge
when there is a long-running transaction, and I think we'll see a lot
more.

It would be nicer if we could drop WAL records after a checkpoint or two,
even in the presence of long-running transactions. We could do that if
we were only relying on them for crash recovery and not for UNDO.
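
A small sketch of the difference, assuming hypothetical WAL positions;
nothing here is real xlog code:

#include <stdint.h>

/* Hypothetical WAL positions (byte offsets). */
typedef uint64_t WalPosSketch;

static WalPosSketch
min_pos(WalPosSketch a, WalPosSketch b)
{
    return (a < b) ? a : b;
}

/* Everything older than the returned position could be recycled.  With
 * UNDO, the oldest still-open transaction drags the horizon back; with
 * redo-only WAL, only the last checkpoint pins it. */
static WalPosSketch
recycle_horizon(WalPosSketch last_checkpoint_redo,
                WalPosSketch oldest_open_xact_start,
                int wal_needed_for_undo)
{
    if (wal_needed_for_undo)
        return min_pos(last_checkpoint_redo, oldest_open_xact_start);
    return last_checkpoint_redo;    /* long transactions no longer pin WAL */
}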

Looking at the advantages:

1. Space reclamation via UNDO doesn't excite me a whole lot, if we can
make lightweight VACUUM work well. (I definitely don't like the idea
that after a very long transaction fails and aborts, I'd have to wait
another very long time for UNDO to do its thing before I could get on
with my work. Would much rather have the space reclamation happen in
background.)

2. SAVEPOINTs would be awfully nice to have, I agree.

3. Reusing xact IDs would be nice, but there's an answer with a lot less
impact on the system: go to 8-byte xact IDs. Having to shut down the
postmaster when you approach the 4Gb transaction mark isn't going to
impress people who want a 24x7 commitment, anyway.

4. Recycling pg_log would be nice too, but we've already discussed other
hacks that might allow pg_log to be kept finite without depending on
UNDO (or requiring postmaster restarts, IIRC).

I'm sort of thinking that undoing back to a savepoint is the only real
usefulness of WAL-based UNDO. Is it practical to preserve the WAL log
just back to the last savepoint in each xact, not the whole xact?

Another thought: do we need WAL UNDO at all to implement savepoints?
Is there some way we could do them like nested transactions, wherein
each savepoint-to-savepoint segment is given its own transaction number?
Committing multiple xact IDs at once might be a little tricky, but it
seems like a narrow, soluble problem. Implementing UNDO without
creating lots of performance issues looks a lot harder.

Nathan Myers

unread,
May 18, 2001, 9:59:16 PM5/18/01
to
On Fri, May 18, 2001 at 06:10:10PM -0700, Mikheev, Vadim wrote:
> > Vadim, can you remind me what UNDO is used for?
>
> Ok, last reminder -:))
>
> On transaction abort, read WAL records and undo (rollback)
> changes made in storage. Would allow:
>
> 1. Reclaim space allocated by aborted transactions.
> 2. Implement SAVEPOINTs.
> Just to remind -:) - in the event of error discovered by server
> - duplicate key, deadlock, command mistyping, etc, - transaction
> will be rolled back to the nearest implicit savepoint setted
> just before query execution; - or transaction can be aborted by
> ROLLBACK TO <savepoint_name> command to some explicit savepoint
> setted by user. Transaction rolled back to savepoint may be continued.
> 3. Reuse transaction IDs on postmaster restart.
> 4. Split pg_log into small files with ability to remove old ones (which
> do not hold statuses for any running transactions).

I missed the original discussions; apologies if this has already been
beaten into the ground. But... mightn't sub-transactions be a
better-structured way to expose this service?

Nathan Myers
n...@zembu.com

Bruce Momjian

unread,
May 18, 2001, 11:15:45 PM5/18/01
to
> Another thought: do we need WAL UNDO at all to implement savepoints?
> Is there some way we could do them like nested transactions, wherein
> each savepoint-to-savepoint segment is given its own transaction number?
> Committing multiple xact IDs at once might be a little tricky, but it
> seems like a narrow, soluble problem. Implementing UNDO without
> creating lots of performance issues looks a lot harder.

I am confused about why we can't implement subtransactions as part of our
command counter. The counter is already 4 bytes long. Couldn't we roll
back to counter number X-10?

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


Tom Lane

unread,
May 18, 2001, 11:18:09 PM5/18/01
to
Bruce Momjian <pg...@candle.pha.pa.us> writes:
> I am confused why we can't implement subtransactions as part of our
> command counter? The counter is already 4 bytes long. Couldn't we
> rollback to counter number X-10?

That'd work within your own transaction, but not from outside it.
After you commit, how will other backends know which command-counter
values of your transaction to believe, and which not?

regards, tom lane


Bruce Momjian

unread,
May 18, 2001, 11:42:02 PM5/18/01
to
> Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > I am confused why we can't implement subtransactions as part of our
> > command counter? The counter is already 4 bytes long. Couldn't we
> > rollback to counter number X-10?
>
> That'd work within your own transaction, but not from outside it.
> After you commit, how will other backends know which command-counter
> values of your transaction to believe, and which not?

Seems we would have to store the command counters for the parts of the
transaction that committed, or the ones that were rolled back. Yuck.

I hate to add UNDO complexity just for subtransactions.

Hey, I have an idea. Can we do subtransactions as separate transactions
(as Tom mentioned), and put the subtransaction IDs in the WAL, so they
can be safely committed/rolled back as a group?

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


Tom Lane

unread,
May 18, 2001, 11:54:06 PM5/18/01
to
Bruce Momjian <pg...@candle.pha.pa.us> writes:
> Hey, I have an idea. Can we do subtransactions as separate transactions
> (as Tom mentioned), and put the subtransaction IDs in the WAL, so they
> can be safely committed/rolled back as a group?

It's not quite that easy: all the subtransactions have to commit at
*the same time* from the point of view of other xacts, or you have
consistency problems. So there'd need to be more xact-commit mechanism
than there is now. Snapshots are also interesting; we couldn't use a
single xact ID per backend to show the open-transaction state.

WAL doesn't really enter into it AFAICS...

regards, tom lane


Bruce Momjian

unread,
May 19, 2001, 12:03:34 AM5/19/01
to
> Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > Hey, I have an idea. Can we do subtransactions as separate transactions
> > (as Tom mentioned), and put the subtransaction IDs in the WAL, so they
> > can be safely committed/rolled back as a group?
>
> It's not quite that easy: all the subtransactions have to commit at
> *the same time* from the point of view of other xacts, or you have
> consistency problems. So there'd need to be more xact-commit mechanism
> than there is now. Snapshots are also interesting; we couldn't use a
> single xact ID per backend to show the open-transaction state.

Yes, I knew it was going to come up that you have to add a lock to
pg_log that is only in effect when someone is committing a transaction
with subtransactions. Normal transactions take a read/shared lock, while
a subtransaction commit needs an exclusive/write lock.

Seems a lot easier than UNDO. Vadim, you mentioned UNDO would allow
space reuse for rolled-back transactions, but in most cases the space
reuse is going to be for old copies of committed transactions, right?
Were you going to use WAL to reclaim free space from old copies too?

Vadim, I think I am missing something. You mentioned UNDO would be used
for these cases and I don't understand the purpose of adding what would
seem to be a pretty complex capability:

> 1. Reclaim space allocated by aborted transactions.

Is there really a lot to be saved here vs. old tuples of committed
transactions?

> 2. Implement SAVEPOINTs.
> Just to remind -:) - in the event of error discovered by server
> - duplicate key, deadlock, command mistyping, etc, - transaction
> will be rolled back to the nearest implicit savepoint setted
> just before query execution; - or transaction can be aborted by
> ROLLBACK TO <savepoint_name> command to some explicit savepoint
> setted by user. Transaction rolled back to savepoint may be
> continued.

Being discussed - perhaps using multiple transactions.

> 3. Reuse transaction IDs on postmaster restart.

Doesn't seem like a huge win.

> 4. Split pg_log into small files with ability to remove old ones (which
> do not hold statuses for any running transactions).

That one is interesting. Seems the only workaround for that would be to
allow a global scan of all databases and tables to set commit flags, then
shrink pg_log and set an XID offset as the start of the file.

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


Kaare Rasmussen

unread,
May 19, 2001, 4:52:08 AM5/19/01
to
> Second: if VACUUM can run in the background, then there's no reason not
> to run it fairly frequently. In fact, it could become an automatically
> scheduled activity like CHECKPOINT is now, or perhaps even a continuously
> running daemon (which was the original conception of it at Berkeley, BTW).

Maybe it's obvious, but I'd like to mention that you need some way of
setting priority. If it's a daemon, or a process, you can nice it. If not,
you need to implement something yourself.

--
Kaare Rasmussen --Linux, games,-- Tel: 3816 2582
Kaki Data tshirts, merchandise Fax: 3816 2501
Howitzvej 75 Open 14.00-18.00 Web: www.suse.dk
2000 Frederiksberg Saturday 11.00-17.00 Email: k...@webline.dk

Bruce Momjian

unread,
May 19, 2001, 8:16:38 AM5/19/01
to
> > Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > > Hey, I have an idea. Can we do subtransactions as separate transactions
> > > (as Tom mentioned), and put the subtransaction IDs in the WAL, so they
> > > can be safely committed/rolled back as a group?
> >
> > It's not quite that easy: all the subtransactions have to commit at
> > *the same time* from the point of view of other xacts, or you have
> > consistency problems. So there'd need to be more xact-commit mechanism
> > than there is now. Snapshots are also interesting; we couldn't use a
> > single xact ID per backend to show the open-transaction state.
>
> Yes, I knew it was going to come up that you have to add a lock to
> pg_log that is only in effect when someone is committing a transaction
> with subtransactions. Normal transactions take a read/shared lock, while
> a subtransaction commit needs an exclusive/write lock.

I was wrong here. Multiple backends can write to pg_log at the same
time, even subtransaction ones. It is just that no backend can read from
pg_log during a subtransaction commit. Actually, they can if they are
reading the status of a transaction that is less than the minimum active
transaction ID; see GetXmaxRecent().

Doesn't seem too bad.
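
A rough sketch of that locking discipline, using a POSIX read-write lock
purely for illustration. The raw_read/raw_mark_committed function pointers
stand in for whatever actually touches pg_log; none of this is real
PostgreSQL code:

#include <pthread.h>
#include <stdint.h>

/* Hypothetical gatekeeper for pg_log access around a multi-xid commit. */
typedef struct
{
    pthread_rwlock_t lock;              /* init with PTHREAD_RWLOCK_INITIALIZER */
    uint32_t         oldest_active_xid; /* cf. GetXmaxRecent() mentioned above  */
} XidStatusGateSketch;

/* Status lookups: xids older than the oldest active xid were settled long
 * ago and never need the lock; everyone else takes it shared. */
static unsigned
read_status(XidStatusGateSketch *g, uint32_t xid,
            unsigned (*raw_read)(uint32_t))
{
    unsigned s;

    if (xid < g->oldest_active_xid)
        return raw_read(xid);

    pthread_rwlock_rdlock(&g->lock);
    s = raw_read(xid);
    pthread_rwlock_unlock(&g->lock);
    return s;
}

/* A subtransaction group commit takes the lock exclusively, so the whole
 * group of xids becomes visible to readers at once. */
static void
commit_xid_group(XidStatusGateSketch *g, const uint32_t *xids, int n,
                 void (*raw_mark_committed)(uint32_t))
{
    pthread_rwlock_wrlock(&g->lock);
    for (int i = 0; i < n; i++)
        raw_mark_committed(xids[i]);
    pthread_rwlock_unlock(&g->lock);
}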

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


Bruce Momjian

unread,
May 19, 2001, 8:27:52 AM5/19/01
to
> Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > Hey, I have an idea. Can we do subtransactions as separate transactions
> > (as Tom mentioned), and put the subtransaction IDs in the WAL, so they
> > can be safely committed/rolled back as a group?
>
> It's not quite that easy: all the subtransactions have to commit at
> *the same time* from the point of view of other xacts, or you have
> consistency problems. So there'd need to be more xact-commit mechanism
> than there is now. Snapshots are also interesting; we couldn't use a
> single xact ID per backend to show the open-transaction state.

OK, I have another idea about subtransactions as multiple transaction
ids.

I realize that the snapshot problem would be an issue, because now
instead of looking at your own transaction id, you have to look at
multiple transaction ids. We could do this as a List of xid's, but that
will not scale well.

My idea is for a subtransaction backend to have its own pg_log-style
memory area that shows which transactions it owns and has
committed/aborted. The log can start at the backend's starting xid, and the
backend can consult pg_log, or this local area, any time it needs to check
the visibility of a transaction greater than its minimum xid. 16k can hold
64k xids, so it should scale pretty well. (Each xid is two bits, as in
pg_log.)
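
A standalone sketch of that two-bits-per-xid bookkeeping; names, values,
and layout are illustrative only. At 4 xids per byte, 16 kB does indeed
cover 65,536 xids:

#include <stdint.h>
#include <stdlib.h>

/* Two status bits per transaction ID, pg_log style (values invented here). */
enum { XS_IN_PROGRESS = 0, XS_COMMITTED = 1, XS_ABORTED = 2 };

typedef struct
{
    uint32_t start_xid;   /* first xid covered by this local map */
    uint32_t nxids;       /* capacity */
    uint8_t *bits;        /* 2 bits per xid: 16 kB covers 65,536 xids */
} LocalXidStatusSketch;

static LocalXidStatusSketch *
local_status_create(uint32_t start_xid, uint32_t nxids)
{
    LocalXidStatusSketch *m = malloc(sizeof *m);   /* error checks omitted */

    m->start_xid = start_xid;
    m->nxids = nxids;
    m->bits = calloc((nxids + 3) / 4, 1);          /* 4 xids per byte */
    return m;
}

static void
local_status_set(LocalXidStatusSketch *m, uint32_t xid, unsigned status)
{
    uint32_t idx   = xid - m->start_xid;
    uint32_t byte  = idx / 4;
    unsigned shift = (idx % 4) * 2;

    m->bits[byte] = (uint8_t) ((m->bits[byte] & ~(3u << shift)) |
                               ((status & 3u) << shift));
}

static unsigned
local_status_get(const LocalXidStatusSketch *m, uint32_t xid)
{
    uint32_t idx = xid - m->start_xid;

    return (m->bits[idx / 4] >> ((idx % 4) * 2)) & 3u;
}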

In fact, multi-query transactions are just a special case of
subtransactions, where all previous subtransactions are
committed/visible. We could use the same pg_log-style memory area for
multi-query transactions, eliminating the command counter and saving 8
bytes overhead per tuple.

Currently, the XMIN/XMAX command counters are used only by the current
transaction, and they are useless once the transaction finishes and take
up 8 bytes on disk.

So, this idea gets us subtransactions and saves 8 bytes overhead. This
reduces our per-tuple overhead from 36 to 28 bytes, a 22% reduction!

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


Tom Lane

unread,
May 19, 2001, 10:47:21 AM5/19/01
to
Kaare Rasmussen <k...@webline.dk> writes:
>> Second: if VACUUM can run in the background, then there's no reason not
>> to run it fairly frequently. In fact, it could become an automatically
>> scheduled activity like CHECKPOINT is now, or perhaps even a continuously
>> running daemon (which was the original conception of it at Berkeley, BTW).

> Maybe it's obvious, but I'd like to mention that you need some way of
> setting priority.

No, you don't --- or at least it's far less easy than it looks. If any
one of the backends gets niced, then you have a priority inversion
problem. That backend may be holding a lock that other backends want.
If it's not running because it's been niced, then the other backends
that are not niced are still kept from running.

Even if we wanted to implement priority inversion detection (painful
for locks, and completely impractical for spinlocks), there wouldn't
be anything we could do about it: Unix doesn't allow non-root processes
to reduce their nice setting.

Obviously any automatically-scheduled VACUUM would need some kind of
throttling mechanism to keep it from running constantly. But that's not
a priority setting.
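
A minimal sketch of one possible throttle, assuming a simple "nap after
every N pages" policy; the structure and fields are illustrative, not a
real setting:

#include <unistd.h>     /* usleep() */

/* Hypothetical throttle: after each batch of pages, take a short nap so a
 * background vacuum consumes only a bounded share of I/O and CPU, without
 * touching process priorities (and so without risking priority inversion). */
typedef struct
{
    int pages_per_batch;    /* work done between naps        */
    int nap_microsec;       /* length of each nap            */
    int pages_done;         /* pages processed in this batch */
} VacuumThrottleSketch;

/* Call once per page processed by the background vacuum. */
static void
throttle_page_done(VacuumThrottleSketch *t)
{
    if (++t->pages_done >= t->pages_per_batch)
    {
        usleep((useconds_t) t->nap_microsec);
        t->pages_done = 0;
    }
}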

regards, tom lane

Tom Lane

unread,
May 19, 2001, 11:16:26 AM5/19/01
to
Bruce Momjian <pg...@candle.pha.pa.us> writes:
> In fact, multi-query transactions are just a special case of
> subtransactions, where all previous subtransactions are
> committed/visible. We could use the same pg_log-style memory area for
> multi-query transactions, eliminating the command counter and saving 8
> bytes overhead per tuple.

Interesting thought, but command IDs don't act the same as transactions;
in particular, visibility of one scan to another doesn't necessarily
depend on whether the scan has finished.

Possibly that could be taken into account by having different rules for
"do we think it's committed" in the local pg_log than the global one.

Also, this distinction would propagate out of the xact status code;
for example, it wouldn't do for heapam to set the "known committed"
bit on a tuple just because it's from a previous subtransaction of the
current xact. Right now that works because heapam knows the difference
between xacts and commands; it would still have to know the difference.

A much more significant objection is that such a design would eat xact
IDs at a tremendous rate, to no purpose. CommandCounterIncrement is a
cheap operation now, and we do it with abandon. It would not be cheap
if it implied allocating a new xact ID that would eventually need to be
marked committed. I don't mind allocating a new xact ID for each
explicitly-created savepoint, but a new ID per CommandCounterIncrement
is a different story.

regards, tom lane
