
[HACKERS] Performance improvement hints


de...@cdi.cz

Sep 12, 2000, 8:01:12 AM
Hello,
I have encountered problems with a particular query, so
I started to dig into the sources. I have two questions/ideas:

1) when the optimizer computes the size of a join, it does it as
card(R1)*card(R2)*selectivity. Suppose two relations
(R1 & R2), each with 10000 rows. If you (inner) join them
using the equality operator, the result is at most 10000
rows (min(card(R1),card(R2))). But pg estimates
1 000 000 (it uses a selectivity of 0.01 here).
Then when computing the cost, this results in a very high
cost for the hash and nested-loop joins BUT a low (right)
cost for the merge join. This is because for hash and loop
joins the cost is estimated from the row count, while the
merge join uses another estimation (as it always knows that
a merge join can only be done on an equality op).
This leads to the merge join being used for the majority of
joins. Unfortunately I found that in the majority of such
cases the hash join is twice as fast.
I tested it using SET ENABLE_MERGEJOIN=OFF ...
What about changing the cost estimator to use min(card(R1),
card(R2)) instead of card(R1)*card(R2)*selectivity in the
case where R1 and R2 are joined using equality? It should
lead to much faster plans for the majority of SQL queries.
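
To make the numbers concrete, a minimal sketch (the table and
column names are placeholders, not my real schema):

-- two 10000-row relations joined on equality; with no statistics the
-- planner falls back to the default selectivity of 0.01
create table r1 (id integer);
create table r2 (id integer);
-- ... load 10000 rows into each ...
explain select * from r1, r2 where r1.id = r2.id;
-- estimated rows = 10000 * 10000 * 0.01 = 1000000, even though an
-- equi-join on unique ids can return at most 10000 rows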

2) suppose we have a relation R1(id,name) and an index ix(id,name)
on it. For a query like: select id,name from R1 order by id
the planner will prefer to do seqscan+sort (although R1
is rather big). And yes, it really is faster than using
an indexscan.
But an indexscan always looks up the actual record in the heap,
even if all needed attributes are contained in the index.
Oracle and even MSSQL read attributes directly from the index
without looking up the actual tuple in the heap.
Is there any need to do it in such an inefficient way?
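
A minimal sketch of the setup described above (the relation and
index names are the ones from the text):

create table R1 (id integer, name text);
create index ix on R1 (id, name);
-- both id and name are stored in ix, yet the indexscan still fetches
-- every heap tuple, so the planner ends up preferring seqscan+sort
explain select id, name from R1 order by id;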

regards, devik


Jules Bean

Sep 12, 2000, 9:12:23 AM
On Tue, Sep 12, 2000 at 02:30:09PM +0200, de...@cdi.cz wrote:
> Hello,
> I have encountered problems with a particular query, so
> I started to dig into the sources. I have two questions/ideas:
>
> 1) when the optimizer computes the size of a join, it does it as
> card(R1)*card(R2)*selectivity. Suppose two relations
> (R1 & R2), each with 10000 rows. If you (inner) join them
> using the equality operator, the result is at most 10000
> rows (min(card(R1),card(R2))). But pg estimates
> 1 000 000 (it uses a selectivity of 0.01 here).

Surely not. If you inner join, you can get many more than min
(card(R1),card(R2)), if you are joining over non-unique keys (a common
case). For example:

employee:

name    job
Jon     Programmer
George  Programmer

job_drinks:

job         drink
Programmer  Jolt
Programmer  Coffee
Programmer  Beer


The natural (inner) join between these two tables results in 6 rows,
card(R1)*card(R2).

I think you mean that min(card(R1),card(R2)) is the correct figure
when the join is done over a unique key in both tables.
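
The same example as runnable SQL (a sketch; column types assumed):

create table employee (name text, job text);
create table job_drinks (job text, drink text);
insert into employee values ('Jon', 'Programmer');
insert into employee values ('George', 'Programmer');
insert into job_drinks values ('Programmer', 'Jolt');
insert into job_drinks values ('Programmer', 'Coffee');
insert into job_drinks values ('Programmer', 'Beer');
-- every employee row matches every job_drinks row:
-- 2 * 3 = 6 rows, i.e. card(R1) * card(R2)
select e.name, d.drink from employee e, job_drinks d where e.job = d.job;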


>
> 2) suppose we have a relation R1(id,name) and an index ix(id,name)
> on it. For a query like: select id,name from R1 order by id
> the planner will prefer to do seqscan+sort (although R1
> is rather big). And yes, it really is faster than using
> an indexscan.
> But an indexscan always looks up the actual record in the heap,
> even if all needed attributes are contained in the index.
> Oracle and even MSSQL read attributes directly from the index
> without looking up the actual tuple in the heap.
> Is there any need to do it in such an inefficient way?


I believe this is because PgSQL doesn't remove entries from the index
at DELETE time, thus it is always necessary to refer to the main table
in case the entry found in the index has since been deleted.
Presumably this speeds up deletes (but I find this behaviour surprising
too).

Jules

Tom Lane

Sep 12, 2000, 10:16:02 AM
de...@cdi.cz writes:
> 1) when the optimizer computes the size of a join, it does it as
> card(R1)*card(R2)*selectivity. Suppose two relations
> (R1 & R2), each with 10000 rows. If you (inner) join them
> using the equality operator, the result is at most 10000
> rows (min(card(R1),card(R2))). But pg estimates
> 1 000 000 (it uses a selectivity of 0.01 here).

0.01 is only the default estimate used if you've never done a VACUUM
ANALYZE (hint hint). After ANALYZE, there are column statistics
available that will give a better estimate.
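
For instance (just a sketch - the table and column names are
placeholders; attdisbursion is the per-column statistic the equality
estimate is drawn from):

vacuum analyze r1;
-- the stored disbursion for the join column, which the equality
-- selectivity estimate is based on
select attname, attdisbursion
from pg_attribute
where attrelid = (select oid from pg_class where relname = 'r1')
and attname = 'id';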

Note that your claim above is incorrect unless you are joining on unique
columns, anyway. In the extreme case, if all the entries have the same
value in the column being used, you'd get card(R1)*card(R2) output rows.
I'm unwilling to make the system assume column uniqueness without
evidence to back it up, because the consequences of assuming an overly
small output row count are a lot worse than assuming an overly large
one.

One form of evidence that the planner should take into account here is
the existence of a UNIQUE index on a column --- if one has been created,
we could assume column uniqueness even if no VACUUM ANALYZE has ever
been done on the table. This is on the to-do list, but I don't feel
it's real high priority. The planner's results are pretty much going
to suck in the absence of VACUUM ANALYZE stats anyway :-(
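
That is, evidence as simple as this declaration (the index name is
arbitrary):

-- a unique index on the join column proves there is at most one match
-- per outer row, so an equi-join can return at most
-- min(card(R1), card(R2)) rows
create unique index r1_id_key on r1 (id);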

> Then when computing the cost, this results in a very high
> cost for the hash and nested-loop joins BUT a low (right)
> cost for the merge join. This is because for hash and loop
> joins the cost is estimated from the row count, while the
> merge join uses another estimation (as it always knows that
> a merge join can only be done on an equality op).
> This leads to the merge join being used for the majority of
> joins. Unfortunately I found that in the majority of such
> cases the hash join is twice as fast.

The mergejoin cost calculation may be overly optimistic. The cost
estimates certainly need further work.

> But an indexscan always looks up the actual record in the heap,
> even if all needed attributes are contained in the index.
> Oracle and even MSSQL read attributes directly from the index
> without looking up the actual tuple in the heap.

Doesn't work in Postgres' storage management scheme --- the heap
tuple must be consulted to see if it's still valid.

regards, tom lane

de...@cdi.cz

Sep 12, 2000, 10:50:41 AM
> > using the equality operator, the result is at most 10000
> > rows (min(card(R1),card(R2))). But pg estimates
> > 1 000 000 (it uses a selectivity of 0.01 here).
>
> Surely not. If you inner join, you can get many more than min
> (card(R1),card(R2)), if you are joining over non-unique keys (a common
> case). For example:

Ohh yes, you are right. I also found that my main problem
was that I had not run VACUUM ANALYZE, so I had an invalid
value for the column's disbursion.
I ran it and now the hash join estimates the row count correctly.

> > But an indexscan always looks up the actual record in the heap,
> > even if all needed attributes are contained in the index.
> > Oracle and even MSSQL read attributes directly from the index
> > without looking up the actual tuple in the heap.
>

> I believe this is because PgSQL doesn't remove entries from the index
> at DELETE time, thus it is always necessary to refer to the main table
> in case the entry found in the index has since been deleted.

Hmm, it looks reasonable. But it still should not prevent us
from retrieving data directly from the index where possible.
What do you think? The only problem I can imagine is if it
has something to do with locking ..

regards, devik


de...@cdi.cz

Sep 13, 2000, 7:45:24 AM
> > But an indexscan always looks up the actual record in the heap,
> > even if all needed attributes are contained in the index.
> > Oracle and even MSSQL read attributes directly from the index
> > without looking up the actual tuple in the heap.
>
> Doesn't work in Postgres' storage management scheme --- the heap
> tuple must be consulted to see if it's still valid.

yes, I just spent another day looking into the sources and
it seems that we need the xmin/xmax stuff.
What do you think about this approach:

1) add all validity & tx fields from the heap tuple into
the index tuple too
2) when generating a plan for an index scan, try to determine
whether the target list can be satisfied using only data
from the index tuples; if yes, compute the cost without
accounting for random heap page reads - it will lead to a
much lower cost (see the sketch below)
3) whenever you update/delete a heap tuple's tx fields, update
them also in the indices (you don't have to delete them from
the index)

It will cost more storage space and slightly more work when
updating indices, but it should give excellent performance
when an index is used.
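
To illustrate what step 2 means at the SQL level (the schema here is
hypothetical, extending the earlier R1/ix example with a column that
is not in the index):

create table R1 (id integer, name text, price integer);
create index ix on R1 (id, name);
-- covered: the target list uses only columns stored in ix, so under
-- this proposal the query could be answered from the index alone
select id, name from R1 order by id;
-- not covered: price is not in ix, so the heap must be visited anyway
select id, name, price from R1 order by id;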

Measurements:
I have a table with about 2 mil. rows declared as
bigrel(namex varchar(50),cnt integer,sale datetime).
I regularly need to run this query against it:
select namex,sum(cnt) from bigrel group by namex;
It took (in seconds):

Server         Index: YES   Index: NO
pg7.01 linux           58         264
MSSQL7 winnt           17          22

I compared on the same machine (PII/375, 128MB RAM) using
WINNT under VMWARE and native linux 2.2. pg was
vacuum analyzed.
Why is pgsql so slow? The mssql plan without the index uses
hash aggregation, but pg sorts the whole relation.
With the index, pg has a big overhead from reading heap
tuples - mssql uses the data directly from the scanned index.
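
Roughly the setup, with an illustrative covering index (the exact
index definition may differ):

create table bigrel (namex varchar(50), cnt integer, sale datetime);
create index bigrel_ix on bigrel (namex, cnt);
-- the query measured in the "Index: YES" column above
explain select namex, sum(cnt) from bigrel group by namex;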

I also noticed another problem: when I added
where namex<'0' it took 110ms on pg when I used
set enable_seqscan=off;
Without it, the planner still tried to use seqscan+sort,
which took 27s in this case.

I'm not sure how complex the proposed changes are. Another
way would be to implement another aggregator, like HashAgg,
which would use hashing.
But it could be even more complicated, as one has to use a
temp relation to store all the hash buckets ..

Still, I think that direct index reads should give us a huge
speed improvement for all indexed queries.
I'm prepared to implement it, but I'd like to know your
hints/complaints.

Regards, devik

