Partitioning has no effect on performance unless the Cost Based
Optimizer is used.
Is that true? I find it a bit hard to believe, but then I'm not sure
since I don't know much about partitioning.
rgds
Daud
Bullshit. It works very well on tables even when they don't have statistics.
You only have to make sure the indexes are partitioned too.
Performance is just one of the benefits gained from partitioning.
Both CBO and RBO can benefit from high availability (since partitions
can be managed individually) and ease of administration with
partitioning.
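For anyone new to this, here is a minimal sketch of what "the indexes are partitioned too" looks like -- a range-partitioned table with a LOCAL index. All table, column, and partition names are made up for illustration:

```sql
-- Hypothetical sales table, range-partitioned by month
CREATE TABLE sales (
  sale_id    NUMBER,
  sale_date  DATE,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2002_09 VALUES LESS THAN (TO_DATE('2002-10-01','YYYY-MM-DD')),
  PARTITION p2002_10 VALUES LESS THAN (TO_DATE('2002-11-01','YYYY-MM-DD')),
  PARTITION pmax     VALUES LESS THAN (MAXVALUE)
);

-- LOCAL makes the index inherit the table's partitioning scheme,
-- so each index partition can be maintained (rebuilt, dropped,
-- exchanged) along with its table partition.
CREATE INDEX sales_date_idx ON sales (sale_date) LOCAL;
```

The LOCAL keyword is what keeps index maintenance partition-by-partition; a GLOBAL index would have to be rebuilt as a whole after most partition operations.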
That said, it is likely nobody will be talking about RBO anymore in
the not too distant future. So the whole discussion may become moot.
- Jusung Yang
dau...@hotmail.com (Daud) wrote in message news:<f0bf3cc3.02102...@posting.google.com>...
The RBO and myself have a number of things in common, the most obvious being
that we are fast approaching our use-by date. However, another thing we are
both rather hopeless at is pruning. I have killed more roses than I would
like to admit, and the RBO wouldn't know how to perform partition pruning on
a partitioned table if its life depended on it (which I guess it does). The
RBO also wouldn't have the foggiest notion of how to use an effective
partition-wise join.
Therefore from a purely performance point of view, the RBO is kinda
buggered.
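To illustrate what pruning buys you, a sketch (assuming a hypothetical sales table range-partitioned by sale_date into monthly partitions):

```sql
-- With monthly range partitions on sale_date, the CBO can prune:
-- only the partition(s) covering October 2002 are scanned.
-- The RBO has no notion of pruning and considers the whole table.
SELECT SUM(amount)
  FROM sales
 WHERE sale_date >= TO_DATE('2002-10-01','YYYY-MM-DD')
   AND sale_date <  TO_DATE('2002-11-01','YYYY-MM-DD');
```

You can see whether pruning happened by checking the PARTITION_START/PARTITION_STOP columns in the execution plan.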
Cheers
Richard
"Daud" <dau...@hotmail.com> wrote in message
news:f0bf3cc3.02102...@posting.google.com...
"Richard Foote" <richar...@bigpond.com> schreef in bericht
news:3wjv9.64478$g9.1...@newsfeeds.bigpond.com...
> I know companies that never use CBO, just because statistics slow
> down performance if they aren't taken regularly. If only one table
> of a user has statistics taken from it, it greatly reduces the speed of
> all the other tables' queries under RBO. In general an Oracle database
> can perform very well without statistics and CBO. The example I talk
> about is a system with tables of less than 100,000 rows, some with
> queries joining tables on a date basis. I saw a query improve from
> seven seconds to less than 1 or 2 secs just by partitioning the
> tables and the indexes, without statistics and CBO.
The ironic part is that your example above is an excellent
candidate for the CBO: relatively small datasets, so statistics can be
gathered periodically without affecting the online system.
The 'issues' (and I mean that in quotes) with the CBO arise when your
datasets span the giga- and terabyte range, if not greater. Some
RDBMS vendors allow one to export the statistics from one database to
a file, which can then be imported into another database. The beauty
here is that if you are mirroring your instances, you can run the
statistics on your backup and upload them into your primary.
I expect Oracle to mimic Sybase's behavior relatively soon -- they
'borrow' everything else, so why not? <g>
--
Pablo Sanchez, High-Performance Database Engineering
http://www.hpdbe.com
- Kenneth Koenraadt
Well, they've been doing it since at least 8.1.6. Check out dbms_stats.
There's an export_<xxx>_stats and import_<xxx>_stats procedure. Grab
statistics from one table or schema into a 'stats table' using the right
export.... procedure, use boring old 'exp' export to pipe the statistics
table into a traditional dump file, transfer the dump file to another
database, use 'imp' to import the statistics table into the new database,
and use the import_<xxx>_.... procedure to transfer the statistics out of
the statistics table into the data dictionary.
Voila.
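In SQL*Plus the round trip looks roughly like this (SCOTT, MY_STATS, and the file name are placeholder names; the procedures themselves are the documented dbms_stats ones):

```sql
-- On the source database: create a stats table and park
-- the schema's statistics in it.
EXEC dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'MY_STATS');
EXEC dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'MY_STATS');

-- Ship the stats table with plain old exp/imp (run at the OS prompt):
--   exp scott/tiger tables=MY_STATS file=stats.dmp
--   imp scott/tiger tables=MY_STATS file=stats.dmp

-- On the target database: load from the stats table
-- into the data dictionary.
EXEC dbms_stats.import_schema_stats(ownname => 'SCOTT', stattab => 'MY_STATS');
```

There are matching export/import procedures at the table, index, and database level as well, following the same pattern.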
Even more interestingly, you can use the dbms_stats.set_<xxx>_stats
procedures to completely make up statistics for objects - to claim, for
example, that a 14-row table (mentioning no names) actually has 14 billion
rows in it, and to get the optimiser to behave accordingly.
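A sketch of that trick (the owner and table names are hypothetical, and the numrows/numblks values are just an invented lie for the optimiser):

```sql
BEGIN
  dbms_stats.set_table_stats(
    ownname => 'SCOTT',
    tabname => 'EMP',          -- a famously small table
    numrows => 14000000000,    -- claim 14 billion rows...
    numblks => 200000000       -- ...spread over 200 million blocks
  );
END;
/
```

The optimiser now costs plans against the fake numbers, which is handy for testing how a query plan would change at production volumes without loading production data.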
DBMS_STATS is a seriously sexy package (and I probably need to get a life).
Regards
HJR
>
> DBMS_STATS is a seriously sexy package (and I probably need to get a
> life).
You and me both!
Thx for the info!!!
dau...@hotmail.com (Daud) wrote in message news:<f0bf3cc3.02102...@posting.google.com>...