I have a table that can contain around 50 million records and is looked
up by column X. X has a cardinality of 20, i.e. it can contain 20 distinct
values. A lookup by X will be done often.
Does it make sense to not use an index at all? Or is a B-tree index better
than nothing?
Isn't this a perfect case for index compression?
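For what it's worth, Oracle offers two options aimed at low-cardinality columns like this one: key-compressed B-tree indexes and bitmap indexes. A sketch of the syntax for both (the table and index names here are placeholders, not from the original post; which one is appropriate depends on your DML pattern):

```sql
-- Compressed B-tree: each distinct key value is stored once per leaf
-- block instead of once per row entry. COMPRESS 1 compresses on the
-- first (here, the only) key column.
CREATE INDEX big_table_x_ix ON big_table (x) COMPRESS 1;

-- Bitmap index: Oracle's other low-cardinality option. Very compact
-- with only 20 distinct values, but best avoided under heavy concurrent
-- DML, since bitmap index locking is coarse-grained.
CREATE BITMAP INDEX big_table_x_bix ON big_table (x);
```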
Anyway,
CREATE TABLE Moo (Cow NUMBER);

BEGIN
  FOR A IN 1..20 LOOP
    FOR B IN 1..1000 LOOP
      INSERT INTO Moo VALUES (A);
    END LOOP;
  END LOOP;
END;
/
COMMIT;
SELECT COUNT(*) FROM Moo WHERE Cow = 5;
SELECT COUNT(*) FROM Moo WHERE Cow = 6;
SELECT COUNT(*) FROM Moo WHERE Cow = 7;
For any of them:
First time: Elapsed: 00:00:00.03
After that: Elapsed: 00:00:00.01
CREATE INDEX A ON Moo(Cow);
SELECT COUNT(*) FROM Moo WHERE Cow = 8;
SELECT COUNT(*) FROM Moo WHERE Cow = 9;
SELECT COUNT(*) FROM Moo WHERE Cow = 10;
For any of them:
First time: Elapsed: 00:00:00.01
After that: Elapsed: 00:00:00.00
I'd say the index helps, and that's with only 20,000 records. You may
want to test it yourself on a larger set.
Brian
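Elapsed time alone can mislead here, since the second run benefits from caching. A way to confirm what the optimizer actually does with Brian's test index is SQL*Plus AUTOTRACE (the plan operations named in the comments are what one would expect, not captured output):

```sql
-- Show the execution plan alongside the result.
SET AUTOTRACE ON EXPLAIN

SELECT COUNT(*) FROM Moo WHERE Cow = 5;
-- With the index in place, the plan should show an INDEX (RANGE SCAN)
-- rather than TABLE ACCESS (FULL); a COUNT(*) on the indexed column can
-- be answered from the index alone, without touching the table.

SET AUTOTRACE OFF
```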
"ayrobins" <ant...@lumos.com> wrote in message news:<gurP7.80$6e6....@dca1-nnrp2.news.digex.net>...
"Daniel" <twi...@yahoo.com> wrote in message
news:e0e46f5b.01120...@posting.google.com...
I apologize for being a bit long-winded, but you can't rely solely on
cardinality when making index choices. You need to know the nature of
your data and the distribution of values in the data set.
HTH
Dan Peacock
DBA
Wolverine World Wide
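Dan's point about value distribution is exactly what Oracle histograms capture: if one of the 20 values covers half the table, an index lookup on it is a bad plan even though the same index is fine for a rare value. A sketch of gathering a histogram with DBMS_STATS, using the Moo table from Brian's test (SIZE 254 is the bucket maximum; with only 20 distinct values Oracle builds one frequency bucket per value):

```sql
-- Give the optimizer a frequency histogram on COW so it can tell
-- popular values from rare ones when costing index access.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'MOO',
    method_opt => 'FOR COLUMNS COW SIZE 254');
END;
/
```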