
question about storage and DBMS


elodie

Nov 21, 2011, 1:21:02 PM
Hi everyone,

I am working on the following question. I would appreciate it if
someone could help me with it.

You are running a DBMS on a computer which has a 3kByte disk block.
Table T in your database D has size 200MBytes. You execute a query:
“select * from T”.
How much data will be read from the drive? Assume that n*size_of_tuple
= block_size, where n is natural.

I am thinking that 200,000/3 = 66,666.667,
so reading the table is going to take 66,667 disk blocks.
That would mean 66,667 * 3 kBytes = 200,001 kB read.
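A quick sanity check of that arithmetic (assuming, as above, that 200 MB = 200,000 kB and that a scan must read whole blocks):

```python
import math

block_size_kb = 3          # disk block size in kBytes
table_size_kb = 200_000    # 200 MB, taking 1 MB = 1000 kB as in the calculation above

# A full table scan must read whole blocks, so round up.
blocks = math.ceil(table_size_kb / block_size_kb)
data_read_kb = blocks * block_size_kb

print(blocks)        # 66667 blocks read
print(data_read_kb)  # 200001 kBytes transferred
```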

Thanks for your help.

ddf

Nov 21, 2011, 1:34:04 PM
You're assuming no prior reads from that table have been executed,
leaving no blocks in the buffer cache. This may or may not be true.
At most, 66,667 disk blocks (at the O/S level) would be read;
however, Oracle doesn't allow a 3k block size (the available values
are 2k, 4k, 8k, 16k and 32k). I also doubt that any disk manufacturer
or O/S vendor would configure disks with a 3k block size, as it would
be extremely inefficient.

I can presume, then, this is homework and is designed solely for you
to practice such calculations.


David Fitzjarrell

Charles Hooper

Nov 21, 2011, 3:35:59 PM
On Nov 21, 1:34 pm, ddf <orat...@msn.com> wrote:
> You're assuming no prior reads from that table have been executed,
> leaving no blocks in the buffer cache.  This may or may not be true.
> As a maximum 66,667 disk blocks (at the O/S level) would be read
> however Oracle doesn't allow a 3k block size (the available values are
> 2k, 4k, 8k, 16k and 32k).  I also doubt that any disk manufacturer or
> O/S vendor would configure disks with a 3k block size as it would be
> extremely inefficient.
>
> I can presume, then, this is homework and is designed solely for you
> to practice such calculations.
>
> David Fitzjarrell

I could very well be wrong, but I believe that Oracle Database does
permit specifying a default block size that is not a power of 2 in
size. It would be absolutely silly to do so for the reason that you
mentioned.
http://www.freelists.org/post/oracle-l/FW-Why-Separating-Data-and-Indexes-improves-performance-is-amyth

I wonder if there are any other items that need to be considered:
* 1 MB is 1024 KB - therefore the table size is theoretically 1024 *
1024 * 200 / 3072 = 68,267 blocks. Unless, of course, the 200
MB is measured the way hard drive manufacturers measure a MB - in that
case it would be 1000 * 1000 * 200 / 3000 = 66,667 blocks.
* How many blocks are below the high water mark for the table?
* Is the table in an ASSM tablespace?
* What about the segment header block?
* What is the value of PCTFREE? Are all blocks 100% full?
* How is the 200 MB measured - is that the actual size of the data
stored in the blocks, or is that the number of blocks used?
* Were there any inserts or deletes in the table? Would the session
need to apply undo for a consistent read?
* Are statistics present on the table, is dynamic sampling enabled,
and at what level is dynamic sampling enabled?
* The point mentioned by David: the buffer cache might have an
effect. If OS caching is enabled, it could have a similar effect.
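The two block counts in the first bullet can be reproduced directly (a sketch; both conventions round up to whole blocks):

```python
import math

table_mb = 200

# Binary convention: 1 MB = 1024 * 1024 bytes, and a 3 kB block = 3072 bytes.
binary_blocks = math.ceil(table_mb * 1024 * 1024 / 3072)

# Decimal (drive-manufacturer) convention: 1 MB = 1,000,000 bytes, block = 3000 bytes.
decimal_blocks = math.ceil(table_mb * 1000 * 1000 / 3000)

print(binary_blocks)   # 68267
print(decimal_blocks)  # 66667
```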

I think that the question needs much more clarification. I probably
missed a couple of potential problems related to the question. In my
opinion this is a bad question.

Charles Hooper
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.

Mark D Powell

Nov 22, 2011, 9:22:50 AM
On Nov 21, 3:35 pm, Charles Hooper <hooperc2...@yahoo.com> wrote:
> I could very well be wrong, but I believe that Oracle Database does
> permit specifying a default block size that is not a power of 2 in
> size.
<snip>

I have seen 5K used as the Oracle block size in a demo that I am
pretty sure was on AskTom. I have not personally tried to use an odd
block size, but now that you can set the block size at the tablespace
level, testing is not as hard as when you had to build a database at
one size - provided you have some disk space and time, two things in
short supply for me. It may still be possible on newer versions. It
would be interesting to know.

IMHO -- Mark D Powell --

ddf

Jan 6, 2012, 1:44:33 PM
> IMHO -- Mark D Powell --

SQL> create tablespace fivek
2 datafile 'c:\oradba\oradata\smedley\fivek.dbf' size 100M
3 blocksize 5k;
create tablespace fivek
*
ERROR at line 1:
ORA-25157: Specified block size 5120 is not valid

SQL> alter system set db_5k_cache_size = 100M;
alter system set db_5k_cache_size = 100M
*
ERROR at line 1:
ORA-02065: illegal option for ALTER SYSTEM

SQL>

This is in 11.2.0.3 -- with the current parameters that need to be
set, I can't see how a 'non-standard' blocksize can be specified.


David Fitzjarrell

joel garry

Jan 9, 2012, 5:07:18 PM
The documentation supports you:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/tspaces003.htm#ADMIN11373
and the links there.
But that doesn't rule out an odd database block size.
http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams049.htm#REFRN10031
notes that it has to be a multiple of the physical block size, which I
guess means it is defined at the platform level - or how else would
Oracle know? All it can do is ask the OS for data, and maybe keep
track of what the OS gives back. I also don't have 11.2 to test.

jg
--
@home.com is bogus.
http://www.tmcnet.com/topics/articles/252347-key-sales-vp-hp-jumps-oracle-could-others.htm