
Problem with tablespace file size limit


jstuglik - Aug 20, 2009, 5:23:38 AM
Hi.
I have a problem with a tablespace file size limit. I use Oracle 11g
Standard Edition One on Fedora 8 Linux (x86_64).
The Linux file system is ext3 with 4096-byte blocks (which gives a 2 TiB
maximum file size). The oracle user (which the RDBMS runs as) has no
file size limit set (unlimited). The tablespace is also set to extend
without limit, and its block size is 8192 bytes. Nevertheless I ran
into a problem extending the tablespace datafile past 32GB.
The exact error was:
The exact error was:

ORA-01654: unable to extend index XXX.SYS_C0011327 by 8192 in
tablespace XXX

The index is the primary key index on one of the tables. The tablespace
datafile is now 34359730176 bytes, so extending it by even one more
8192-byte block would give exactly 32 GiB of data.
Just to be sure, I also checked whether the oracle user can create a
file larger than 32GB, and it succeeded (I used cat /dev/zero > aa).
A query against user_tablespaces shows that the tablespace max size is
2147483645 blocks at an 8192-byte block size, which is way more than
32GB. It is also a standard smallfile tablespace.
Oh, before you ask: there is plenty of free space on disk.
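
For reference, the limits can be checked with queries along these lines
(a sketch against the standard dictionary views; 'XXX' stands for the
real tablespace name):

-- Current size and autoextend settings of the datafile:
SELECT file_name, bytes, autoextensible, maxbytes
FROM   dba_data_files
WHERE  tablespace_name = 'XXX';

-- Block size, maximum size, and smallfile/bigfile flag:
SELECT tablespace_name, block_size, max_size, bigfile
FROM   user_tablespaces;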

Do you have any ideas?
Please help me ASAP: right now the system is running only because I
dropped one of the less important indexes, and it will fill up again
in a short time.

sybrandb - Aug 20, 2009, 5:56:03 AM

The maximum size of a smallfile tablespace datafile at an 8k block size
is 32G, as per the Oracle Reference Manual.
When sizing datafiles, one aspect to consider is recoverability.
Personally I wouldn't extend a datafile beyond 32G anyway, simply
because a restore would likely take too long.
Of course you can add another datafile
(ALTER TABLESPACE ... ADD DATAFILE '....' would do that trick),
so your question is a bit strange, as is labeling it ASAP: this is a
forum of volunteers, and no one is obliged to answer.
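
For what it's worth, the 32G figure falls out of the per-datafile block
limit; a sketch of the arithmetic and of the fix, with a made-up file
path and sizes:

-- A smallfile datafile holds at most 4194303 blocks:
-- 4194303 blocks * 8192 bytes/block = 34359730176 bytes, just under 32 GiB.
-- Your file is already at that cap, so it cannot grow by another block.

-- Adding a second datafile (path and sizes are placeholders):
ALTER TABLESPACE xxx
  ADD DATAFILE '/u01/oradata/db/xxx02.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 256M MAXSIZE 32767M;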


----------
Sybrand Bakker
Senior Oracle DBA

jstuglik - Aug 20, 2009, 6:34:51 AM
Thank you very much for your help.
It may seem strange, I agree, that I don't want to add another
datafile, but there are reasons for that, related to supporting the
application this DB serves. I think I will move some tables containing
various statistics to another tablespace stored in another file.
Those are the troublemakers - they are quite big and will keep growing
over time.
I know this is a volunteer forum and I'm very grateful for your quick
reply. I only marked it ASAP because I'm on the clock here and I hoped
for quick help.

Noons - Aug 20, 2009, 6:53:28 AM
jstuglik wrote, on my timestamp of 20/08/2009 8:34 PM:
> Thank you very much for your help.
> It may seem strange, I agree, that I don't want to add another
> datafile, but there are reasons for that, related to supporting the
> application this DB serves. I think I will move some tables containing
> various statistics to another tablespace stored in another file.
> Those are the troublemakers - they are quite big and will keep growing
> over time.

Hang on a tick: you don't want to add another file, but it's OK to add
more tablespaces, each with its own files?

I strongly suggest you move away from that train of thought: adding a datafile
to a tablespace is the accepted and correct way to increase the capacity of a
tablespace. Use it: it's there, it's standard, it won't change.

Besides, if your concern is growing size, then you most definitely need
to think in terms of adding files rather than tablespaces: even the
dedicated tablespaces will eventually need more files.

Unless of course you want to package everything into a self-contained
set and then, later on after install, as things grow, let DBAs add
files as needed? In that case, yes: package the large-growth tables
into their own tablespaces. The DBAs will love your package, as that is
the way they like to isolate and contain large-growth tables!

jstuglik - Aug 20, 2009, 7:24:52 AM

Actually I do want to package everything into a self-contained set, as
you said, but as things grow I will archive the data and then delete
it. The problem tables hold statistics, and I need the last two years
of them in the DB, so I will archive the older rows once every 3
months. For now those tables are packed into the same tablespace (and
datafile, of course) as the rest of the data, which also grows over
time, but not as fast - it should be fine with 32GB of space for a
long time. If I move the big tables into a new tablespace with a new
datafile, one file should be enough to store two years' worth of them,
and if it's not I will just add another datafile then.
I also hope this will improve the performance of data manipulation on
the tables left in the old tablespace (and datafile).

I will do a CTAS on those tables and then drop the old ones - do you
have any hints on making this as fast as possible? There are 5 tables
to move; the biggest one contains about 50 million rows and is about
4GB on disk. I'm asking because I'm afraid this could take a long
time, and I will have only a few-hour window to do it.

Noons - Aug 20, 2009, 8:36:50 AM
jstuglik wrote, on my timestamp of 20/08/2009 9:24 PM:

> If I move the big tables into a new tablespace with a new datafile,
> one file should be enough to store two years' worth of them, and if
> it's not I will just add another datafile then.

Sounds good in principle.

> I also hope this will improve the performance of data manipulation on
> the tables left in the old tablespace (and datafile).

It always helps to separate tables by mode of access rather than purely
by size. Highly volatile tables should be in a dedicated tablespace,
read-mostly ones in another, and so on. That way a DBA can apportion
things to the appropriate devices, if desirable or needed.

> I will do a CTAS on those tables and then drop the old ones - do you
> have any hints on making this as fast as possible? There are 5 tables
> to move; the biggest one contains about 50 million rows and is about
> 4GB on disk. I'm asking because I'm afraid this could take a long
> time, and I will have only a few-hour window to do it.

It's all highly contingent on the target hardware. I wouldn't worry too
much about a 50M-row table in the system I work with - we have tables
with up to 2 billion rows: *those* are a worry! But in my prior job,
50M rows would have been a concern on that particular hardware: it's
all relative.

I don't want to over-complicate things, but the ideal for your case
would be partitioning: it's then dirt easy to just drop the partitions
holding the older data and add partitions for new data. But
partitioning is an extra-cost Oracle option, and that will make your
product more expensive as well as complicating your code. If you can
live with CTAS performance on the target hardware, then it's OK.
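
A minimal CTAS sketch (all names are placeholders; NOLOGGING cuts redo
generation during the copy, but take a backup afterwards since the copy
is then not recoverable from the redo logs):

-- Copy the table into the new tablespace, generating minimal redo:
CREATE TABLE stats_new
  TABLESPACE stats_ts
  NOLOGGING
  AS SELECT * FROM stats;

-- Then recreate the indexes and constraints on stats_new,
-- drop the old table, and rename:
DROP TABLE stats;
ALTER TABLE stats_new RENAME TO stats;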


Ideally with CTAS you'd be moving the table to a new tablespace. Either
way, make sure you use a locally managed, uniform-size tablespace
(EXTENT MANAGEMENT LOCAL UNIFORM SIZE [xxx]M - look it up). Pick a
value for "[xxx]M" that keeps even the largest tables at < 10000
extents. That avoids the creation of "holes" that could make it
impossible to create the new, trimmed table even though there is
enough total free space left.
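
A sketch of such a tablespace (file path and sizes are made up for
illustration):

CREATE TABLESPACE stats_ts
  DATAFILE '/u01/oradata/db/stats_ts01.dbf' SIZE 8G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 8M;

-- 8M uniform extents keep even a 4GB table at about 512 extents,
-- comfortably under the 10000-extent guideline.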

You may also want to have a look at the DBMS_REDEFINITION PL/SQL
package. It helps a lot with the task of moving/reloading tables after
massive deletes while keeping track of dependent objects. I think it
needs Enterprise Edition, as it uses materialized views to keep the
table fully accessible online while it is being redefined.
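
The outline of an online redefinition, for reference (schema, table,
and interim-table names are placeholders; the interim table must be
created first, in the target tablespace):

DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.can_redef_table('XXX', 'STATS');
  DBMS_REDEFINITION.start_redef_table('XXX', 'STATS', 'STATS_INTERIM');
  -- Clone indexes, triggers, constraints, and grants onto the interim table:
  DBMS_REDEFINITION.copy_table_dependents('XXX', 'STATS', 'STATS_INTERIM',
                                          num_errors => num_errors);
  -- Swap the tables, holding a lock only briefly:
  DBMS_REDEFINITION.finish_redef_table('XXX', 'STATS', 'STATS_INTERIM');
END;
/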

The other option is to delete all rows, then just move the table to
another tablespace at a convenient time: deleted space is not moved
(except in very unusual cases, like half-filled blocks), and you need
to rebuild the indexes and recalculate the stats afterwards. But it
does not invalidate views or other dependents. Have a try.


Not an easy task or decision. It all depends very much on the sizes,
hardware capacity, and so on that you're dealing with.
I don't envy the work ahead of you...

jstuglik - Aug 20, 2009, 8:52:31 AM
On 20 Aug, 14:36, Noons <wizofo...@yahoo.com.au> wrote:
> jstuglik wrote, on my timestamp of 20/08/2009 9:24 PM:
>
> > If I move the big tables into a new tablespace with a new datafile,
> > one file should be enough to store two years' worth of them, and if
> > it's not I will just add another datafile then.
>
> Sounds good in principle.
>
> > I also hope this will improve the performance of data manipulation on
> > the tables left in the old tablespace (and datafile).
>
> It always helps to separate tables by mode of access rather than purely
> by size. Highly volatile tables should be in a dedicated tablespace,
> read-mostly ones in another, and so on. That way a DBA can apportion
> things to the appropriate devices, if desirable or needed.

I think in this case it's OK, because the tables I'm going to move are
all very similar in their mode of access - very frequent inserts and
no deletes (until one of the archiving sessions). The rest of the
tables are all similar too - many selects, updates and inserts but
very seldom deletes (in general it's an OLTP db, but in some respects
it is treated like a data warehouse to improve the performance of some
specific tasks).

>
> > I will do a CTAS on those tables and then drop the old ones - do you
> > have any hints on making this as fast as possible? There are 5 tables
> > to move; the biggest one contains about 50 million rows and is about
> > 4GB on disk. I'm asking because I'm afraid this could take a long
> > time, and I will have only a few-hour window to do it.
>
> It's all highly contingent on the target hardware. I wouldn't worry too
> much about a 50M-row table in the system I work with - we have tables
> with up to 2 billion rows: *those* are a worry! But in my prior job,
> 50M rows would have been a concern on that particular hardware: it's
> all relative.

We migrated to a new server a few months ago and I think it should be
OK. It's an HP ProLiant BL460c G1 with a Hitachi drive array connected
via FC.

>
> I don't want to over-complicate things, but the ideal for your case
> would be partitioning: it's then dirt easy to just drop the partitions
> holding the older data and add partitions for new data. But
> partitioning is an extra-cost Oracle option, and that will make your
> product more expensive as well as complicating your code. If you can
> live with CTAS performance on the target hardware, then it's OK.

Just as you say: we have purchased a Standard Edition One per-processor
license, and it's relatively cheap. Partitioning is, I think, an
Enterprise Edition feature, so there is no way we can afford it right
now. We could use a lot of the other features available in Enterprise,
but instead we have to struggle to make our app better using what is
available in Standard Edition.

>
> Ideally with CTAS you'd be moving the table to a new tablespace. Either
> way, make sure you use a locally managed, uniform-size tablespace
> (EXTENT MANAGEMENT LOCAL UNIFORM SIZE [xxx]M - look it up). Pick a
> value for "[xxx]M" that keeps even the largest tables at < 10000
> extents. That avoids the creation of "holes" that could make it
> impossible to create the new, trimmed table even though there is
> enough total free space left.

I will definitely look it up. It sounds like a good solution for
archiving purposes too.

>
> You may also want to have a look at the DBMS_REDEFINITION PL/SQL
> package. It helps a lot with the task of moving/reloading tables after
> massive deletes while keeping track of dependent objects. I think it
> needs Enterprise Edition, as it uses materialized views to keep the
> table fully accessible online while it is being redefined.
>
> The other option is to delete all rows, then just move the table to
> another tablespace at a convenient time: deleted space is not moved
> (except in very unusual cases, like half-filled blocks), and you need
> to rebuild the indexes and recalculate the stats afterwards. But it
> does not invalidate views or other dependents. Have a try.
>
> Not an easy task or decision. It all depends very much on the sizes,
> hardware capacity, and so on that you're dealing with.
> I don't envy the work ahead of you...

Thanks a lot for all your help. It seems I'll have to dig deep into
this problem to get it right :-)

Noons - Aug 20, 2009, 9:18:06 AM
jstuglik wrote, on my timestamp of 20/08/2009 10:52 PM:

>> The other option is to delete all rows, then just move the table to
>> another tablespace at a convenient time: deleted space is not moved
>> (except in very unusual cases, like half-filled blocks), and you need
>> to rebuild the indexes and recalculate the stats afterwards. But it
>> does not invalidate views or other dependents. Have a try.
>>
>> Not an easy task or decision. It all depends very much on the sizes,
>> hardware capacity, and so on that you're dealing with.
>> I don't envy the work ahead of you...
>
> Thanks a lot for all your help. It seems I'll have to dig deep into
> this problem to get it right :-)
>

Apologies: I forgot to say what I meant by "just move the table".
Use "ALTER TABLE [tname] MOVE TABLESPACE [tsname]". Again, look it up:
there are some subtle variations. Good luck.
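
A sketch of the whole sequence (table, index, and tablespace names are
placeholders):

ALTER TABLE stats MOVE TABLESPACE stats_ts;

-- The move marks the indexes on the table UNUSABLE, so rebuild them:
ALTER INDEX stats_pk REBUILD;

-- ...and refresh the optimizer statistics:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'XXX', tabname => 'STATS')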

Mark D Powell - Aug 21, 2009, 9:51:24 AM

If you move the table, remember that you need to rebuild all the
indexes associated with the table.

How long it takes to move 4G of data depends on your hardware. It can
be anywhere from 3 minutes to half an hour. The index rebuilds will
probably take longer than moving the table.

I see nothing wrong with having separate tablespaces for large tables.
It would potentially allow you to size each tablespace about the same,
so your backup and recovery time for any tablespace would be about the
same. I also see nothing wrong with having one single tablespace with
two or three 16G - 32G datafiles where everything is stored - that is,
the database has one user tablespace.

It is just a matter of how you intend to manage your database. If you
use RAC you may want multiple tablespaces to use in relation to
instance partitioning of the application. You may not. The decision is
yours to make based on the application, the user load, the resources
available to handle that load, and future growth plans.

I do not see that there is one right way to organize your tablespaces.
I believe in fitting the design to the environment.

IMHO -- Mark D Powell --

John Hurley - Aug 21, 2009, 10:07:20 AM
On Aug 20, 5:23 am, jstuglik <jakub.stug...@gmail.com> wrote:

snip

Why not use a bigfile tablespace if you don't want to add another
datafile?

It is a rather peculiar idea to refuse to even consider adding a
second datafile to an existing tablespace.
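
A sketch, with a made-up path and size (a bigfile tablespace has a
single datafile that can grow far beyond the 32G smallfile limit):

CREATE BIGFILE TABLESPACE xxx_big
  DATAFILE '/u01/oradata/db/xxx_big01.dbf'
  SIZE 64G AUTOEXTEND ON;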

My systems now use ASM along with bigfile tablespaces in an 11g Linux
64-bit environment, and it is pretty smooth sailing. You may want to
look at ASM eventually and get away from a cooked file system
approach.

Noons - Aug 23, 2009, 6:52:49 AM
Mark D Powell wrote, on my timestamp of 21/08/2009 11:51 PM:

>>>> The other option is to delete all rows, then just move the table to
>>>> another tablespace at a convenient time: deleted space is not moved
>>>> (except in very unusual cases, like half-filled blocks), and you need
>>>> to rebuild the indexes and recalculate the stats afterwards.

> If you move the table, remember that you need to rebuild all the
> indexes associated with the table.

...and recalc the stats as well. Like I said, right above.
