
Tablespace page limit


Neil Truby

Oct 23, 2009, 4:44:46 AM
I had an email from the IIUG last week asking me to rank various features
that could be incorporated into the next release of IDS in terms of very
useful, mildly useful, not useful to me, etc.

All good stuff.

However, the Number One issue I'd like to see addressed, which didn't appear
on the questionnaire, is the limit of 16 million pages in a tablespace.
This really is becoming more and more of an issue as time goes by, and on
established databases we're having to take long (several-hour) outages to
fragment tables or - because there's a general perception that fragmented
indexes can give unpredictable results - to put indexes into 4K page-size
dbspaces created especially to get around this restriction.
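
For anyone who hasn't had to do this, the two workarounds look roughly like
the sketch below. All object and dbspace names are made up, the syntax is
quoted from memory, and the larger-page dbspace has to be created first with
the onspaces utility (something like "onspaces -c -d dbs4k -k 4 ...").

    -- 1) Put an index into a dbspace created with a 4K page size; the limit
    --    is counted in pages, not bytes, so a bigger page roughly doubles
    --    the usable size of the index partition:
    CREATE INDEX ix_audit_ts ON audit_log (created_ts) IN dbs4k;

    -- 2) Or spread the table over several dbspaces so that no single
    --    partition has to hold all of the pages.  ALTER FRAGMENT ... INIT
    --    rewrites the whole table, which is exactly the multi-hour outage
    --    mentioned above:
    ALTER FRAGMENT ON TABLE audit_log
        INIT FRAGMENT BY ROUND ROBIN IN dbs1, dbs2, dbs3;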

Is it just me, or is this limitation becoming increasingly troublesome for
others?

Fernando Nunes

Oct 23, 2009, 5:45:58 PM

I have never hit this limit. But I remember several posts from you
complaining about this, and you probably "get out" more than I, so I'm
inclined to accept that this is a real limitation.
But I'd put things into another perspective:

- Do you really want to have tables of more than 32 GB without any kind
of fragmentation? (See the quick arithmetic below.)
- Do you really want to have those tables on 2K pages?
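
For what it's worth, that 32 GB falls straight out of the page-count limit:
2^24 pages at 2 KB each is 32 GB, and bigger pages only buy a proportional
amount of headroom. A throwaway query shows the ceilings (systables with
tabid 1 is just a convenient one-row source; any single-row source would do):

    -- Maximum tablespace size in GB for 2K, 4K and 16K page sizes, given
    -- the 16,777,215-page-per-partition limit.
    SELECT 16777215 * 2  / 1048576 AS max_gb_2k,
           16777215 * 4  / 1048576 AS max_gb_4k,
           16777215 * 16 / 1048576 AS max_gb_16k
    FROM systables WHERE tabid = 1;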

I'll not dive into the technical details (I would have to dig them up),
but my point is that we have workarounds for this limit that bring other
advantages. Changing it would probably be a complex task (I wouldn't
mind seeing things changed at that level, but for other reasons), and I
would not expect it to become a priority...

Having said this, I can really understand the point that some of the
solutions for this imply downtime, or at least some restrictions, to
implement...

Regards.

dba....@gmail.com

Oct 23, 2009, 5:48:43 PM
>However, the Number One issue I'd like to see addressed, which didn't appear
>on the questionnaire, is the limit of 16 million pages in a tablespace.
>This really is becoming more and more of an issue as time goes by, and on
>established databases we're having to take long (several-hour) outages to
>fragment tables or - because there's a general perception that fragmented
>indexes can give unpredictable results - to put indexes into 4K page-size
>dbspaces created especially to get around this restriction.
>
>Is it just me, or is this limitation becoming increasingly troublesome for
>others?

Count me in.

DB2, until version 9.0, had the same limitation, and we were sick and
tired of moving to bigger-page-size tablespaces: we started with 4K and
ended up with 32K. Prior to version 9.0, DB2 did not even have table-level
partitioning, and the only crude solution was to split the table into
multiple tables and use UNION ALL.
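
For anyone who never lived through that, the crude solution amounted to
something like the view below (table names invented for illustration).
Inserts still had to be routed to the right underlying table by the
application, which is part of what made it crude:

    -- Split the data across several physical tables, then glue them back
    -- together behind a view so queries can still use a single name:
    CREATE VIEW audit_all AS
        SELECT * FROM audit_2007
        UNION ALL
        SELECT * FROM audit_2008
        UNION ALL
        SELECT * FROM audit_2009;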

Ian Michael Gumby

Oct 25, 2009, 5:53:36 PM
to dba....@gmail.com, inform...@iiug.org
To play devil's advocate, why do you think that IBM really doesn't want to go beyond 16 million pages in a tablespace?
Can a table extend into multiple tablespaces?

I would think there is other technology I would want to see implemented
before this, something with a more compelling argument for implementation
and a faster time to ROI.

But that's just me. ;-)

-G



dba....@gmail.com

Oct 25, 2009, 6:01:42 PM
In article <mailman.211.12565076...@iiug.org>, Ian Michael
Gumby says...
>
>To play devil's advocate, why do you think that IBM really doesn't want
>to go beyond 16 million pages in a tablespace?
>Can a table extend into multiple tablespaces?


Did you ever consider hiring a translator for your drivel?

Informix, since version 10, allows a table to be partitioned within the
same dbspace. So effectively a table can now be as big as you like, even
when it is in a single dbspace.
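
A sketch of what that looks like, with made-up names and ranges (syntax
quoted from memory; the PARTITION keyword is what lets two fragments share
a dbspace since 10.00):

    -- Two partitions of one table in the same dbspace; each partition has
    -- its own partnum and therefore its own 16,777,215-page ceiling.
    CREATE TABLE audit_log (
        id       INTEGER,
        payload  CHAR(200)
    )
    FRAGMENT BY EXPRESSION
        PARTITION p_low  (id <  50000000) IN dbs1,
        PARTITION p_high (id >= 50000000) IN dbs1;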

However, I would still like a table to be able to extend beyond 16.7
million pages, primarily because that would be the most basic feature and
could be used in the Express edition as well.


Jacob Salomon

Oct 26, 2009, 12:48:44 AM

Note: I do not have an Informix server handy to peek at right now so I
can't confirm everything I am saying - corrections welcome.

Neil,

I have looked over the responses you got so far and none of them address
the issue of converting a server to tolerate a larger page-count limit.

What sets this limit now? The extent list in the partition header,
wherein each entry - the location and size information for each extent
of the tablespace - has only so many bytes set aside for it. (Why
only 24 bits for each component? Perhaps the high-order 8 bits of the
location and size words are used for status flags.)

That said, to convert a server to use our hypothetical 32-bit limitation
(4-billion or so, if the value is unsigned, as it should be) would
require converting all the partition-header pages (remember, we can now
have more than one partition header page per tablespace) to a new format
with 32-bit logical position plus a 32-bit size for each extent (well,
maybe we could keep the 24-bit extent size) plus the ubiquitous status
flags.

By judiciously copying and converting these pages to the new layout, the
worst consequence is requiring more pages for a very fractured
tablespace. This could be done one table at a time, much like the new
row-compression feature of 11.5.

I started writing the above response to point out how difficult it would
be to perform such a conversion, and that such a feature could therefore
only be applied to new tables. I very quickly realized that a conversion
would be no big deal, certainly easier than the compression feature.

All that said: Is there sufficient green-grease - a monetary reason -
for IBM to extend themselves for this? How many customers really need
it? Had I seen this question in the survey, my response would have been
"Nice to have". (My biggest database ever was ~ 5TB; your experience
may differ.)

That's my $0.02 (US) on the issue, FWIW.

- Jacob

Ian Michael Gumby

Oct 26, 2009, 2:34:30 AM
to dba....@gmail.com, inform...@iiug.org


> From: dba....@gmail.com
> Subject: Re: RE: Tablespace page limit
> Date: Sun, 25 Oct 2009 15:01:42 -0700
> To: inform...@iiug.org
>
> In article <mailman.211.12565076...@iiug.org>, Ian Michael
> Gumby says...
> >
> >
> >To play devil's advocate, why do you think that IBM really doesn't want
> >to go beyond 16 million pages in a tablespace?
> >Can a table extend into multiple tablespaces?
>
>
> Did you ever consider hiring a translator for your drivel?
>
No...
I asked a simple question because it was required.
Your answer just proved why this isn't as big an issue as Neil griped about.
It also goes to show that you've got intelligence only a bit north of an
eggplant, because the question was supposed to be rhetorical: not why IBM
wouldn't want to do this, but whether a table could extend past a single
tablespace.

If you're going to ask for a feature, you really need to think about both
the pros and cons of the request.

If there is no true limitation, then the priority of the feature request drops dramatically.


> Informix, since version 10, allows a table to be partitioned within the
> same dbspace. So effectively a table can now be as big as you like, even
> when it is in a single dbspace.
>
> However, I would still like a table to be able to extend beyond 16.7
> million pages, primarily because that would be the most basic feature and
> could be used in the Express edition as well.
>

TANSTAAFL.
You do know what that means. Right?

And your logic for this feature is to enhance Express so that you don't need to buy Enterprise. OK, fair enough. But if you were IBM, what do you think they would say to that? ;-)

But hey! What do I know?
It's not like I try to look at both sides of an issue before griping.

-G




Richard Harnden

Oct 26, 2009, 7:58:10 AM

The rowid is 0xLLLLLLSS: 24 bits for the logical page (so a maximum of
16 million pages per partnum) and 8 bits for the slot (so a maximum of
255 rows per page).
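
In other words, a rowid is just (logical_page * 256) + slot, which is where
both ceilings come from. A quick way to see it on a non-fragmented table
(fragmented tables only expose a rowid if created WITH ROWIDS); the table
name here is hypothetical:

    -- Split a rowid into its logical-page and slot components.
    SELECT rowid,
           TRUNC(rowid / 256) AS logical_page,   -- high 24 bits
           MOD(rowid, 256)    AS slot            -- low 8 bits
    FROM   audit_log;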

>
> That said, to convert a server to use our hypothetical 32-bit limitation
> (4-billion or so, if the value is unsigned, as it should be) would
> require converting all the partition-header pages (remember, we can now
> have more than one partition header page per tablespace) to a new format
> with 32-bit logical position plus a 32-bit size for each extent (well,
> maybe we could keep the 24-bit extent size) plus the ubiquitous status
> flags.
>
> By judiciously copying and converting these pages to the new layout, the
> worst consequence is requiring more pages for a very fractured
> tablespace.  This could be done one table at a time, much like the new
> row-compression feature of 11.5.
>
> I started writing the above response to point out how difficult it would
> be to perform such a conversion, that such a feature could only be
> applied to new tables.  I very quickly realized that a conversion would
> be no big deal, certainly easier than the compression feature.
>
> All that said:  Is there sufficient green-grease - a monetary reason -
> for IBM to extend themselves for this?  How many customers really need
> it?  Had I seen this question in the survey, my response would have been
> "Nice to have".  (My biggest database ever was ~ 5TB; your experience
> may differ.)
>
> That's my $0.02 (US) on the issue, FWIW.
>

> - Jacob

Neil Truby

Oct 28, 2009, 6:06:59 AM
"Jacob Salomon" <spamn...@yahoo.com> wrote in message
news:feednSo_r86zt3jX...@earthlink.com...
> Neil Truby wrote:

> All that said: Is there sufficient green-grease - a monetary reason - for
> IBM to extend themselves for this? How many customers really need it?
> Had I seen this question in the survey, my response would have been "Nice
> to have". (My biggest database ever was ~ 5TB; your experience may
> differ.)

We work with applications that have audit tables that grow like Topsy and
do eventually hit the limit. You might argue that the customer or developer
should be archiving this data, but at present it isn't necessary. The
*incremental* cost of 32 GB of even Tier One storage is not so very high and
barely worth worrying about, so it is the IBM Informix limitation that is
forcing the customers to address the issue. And addressing it almost
always involves intrusive downtime.
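
As an aside, the tables creeping up on the ceiling are easy enough to spot
from sysmaster; a rough sketch (column names quoted from memory, threshold
arbitrary):

    -- Partitions with more than ~8 million allocated pages, i.e. past the
    -- halfway mark towards the 16,777,215-page limit.
    SELECT n.dbsname, n.tabname, p.nptotal AS allocated_pages
    FROM   sysmaster:sysptnhdr p, sysmaster:systabnames n
    WHERE  p.partnum = n.partnum
    AND    p.nptotal > 8000000
    ORDER  BY 3 DESC;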

So what would the financial justification be for IBM to increase the page
limit? Just a defensive one, I think: in my experience the sectors that
continue to thrive are telco and gaming, which demand exceptionally high
levels of availability. At my customers even an hour's planned downtime is
quite visible to senior management.

Of course R&D resource is finite: my guess is that IBM is choosing to devote
at least some of it to new, big-ticket, chargeable options like
Compression. Time will tell, I suppose, whether there are sufficient large
customers (and you'd have to be a large customer with deep pockets) to take
up these options.

Superboer

Nov 2, 2009, 5:33:31 AM

--> I have never hit this limit. But I remember several posts from you
--> complaining about this, and you probably "get out" more than I, so I'm
--> inclined to accept that this is a real limitation.

My 2 cents....

I hate to say this, but these limits were hit at least a decade ago by a
number of customers, and yes, one can do a lot to avoid hitting them by
using fragmentation (which may be beneficial for performance too), and
yes, it is a whole lot of work to change them.

Superboer.

On 28 Oct, 11:06, "Neil Truby" <neil.tr...@ardenta.com> wrote:
> "Jacob Salomon" <spamntr...@yahoo.com> wrote in message
