Yes.
--
Cheers,
Obnoxio The Clown
http://obotheclown.blogspot.com
Thanks for the reply.
That's quite a disappointment.
I imagine that this is quite a fundamental thing to address, or it would
have been already, but I think Informix Development is going to have to
Grasp the Nettle on this. I find it becoming an increasingly important
factor in physical database design and administration. Sure, you can
fragment around it, but this doesn't chime well with IBM's messaging about
Informix ("Set it and Forget it", etc) ...
99% of DBAs /do/ just set it and forget it. Don't blame Informix for
your ineptitude or laziness.
Er, well, that's a bit harsh. I'm not blaming anyone for anything. I'm
saying that a limitation, which once probably seemed as unlikely ever to be
relevant as a 2g chunk size, is becoming more and more of a consideration as
time goes by.
And that, although it's fairly simple for an Informix God such as myself to
workaround, the same is not necessarily true of others; that having to use
fragmentation not for its intended purpose but to get around an increasingly
wearisome product limitation is not consistent with the "Low Maintenance"
marketing message; and that it really needs to be addressed.
So, Bollocks to you and your uninformed opinion! (Which of course I
fully respect as equally valid to mine.)
Quick straw poll at http://obotheclown.blogspot.com/ -- have you ever
run into this ludicrously low limit? I'm curious to see what people have
to say.
How many rows does it translate to? What is the page size?
With tables of this size, I think fragmentation can become a blessing for a lot
of reasons...
This change would not be trivial to implement. I hope that if one day R&D
decides to do it, they take the opportunity to change other things...
Regards.
--
Fernando Nunes
Portugal
http://informix-technology.blogspot.com
My email works... but I don't check it frequently...
You would have thought so. But then setting it and forgetting it is
obviously *much* more important.
I don't think we're "there yet"... I mean, "set it and forget it" for systems
with these sizes (unless you set it with fragmentation and then forget it ;).
But Neil has a point. We want to get there. We have other limits that time
showed were too small. I don't think this is the same case, but it would be
nice not to have to worry about it before time proves me wrong...
Maybe there's something in the roadmap that points in that direction... For the
time being I believe systems of this size will have a DBA... Maybe not a
full-time one, but at least in the planning phase.
> And that, although it's fairly simple for an Informix God such as myself to
> workaround, the same is not necessarily true of others; that having to use
> fragmentation not for its intended purpose but to get around an increasingly
> wearisome product limitation is not consistent with the "Low Maintenance"
> marketing message; and that it really needs to be addressed.
>
> So, Bollocks to you and your uninformed opinion! (Which of course though I
> fully respect as equally valid to mine).
Neil,
When you talk about a 'set it and forget it' database, how large are
you really talking about?
I mean, let's take an example... a POS system sitting in a Wally*Mart.
You only have so many SKU items.
You have only so many transactions, which you can offload to corporate
past 120 days if you like and then purge them from the system.
(For RMAs you would want to track them in the same store, or else query
the other store (peer to peer) or back to HQ.)
So in the store, IDS works great because you're not really going to
have to deal with a 16GB table space limit.
While I chose a POS system, you can also choose an embedded
application. Order entry for a mom & pop shop. Warehouse Distribution.
Lots of applications don't even touch that limit.
For those that do, we have guys like you and Art. ;-)
> How many rows does it translate to? What is the page size?
> With tables of this size, I think fragmentation can become a blessing for
> a lot of reasons...
Many millions. Tens of millions. 2k. But I've already had it once this
year on an AIX (4k page) system too ...
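To put "many millions" in perspective, here is a rough sketch of how the per-fragment page limit (the 16,775,134-page figure quoted elsewhere in the thread) maps to row counts. The row sizes below are made-up examples, and page headers / slot-table overhead are ignored, so real capacity is somewhat lower:

```python
# Rough rows-per-fragment estimate under the ~16.7M-pages-per-fragment cap.
# Row sizes are hypothetical; actual capacity is lower because page headers
# and slot tables consume part of each page.
MAX_PAGES = 16_775_134   # per-fragment page limit
PAGE_BYTES = 2 * 1024    # 2K pages, as on the system described above

for row_bytes in (100, 500, 1000):
    rows_per_page = PAGE_BYTES // row_bytes
    total_rows = MAX_PAGES * rows_per_page
    print(f"{row_bytes:4d}-byte rows: ~{total_rows:,} rows per fragment")
```

With rows around 1KB this lands in the tens of millions of rows, which is consistent with the figures quoted above.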
> This change would not be trivial to implement. I hope that if one day R&D
> decides to do it, they take the opportunity to change other things...
I'm surprised that you don't think it's important. Less surprised at
Obnoxio - I doubt if he ever really administers any proper systems, and his
kitchen inventory system running on IDS 11 hasn't hit any limits yet ;-)
To me, it's more of an irritation than a serious issue - just like 2g chunks
actually!
But then I'm not advertising Informix as "Set and Forget" (a strapline
which, as a database services company, we don't really like much anyway!).
rgds
Neil
Those words will come back to haunt you.
Guys -
I agree with the "most honorable Mr. Truby" on this one. We see
clients all the time hitting one of the 3 limits (pages, size, rows)
that IDS still enforces. And no - it's not a trivial fix,
regardless of what the limit might be. And IF the skill level is low-
medium, then they typically shy away from it, even with a good
explanation. We have a number of clients that choose to purge/delete
rows when getting there or close, versus implementing a frag'd
solution. We once discussed "automatically fragmenting" a partition if
it was hitting one of the limits, but the feature didn't make it. That
feature would have supported the "set it and forget it" much better
for sure, but I believe the inherent ramifications outweigh the
possibility (and issues that come with) of "set it and forget it." I
also agree that advertising "set it and forget it" is simply
misleading. WAY too many variables to say that it's universally achievable,
or even partially true. In many clients' cases, that's EXACTLY
what happens, but in many (obviously) it is impossible (again, due
to many variables, including education, staffing, etc...).
HTH -
Mark Scranton
Xtivia Inc.
Don't get me wrong... I think it is important. Personally I never hit this
limit. I know of customer(s?) who did.
I just think that when this happens, we're talking about systems that by no
means are the "set and forget" kind... For several reasons.
The limit is 16,775,134 pages per fragment. For a tablespace with a 2K
page the limit is 32GB; for a tablespace with an 8K page the limit is
128GB. Is it enough?
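A quick sanity check of those figures (using the page limit quoted above, and binary GB):

```python
# Convert the per-fragment page limit into a size cap for each page size.
MAX_PAGES = 16_775_134  # pages per fragment

for page_kb in (2, 4, 8, 16):
    size_gb = MAX_PAGES * page_kb / (1024 * 1024)  # KB -> GB (binary)
    print(f"{page_kb}K pages -> ~{size_gb:.0f} GB per fragment")
```

The 2K and 8K cases round to 32GB and 128GB respectively, matching the figures above.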
Well no, obviously it isn't, or he wouldn't have started the thread!
> The limit is 16,775,134 pages per fragment. For a tablespace with a 2K
> page the limit is 32GB; for a tablespace with an 8K page the limit is
> 128GB. Is it enough?
Here are the largest tables in a Lawson customer's database:
dbsname   tabname      num_of_extents   total_size
live      txtaxtran    60               15453132
live      coline       17               14882316
live      apinvoice    31               12622244
live      oeinvcline   28               12199439
live      aroitems     56               10121879
As you can see, several of these are approaching the 16M page limit, and
several even larger tables have already been fragmented.
Wow, that's a blast from the past. Didn't know that there were any
other Lawson / Informix systems out there . . .
>
> dbsname tabname num_of_extents total_size
>
> live txtaxtran 60 15453132
> live coline 17 14882316
> live apinvoice 31 12622244
> live oeinvcline 28 12199439
> live aroitems 56 10121879
>
> As you can see, several of these are approaching the 16M page limit, and
> several even larger tables have already been fragmented.
Can you detach indexes? I don't believe that detached indexes count
toward total size, and I know that Lawson has some pretty wide indexes
(especially in AP / GL). And Lawson (depending on version) has ways to
stuff their dictionary for their "dbreorgs" also, from what I remember.
John Carlson