When I run onstat -d it says at the end:
Expanded chunk capacity mode: disabled.
I cannot see any obvious information about this in the release notes
directory. Being flat-out, I don't really have a chance to scour the manuals
yet.
Can anyone give me a brief rundown of what this mode offers, and if it's
useful, how do I enable and use this feature? I can definitely make use of
chunks or offsets > 2Gb here.
Thanks in advance - I promise, busy not just lazy.
To enable this new feature you must use onmode -BC.
Take a quick look into the 8.40 Admin Ref & Guide.
Warning: point of no return! To switch back means
oninit -i.
In 9.40.UC2 and UC3 we had problems on large chunks
when doing inplace alter table. Those are fixed in
9.40.*C4, though.
Looks like our shop *must* enable it now to overcome the
fact that one cannot have a multichunk physical log.
Running log mode ANSI in a biggish shop, this
triggers checkpoints more dense than we want to have
them (1 hour we want, 50 mins we get)
I would not switch large chunks on without
extensive testing when there is no need to do so.
dic_k
--
Richard Kofler
SOLID STATE EDV
Dienstleistungen GmbH
Vienna/Austria/Europe
ok - in 9.40 I see that onmode reports the existence of the -BC flag,
but the online PDF admin and reference guide for 9.40 doesn't describe it at
all. I assume you really meant 8.40 and not 9.40 manuals? If so, I don't have
them immediately to hand.
> triggers checkpoints more dense than we want to have
> them (1 hour we want, 50 mins we get)
You mean you want to have a checkpoint only every hour, but you get one
every 50 minutes? The alternative interpretation is too horrible to consider
:-> Sounds like there's a little bit more write activity going on with large
chunks.
Ahh well - i'll stick to tradition for this job. Thanks for the pointers.
Andrew Hamm wrote
> Expanded chunk capacity mode: disabled.
>
> I cannot see any obvious information about this in the
> release notes directory. Being flat-out, I don't really have
> a chance to scour the manuals yet.
>
> Can anyone give me a brief rundown of what this mode offers,
> and if it's useful, how do I enable and use this feature? I
> can definitely make use of chunks or offsets > 2Gb here.
>
> Thanks in advance - I promise, busy not just lazy.
>
Copy of email sent to docinfo@ibm on 25/8/2004
# Onmode -BC [1|2] command
# I cannot find any reference in any indexes of the Admin guides,
# Migration Guide, Database Design or Getting Started guide to this
# command.
# I did eventually find an entry on page 3-23 of the Migration Guide.
# Has this new command been left out of the indexes?
They will put it in the next lot of manuals.
Onmode -BC 1 converts to allow greater than 2GB chunks with the
possibility of reversion.
Onmode -BC 2 converts with no reversion possible
Colin Bull
sending to informix-list
> <>¿Que?
>
> When I run onstat -d it says at the end:
>
> Expanded chunk capacity mode: disabled.
It means that you can't use chunks greater than 2 GB.
If you want to use chunks > 2 GB anyway, you have to enable this feature
by running onmode -BC 1|2.
Have a look at the Admin Reference guide.
Johann
> <>I cannot see any obvious information about this in the release notes
> directory. Being flat-out, I don't really have a chance to scour the
> manuals
> yet.
>
> Can anyone give me a brief rundown of what this mode offers, and if it's
> useful, how do I enable and use this feature? I can definitely make use of
> chunks or offsets > 2Gb here.
>
> Thanks in advance - I promise, busy not just lazy.
>
The documentation IS in the Administrator's Reference for 9.4 (no, not 8.4),
page 3-44 in the Utilities section, under the section for the onmode command
in a sub-heading entitled "Allow Large Chunks Mode".
-BC 1 allows reversion to pre-9.4 mode if no large chunks or large offsets
have yet been used. Only chunks which are modified or added to take
advantage of large offset/size features are modified.
-BC 2 modifies all existing chunks to the new addressing format. Reversion
to pre-9.4 compatibility is not possible.
Art S. Kagel
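Art's two points (together with Madison Pruet's fuller BC 0/1/2 description elsewhere in this thread) amount to a small state machine. Here is a toy Python model of the modes and their reversion rules - purely an illustration of what the thread describes, not engine code:

```python
# Toy model of the -BC states and reversion rules described in this thread.
# BC 0: everything in the old 2GB-limit format; reversion always possible.
# BC 1: large chunks allowed; revertible until large chunks are in use.
# BC 2: all writes in the new format; no reversion, ever.
class ChunkMode:
    def __init__(self):
        self.mode = 0
        self.large_chunks_in_use = False

    def enable(self, level):
        """Model of 'onmode -BC 1' / 'onmode -BC 2'."""
        if level not in (1, 2):
            raise ValueError("onmode -BC takes 1 or 2")
        if level < self.mode:
            raise ValueError("cannot step the BC level back down")
        self.mode = level

    def add_large_chunk(self):
        if self.mode == 0:
            raise RuntimeError("chunks > 2GB need onmode -BC 1 or 2 first")
        self.large_chunks_in_use = True

    def can_revert(self):
        if self.mode == 2:
            return False                      # point of no return
        return not self.large_chunks_in_use   # BC 1: drop big chunks first

m = ChunkMode()
m.enable(1)
print(m.can_revert())   # True - BC 1 with no large chunks used yet
m.add_large_chunk()
print(m.can_revert())   # False - until the big-chunk dbspaces are dropped
```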
thanks all for the replies. Even with your pointers I cannot see any mention
in the indexes...
So how do people /feeeel/ about the expanded chunks? Any positive or
negative experiences? Richard sounds very cautious, but apart from a
suggestion that a few more dirty pages get generated (is that the
correct interpretation?) how does it tend to stack up? I've got a small
margin of time to decide whether to use it; and must also consider that I
haven't done this even on any of my office machines, so it's going to be a
first, and it will be going live.
Understand, inside the engine, everything is in BC 2 format. However, on
disk, the page may be in the old format. We have the three modes for
conversion/reversion reasons. Basically, with the initial format BC 0,
everything is in the 2GB-limit format. With BC 1, you can create new
dbspaces in >2GB format. And in BC 2, writes are always in >2GB format.
Reversion is always possible in the BC 0 state. If you are in the BC 1
state, then you can still revert, but must first drop dbspaces using big
chunk format. This state allows you to use big chunk format for things like
temporary dbspaces and such --- basically a means to gain a bit of
confidence.
Once you go to BC 2, there is no reversion.
Our goal was to minimize conversion/reversion time, which I think we
did. The conversion time to 9.4 is roughly the same as from 9.2 to
9.3. We had to convert the partition page extent list from using physical
addressing (3 nibbles for chunk + 5 nibbles for chunk offset) to using
dbspace-relative addressing (2GB pages per dbspace). However, in memory
and in oncheck, the dbspace-relative address is converted to big chunk
format. Again, all in the interest of minimizing conversion down-time.
In any case, understand that internally, the 9.4 server is doing everything
in big-chunk format. It's just the IO that does any conversion.
M.P.
"Andrew Hamm" <ah...@mail.com> wrote in message
news:2pkioiF...@uni-berlin.de...
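Incidentally, the 3-nibble/5-nibble layout Madison describes is exactly where the 2GB ceiling comes from. A back-of-envelope sketch in Python - the 2KB page size is my assumption (the classic default), not something stated in this thread:

```python
# Why 5 nibbles of page offset caps a chunk at 2GB (assuming 2KB pages).
CHUNK_BITS = 3 * 4        # 3 nibbles for the chunk number -> 12 bits
OFFSET_BITS = 5 * 4       # 5 nibbles for the page offset  -> 20 bits
PAGE_SIZE = 2 * 1024      # assumed 2KB page

max_chunks = 2 ** CHUNK_BITS                     # 4096 chunks
max_chunk_bytes = (2 ** OFFSET_BITS) * PAGE_SIZE

print(max_chunks)        # 4096
print(max_chunk_bytes)   # 2147483648, i.e. exactly 2GB
```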
No really negative experiences, over multiple sites. Some suspicion on one
very bust database that when large chunks were first enabled there was a
transient slowdown (which one might intuitively expect, as each page has to
be read, processed and re-written in its entirety) for a day or two, but I'm
not convinced.
Positives? Well, there's no doubt that the administration is simpler -
fewer keystrokes to add one 10Gbyte lv and chunk than 5 x 2Gbyte ones. And
you need to set up fewer HPL unload files due to the removal of the 2GByte
limit (although you have to remember to enable largefiles on your systems).
But, as I've said before - and I know I'm in a minority of about one - I
don't see the large chunks as a feature that's going to fundamentally change
my life.
Sorry, busy, not bust. That bloody receptionist ....
> database that when large chunks were first enabled there was a
> transient slowdown (which one might intuitively expect, as each page has
> to be read, processed and re-written in its entirety)
Having read Madison's subsequent post I'm not sure that this is true now.
Thanks - I'm feeling warmer and fuzzier, especially as this will be a bust
server just like yours.
> I'm in a minority of about one - I don't see the large chunks as a
> feature that's going to fundamentally change my life.
Good call - it's going to be one of them set-and-forgetters.
I'm actually expecting it to be better for load balancing. I don't like too
many chunks on one spindle, but some of the tables are freaking huge and
need a split if they are stuck in 2Gb. Not having to care about that anymore
will make the maths easier.
Even when going to BC2, we don't scan and re-write the pages in the new
format.
We had a bit in the page that was used only in the physical log. With 9.4,
we re-did how we managed the physical log so that we no longer needed that
bit. So we are now using that bit to identify the page as being in new
format.
The conversion of the page from the old to the new format only adds about
six C instructions to the I/O path - basically moving a few things around on
the page header. So I really doubt that it would cause any significant
impact.
M.P.
"Neil Truby" <neil....@ardenta.com> wrote in message
news:2pled0F...@uni-berlin.de...
thanks for all the techie bits - i'm going to forward that to a few of my
cow-orkers so they feel warm and fuzzy about using it.
While we're on the subject of the physical log, when I tried moving it on
two separate yet very similar engines, it neglected to actually bounce
itself. The onparams just returned to the command line in less than a second.
I thought, "huh?", re-issued the command, and checked the online status.
So I tried onmode -k and a re-start, and then it started to move the log as
usual. Weird.
The only contributing factor I can think of is that I used a binary search
to zero in on the absolutely largest possible plog that will fit into the
target space - ie (with random numbers picked here)
onparams -p -d plogdbs -s 199900   # response: too big
onparams -p -d plogdbs -s 199800   # response: are you sure? N
onparams -p -d plogdbs -s 199850   # response: are you sure? N
When I found the size that was the smallest possible that was too big to
fit, I had the best fit. So I ran it once more with X and noticed the
behaviour described.
On another build of a temporary engine, where I couldn't be bothered
searching for the best fit, the engine bounced itself in the usual manner. I
guess it's possible that my search ritual has confused something somewhere,
but the confused state could only have been left somewhere deep in a bit of
engine code...
As for building a test-case and finding the true operator-cause, that'll
have to wait for some other 60 seconds. Especially if nobody else can
reproduce it easily.
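The binary-search ritual above is simple enough to automate. A Python sketch of the search itself, where fits() stands in for issuing onparams -p -d plogdbs -s <size> and reading whether the engine rejects the size as too big (the free-space figure below is invented for the simulation):

```python
def largest_fitting(lo, hi, fits):
    """Largest size in [lo, hi] accepted by fits(); assumes fits(lo) holds."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if fits(mid):       # in real life: onparams asked "are you sure?"
            best = mid
            lo = mid + 1    # it fits - try bigger
        else:               # in real life: onparams said "too big"
            hi = mid - 1    # too big - try smaller
    return best

FREE_KB = 199843  # invented free space in the target dbspace
print(largest_fitting(1, 200000, lambda s: s <= FREE_KB))   # 199843
```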
It is in the 9.40 online docs, as another poster already pointed out.
We do not have any XPS (version 8.xx) running, only IDS: Version 9.21,
9.30, 9.40.UC2 and 9.40FC4W4
>
> > triggers checkpoints more dense than we want to have
> > them (1 hour we want, 50 mins we get)
>
> You mean you want to have a checkpoint only every hour, but you get one
> every 50 minutes? The alternative interpretation is too horrible to consider
hmm - my English is just too bad sometimes, sorry for that :(
We are using CKPTINTVL 3600, but because the physical log
fills up to 75% in less than 1 hour during busy hours,
we do have checkpoints every 50 minutes.
So far we have not found a way to make our physical log
bigger than 2 GB without enabling large chunk support
(onmode -BC 1).
> :-> Sounds like there's a little bit more write activity going on with large
> chunks.
I did not notice more write activity when we were testing
with large chunks enabled.
The only troubles we ever had were in IDS 9.40.UC2, and those are fixed in
version 9.40.F|UC4.
>
> Ahh well - i'll stick to tradition for this job. Thanks for the pointers.
dic_k
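For what it's worth, Richard's 50 minutes falls straight out of the arithmetic if a checkpoint fires at CKPTINTVL or at 75% physical-log fill, whichever comes first. A back-of-envelope sketch - the fill rate is hypothetical, chosen only to reproduce the observed interval:

```python
# Checkpoint interval = min(CKPTINTVL, time for the plog to hit 75% full).
CKPTINTVL = 3600                  # seconds, from the post above
PLOG_KB = 2 * 1024 * 1024         # 2GB plog - the single-chunk ceiling
FILL_RATE_KB_S = 524.0            # hypothetical busy-hour write rate

secs_to_75pct = 0.75 * PLOG_KB / FILL_RATE_KB_S
interval_min = min(CKPTINTVL, secs_to_75pct) / 60
print(round(interval_min))        # 50 - checkpoints every ~50 minutes
```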
yeah, I just can't seem to find any index entries in the online .PDF
manuals. Have I mentioned I hate PDF?
> hmm - my English is just too bad sometimes, sorry for that :(
>
> We are using CKPTINTVL 3600, but due to the physical log
> filling up to 75% in less than 1 hour during busy hours
> we do have checkpoints every 50 minutes.
> So far we did not find a way to make our physical log
> bigger than 2 GB without enabling large chunk support.
> (onmode -BC 1)
Why not have two chunks in its space and then .... oh wait, the -s
parameter to onparams -p probably doesn't like such a big number without -BC,
hey?
Well... Using <CTRL>F one can search in a pdf file and
simulate grep by pressing <F3>, which is 'continue to search'.
Not fast - but at least much better than docs scattered
into hundreds of html files, where I cannot simulate grep
no matter how patient I am.
Actually, if you
a) download all the pdf files
b) edit the funny file names into something
meaningful and modify the index.html accordingly
c) get a decent module from CPAN to process the bunch
you can grep over all the doc files or even extract all
warning or hint sections, exclude Windows paragraphs,
and thus it IS possible to live with it :)
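Richard's a)-c) recipe can also be done without CPAN: convert each manual to text with pdftotext (assumed to be installed - it is not mentioned in this thread) and search with plain Python. The manual file names here are hypothetical:

```python
import glob
import subprocess

def pdf_text(pdf):
    # "pdftotext FILE -" writes the extracted text to stdout
    return subprocess.run(["pdftotext", pdf, "-"],
                          capture_output=True, text=True).stdout

def grep_lines(pattern, lines):
    """Case-insensitive substring search; returns (lineno, line) pairs."""
    p = pattern.lower()
    return [(n, ln.strip()) for n, ln in enumerate(lines, 1)
            if p in ln.lower()]

for pdf in glob.glob("ids_940_*.pdf"):    # hypothetical manual names
    for n, ln in grep_lines("onmode -BC", pdf_text(pdf).splitlines()):
        print(f"{pdf}:{n}: {ln}")
```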
>
> > hmm - my English is just too bad sometimes, sorry for that :(
> >
> > We are using CKPTINTVL 3600, but due to the physical log
> > filling up to 75% in less than 1 hour during busy hours
> > we do have checkpoints every 50 minutes.
> > So far we did not find a way to make our physical log
> > bigger than 2 GB without enabling large chunk support.
> > (onmode -BC 1)
>
> Why not have two chunks in it's space and then .... oh wait, the -s
> parameter to onparams -p probably doesn't like such a big number without -BC
> hey?
We opened a support case to confirm that the physical log
cannot have more than one chunk in the dbspace. The docs
mention it pretty straight: storage of the physical log
on disk must be contiguous -> it has to be a single-chunk
dbspace.
> -----Original Message-----
> From: Andrew Hamm [mailto:ah...@mail.com]
> Sent: Wednesday, September 01, 2004 6:06 PM
> To: inform...@iiug.org
> Subject: Re: Expanded chunk capacity mode in 9.40
Snip
> thanks for all the techie bits - i'm going to forward that to a few of
my
> cow-orkers so they feel warm and fuzzy about using it.
How does one ork a cow?
Snip
[Bill Dare]
I think someone lost a "P".
> Snip
>
>
> sending to informix-list
sending to informix-list
Hook, line and sinker :-)) gets someone every time.
it's an ancient netnews mis-spelling that's sunk into folklore...