
negative powers of two


stip

Jul 31, 2009, 7:01:16 AM
Hello,

Given the exponent as an integer, what's the fastest way to calculate
negative (and positive) integer powers of two?
std::pow() is quite slow because it is designed for the general case.
Do I have to do some bit manipulation on my own, or are there existing
functions?

Thanks a lot,
Alex

SG

Jul 31, 2009, 7:55:40 AM

There is no portable way to do it as far as I know. You can certainly
write code that exploits the binary representation of doubles. You
could include a test in your build script that checks whether the
assumptions about implementation details hold and defines a
preprocessor name accordingly, so you can use #ifdef/#else/#endif to
select your "pow2" function. You might also want to look into the
numeric_limits traits class in <limits>.
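
For example, a minimal probe along those lines (a sketch; the build
script could compile and run it and define the preprocessor name from
the exit status):

#include <limits>

int main()
{
    // exit status 0 if double is a base-2, IEEE 754 style type
    return (std::numeric_limits<double>::radix == 2 &&
            std::numeric_limits<double>::is_iec559) ? 0 : 1;
}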

Cheers!
SG

Juha Nieminen

Jul 31, 2009, 8:15:35 AM

I'm not completely sure what you mean by "negative integer powers of
two". Do you mean that you have an integral value (which might be
negative) and you want to raise it to the power of 2? That would be
rather trivial: just multiply the integer by itself, and you have
raised it to the power of 2.

By your question, however, I deduce that you mean something else. Do
you mean, by any chance, that you want to raise 2 to some integral power
(which might be negative)? Or more generally, to calculate a*pow(2,b)
where a and b are integers?

For positive values of b the solution is simple: a*(1<<b). (But take
into account that, rather obviously, this will only work for values of b
smaller than the number of bits in your integral type.)

Since raising to a negative power gives the reciprocal of
raising to the positive power, the solution for negative values of b is
equally simple: a/(1<<(-b)). (Of course this gives you only the integral
part of the result. If you need a floating point result, use pow().)
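
A minimal sketch putting both cases together (mul_pow2 is an
illustrative name, not a standard function, and |b| is assumed to be
smaller than the number of bits in int):

int mul_pow2(int a, int b)     // computes a * pow(2, b)
{
    if (b >= 0)
        return a * (1 << b);   // a * 2^b
    else
        return a / (1 << -b);  // integral part only, as noted above
}

For a floating-point result the same shift can serve as a divisor:
a / double(1 << -b).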

Alf P. Steinbach

Jul 31, 2009, 8:25:11 AM
* stip:

Don't know if it helps, but the standard library has two functions:
one which, for a given double d, gives you the mantissa and exponent in
d = m*2^E, and a second which goes the other way. But even if they help
for the not-too-small integer exponents, for powers in the neighborhood
of zero you're probably better off doing some simple multiplication.
But have you *measured*? It's likely that pow() does this already.

Cheers & hth.,

- Alf

Bill Davy

Jul 31, 2009, 8:53:35 AM
"stip" <alexander...@uni-ulm.de> wrote in message
news:7a12f847-c3d7-4079...@o15g2000yqm.googlegroups.com...


The ldexp(x,exp) function returns the value of x * 2^exp if successful.

You can, if you like, play around with using products of 2^OneBit,
where OneBit is taken from exp one bit at a time, so (for example)
2^9 = 2^8 * 2^1, but I am not sure it is worth it.
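
A sketch of that one-bit-at-a-time idea, i.e. binary exponentiation
(pow2_bits is an illustrative name; exp is taken as non-negative):

double pow2_bits(unsigned exp)
{
    double result = 1.0;
    double factor = 2.0;           // 2^1, then 2^2, 2^4, ...
    while (exp != 0) {
        if (exp & 1u)
            result *= factor;      // include 2^OneBit when that bit is set
        factor *= factor;
        exp >>= 1;
    }
    return result;                 // e.g. 2^9 = 2^8 * 2^1
}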


Jonathan Lee

Jul 31, 2009, 9:07:11 AM
On Jul 31, 7:01 am, stip <alexander.stipp...@uni-ulm.de> wrote:
> given the exponent as integer, what's the fastest way to calculate
> negative (and positive) integer powers of two?

Not a C++ solution, but if you happen to be working on x86 I think the
fscale assembly instruction does exactly that.

--Jonathan

Jerry Coffin

Jul 31, 2009, 12:01:02 PM
In article <7a12f847-c3d7-4079-918e-fd3f1c6f18b5
@o15g2000yqm.googlegroups.com>, alexander...@uni-ulm.de says...

If the base is an integer, then you just use bit shifts.

If the base is a floating point, you can use something like:

int exponent;

double significand = frexp(base, &exponent);
exponent += added_exponent;
double result = ldexp(significand, exponent);

Whether this will be faster than pow() will depend -- for a lot of
cases where the floating point is stored as a significand and a power
of two, frexp will be something like a mask, shift and an integer
subtraction to remove a bias in the exponent. As you'd expect, ldexp
will pretty much reverse that: add the bias, shift to the right
position, OR with the significand. Generally pretty fast. OTOH, if
the native floating point representation is drastically different
from that, they could end up pretty slow -- but machines with such
strange floating point representations are pretty unusual.
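
As an illustration of that mask/shift/bias description, assuming an
IEEE 754 binary64 double (an assumption, not something the standard
guarantees):

#include <cstdint>
#include <cstring>

int ieee_exponent(double d)    // unbiased exponent of a normal double
{
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);                  // view the bit pattern
    int biased = static_cast<int>((bits >> 52) & 0x7FF);  // 11-bit exponent field
    return biased - 1023;                                 // remove the bias
}

(frexp reports this value plus one, since it normalizes the
significand to [0.5, 1).)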

--
Later,
Jerry.

Pete Becker

Jul 31, 2009, 2:05:59 PM
Jerry Coffin wrote:
>
> Whether this will be faster than pow() will depend -- for a lot of
> cases where the floating point is stored as a significand and a power
> of two, frexp will be something like a mask, shift and an integer
> subtraction to remove a bias in the exponent. As you'd expect, ldexp
> will pretty much reverse that: add the bias, shift to the right
> position, OR with the significand. Generally pretty fast. OTOH, if
> the native floating point representation is drastically different
> from that, they could end up pretty slow -- but machines with such
> strange floating point representations are pretty unusual.
>

Strange floating-point representations like decimal? <g> Decimal
floating-point is specified (along with binary) in IEEE 754-2008, and
provided as extensions to C by TR 24732 and to C++ by DTR 24733
(approved for balloting at the Frankfurt meeting).

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of
"The Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Jerry Coffin

Jul 31, 2009, 2:40:47 PM
In article <8radnVF2p5Yar-7X...@giganews.com>,
pe...@versatilecoding.com says...

[ ... ]

> Strange floating-point representations like decimal? <g> Decimal
> floating-point is specified (along with binary) in IEEE 754-2008, and
> provided as extensions to C by TR 24732 and to C++ by DTR 24733
> (approved for balloting at the Frankfurt meeting).

Yes, that's probably the most obvious anyway. At least AFAIK, about
the only machines that use it are IBM z-series. I believe somewhere
along the line IBM also added decimal FP hardware to the POWER
processors (around POWER 5 or 6) but I believe they use binary
floating point by default, only switching to decimal if it's
explicitly enabled. I'd guess most C and C++ compilers leave it set
for binary FP, though it wouldn't surprise me if COBOL compilers (for
one example) normally switched it to use decimal FP instead.

--
Later,
Jerry.

James Kanze

Aug 1, 2009, 4:59:45 AM

ldexp.
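
That is, a single call; a usage sketch:

#include <cmath>
#include <cstdio>

int main()
{
    for (int n = -3; n <= 3; ++n)
        std::printf("2^%d = %g\n", n, std::ldexp(1.0, n));  // 1.0 * 2^n
}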

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

James Kanze

Aug 1, 2009, 5:07:26 AM
On Jul 31, 6:01 pm, Jerry Coffin <jerryvcof...@yahoo.com> wrote:
> In article <7a12f847-c3d7-4079-918e-fd3f1c6f18b5
> @o15g2000yqm.googlegroups.com>, alexander.stipp...@uni-ulm.de says...

> > given the exponent as integer, what's the fastest way to
> > calculate negative (and positive) integer powers of two?
> > std::pow() is quite slow, because designed for the general
> > case. Do I have to do some bit manipulations on my own or
> > are there already functions?

> If the base is an integer, then you just use bit shifts.

> If the base is a floating point, you can use something like:

> int exponent;

> double significand = frexp(base, &exponent);
> exponent += added_exponent;
> double result = ldexp(significand, exponent);

> Whether this will be faster than pow() will depend -- for a
> lot of cases where the floating point is stored as a
> significand and a power of two, frexp will be something like a
> mask, shift and an integer subtraction to remove a bias in the
> exponent. As you'd expect, ldexp will pretty much reverse
> that: add the bias, shift to the right position, OR with the
> significand. Generally pretty fast. OTOH, if the native
> floating point representation is drastically different from
> that, they could end up pretty slow -- but machines with such
> strange floating point representations are pretty unusual.

Not really. I don't know of any mainframe which uses base 2 for
its floating point: IBM uses base 16, and both Unisys
architectures use base 8; Unisys MCP also normalizes in a
somewhat strange way. The fact that the bases are powers of two
means that the operation can still be done with masking and
shifting, but it requires a bit more than a base 2
representation would.

Jerry Coffin

Aug 1, 2009, 10:38:19 AM
In article <069cb282-77ec-45c0-bcd6-2aa88e53ec48
@k6g2000yqn.googlegroups.com>, james...@gmail.com says...

>
> On Jul 31, 6:01 pm, Jerry Coffin <jerryvcof...@yahoo.com> wrote:

[ ... ]

> > Whether this will be faster than pow() will depend -- for a
> > lot of cases where the floating point is stored as a
> > significand and a power of two, frexp will be something like a
> > mask, shift and an integer subtraction to remove a bias in the
> > exponent. As you'd expect, ldexp will pretty much reverse
> > that: add the bias, shift to the right position, OR with the
> > significand. Generally pretty fast. OTOH, if the native
> > floating point representation is drastically different from
> > that, they could end up pretty slow -- but machines with such
> > strange floating point representations are pretty unusual.
>
> Not really. I don't know of any mainframe which uses base 2 for
> its floating point: IBM uses base 16, and both Unisys
> architectures use base 8; Unisys MCP also normalizes in a
> somewhat strange way. The fact that the bases are powers of two
> means that the operation can still be done with masking and
> shifting, but it requires a bit more than a base 2
> representation would.

You start by saying "not really", but from what I can see, you then
go on to pretty much confirm what I said -- that while these machines
aren't binary, they still work in a power of two, which means that
frexp/ldexp will probably be faster than pow() for all of them.

I have to admit I'm a bit surprised though -- I thought IBM's
mainframes used decimal floating point, not hexadecimal...

--
Later,
Jerry.

Keith H Duggar

Aug 1, 2009, 3:08:50 PM
On Aug 1, 10:38 am, Jerry Coffin <jerryvcof...@yahoo.com> wrote:
> In article <069cb282-77ec-45c0-bcd6-2aa88e53ec48
> @k6g2000yqn.googlegroups.com>, james.ka...@gmail.com says...

> > On Jul 31, 6:01 pm, Jerry Coffin <jerryvcof...@yahoo.com> wrote:
> > > Whether this will be faster than pow() will depend -- for a
> > > lot of cases where the floating point is stored as a
> > > significand and a power of two, frexp will be something like a
> > > mask, shift and an integer subtraction to remove a bias in the
> > > exponent. As you'd expect, ldexp will pretty much reverse
> > > that: add the bias, shift to the right position, OR with the
> > > significand. Generally pretty fast. OTOH, if the native
> > > floating point representation is drastically different from
> > > that, they could end up pretty slow -- but machines with such
> > > strange floating point representations are pretty unusual.
>
> > Not really. I don't know of any mainframe which uses base 2 for
> > its floating point: IBM uses base 16, and both Unisys
> > architectures use base 8; Unisys MCP also normalizes in a
> > somewhat strange way. The fact that the bases are powers of two
> > means that the operation can still be done with masking and
> > shifting, but it requires a bit more than a base 2
> > representation would.
>
> You start by saying "not really", but from what I can see, you then
> go on to pretty much confirm what I said -- that while these machines
> aren't binary, they still work in a power of two, which means that
> frexp/ldexp will probably be faster than pow() for all of them.

You didn't claim anything about systems that "work in a power
of two". You specifically made a claim about systems that do not
store floating point "as a significand and a power of two". The
most obvious interpretation of that does not include systems
that store as a significand and a power of 8 or 16 etc. Though
it is possible your words also meant to include such powers of
powers of 2 as well, the operations you sketched for frexp seem
to indicate you did not mean to include them. So either James
correctly called you on your faulty assumptions and you are
backpedaling (as you have done elsewhere), or it was a totally
understandable miscommunication.

> I have to admit I'm a bit surprised though -- I thought IBM's
> mainframes used decimal floating point, not hexadecimal...

You are surprised that you were wrong? I'm not ...

By the way, are you just going to slither quietly away from the
goto debate after you were trounced or are you going to man up
and admit goto was faster and that your "analysis" (to use the
word extremely loosely) of modern machines that was filled with
numerous faulty assumptions (such as the possibility above) was
flawed?

http://groups.google.com/group/comp.lang.c++.moderated/msg/3ac2368e485e740d
http://groups.google.com/group/comp.lang.c++.moderated/msg/5d471364e76392d9

(First link presents hard data proving you wrong and the second
calls you out on your flawed understanding and assumptions and
challenges you to explain the data or provide new measurements
that correct the proven problems in your initial testing.)

You know, it really is quite liberating to admit when you are
wrong; there is no shame in learning. You should try doing it
sometime.

KHD

Jerry Coffin

Aug 1, 2009, 5:28:48 PM
In article <8b807dad-e6d5-41d4-8f07-9ffefe91bcc3
@g6g2000vbr.googlegroups.com>, dug...@alum.mit.edu says...

Obviously you don't even know how to read. What I said was: "if the
native floating point representation is drastically different from
that, they could end up pretty slow".

Since you apparently don't know English very well, and are just too
damned lazy to look it up, "drastically" is defined as: "Extreme,
severe; Acting rapidly or violently; Extremely severe or extensive"

In case that didn't make it obvious, NOT EVERY POSSIBLE DIFFERENCE IS
DRASTIC!

> The
> most obvious interpretation of that does not include systems
> that store as a significand and a power of 8 or 16 etc. Though
> it is possible your words also meant to include such powers of
> powers of 2 as well, the operations you sketched for frexp seem
> to indicate you did not mean to include them. So either James
> correctly called you on your faulty assumptions and you are
> backpedaling (as you have done elsewhere), or it was a totally
> understandable miscommunication.

Nonsense! What I said was that when/if the representation is
drastically different, frexp/ldexp might be quite slow. When the
base isn't binary, but is still a power of two, there's no real
reason to believe that frexp or ldexp will be particularly slow.
Specifically, they will almost certainly still be substantially
faster than pow().



> By the way, are you just going to slither quietly away from the
> goto debate after you were trounced or are you going to man up
> and admit goto was faster and that your "analysis" (to use the
> word extremely loosely) of modern machines that was filled with
> numerous faulty assumptions (such as the possibility above) was
> flawed?

There's no "slithering" involved. There was a conscious decision to
ignore you because you're clearly a liar who simply started making
up nonsense rather than admitting that he was dead wrong from
beginning to end.



> http://groups.google.com/group/comp.lang.c++.moderated/msg/3ac2368e485e740d
> http://groups.google.com/group/comp.lang.c++.moderated/msg/5d471364e76392d9
>
> (First link presents hard data proving you wrong and the second
> calls you out on your flawed understanding and assumptions and
> challenges you to explain the data or provide new measurements
> that correct the proven problems in your initial testing.)

All you presented was a bunch of unsupported claims that didn't prove
anything.



> You know, it really is quite liberating to admit when you are
> wrong; there is no shame in learning. You should try doing it
> sometime.

I've done so many times, and when I'm wrong again, I'll do so again.

In this case, you're the one who was wrong, and you're the one who
has failed to admit it.

In this thread, you've not only been wrong, but clearly lied through
your teeth, not even attempting to quote me accurately at all. At
least previously you attempted to post your lies in a way that made
them difficult to prove wrong. Now you're not only lying, but doing
it in an incredibly STUPID manner that was trivial to prove.

Go away and grow up you pathetic worm!

--
Later,
Jerry.

Jerry Coffin

Aug 1, 2009, 6:18:06 PM
In article <069cb282-77ec-45c0-bcd6-2aa88e53ec48
@k6g2000yqn.googlegroups.com>, james...@gmail.com says...

[ ... ]

> Not really. I don't know of any mainframe which uses base 2 for
> its floating point: IBM uses base 16, and both Unisys
> architectures use base 8; Unisys MCP also normalizes in a
> somewhat strange way. The fact that the bases are powers of two
> means that the operation can still be done with masking and
> shifting, but it requires a bit more than a base 2
> representation would.

Since Keith decided to jump in with his usual unprovoked and
inaccurate attack, I decided to do a bit of fact checking.

At: http://www.ibm.com/developerworks/library/pa-bigiron1/

IBM says: "The System/390 implemented the IEEE 754 BFP formats in
addition to HFP for the first time in 1995" (using "BFP" as an
abbreviation for "binary floating point"), so their mainframes have
had binary floating point for about 14 years now.

At:
http://portal.acm.org/citation.cfm?id=1241826&dl=GUIDE&coll=GUIDE&CFID=46010576&CFTOKEN=73022622

is an article from the IBM Journal of Research and Development. The
full article is not free, but the abstract says:

[ ... ] Use of the newly defined decimal floating-point
(DFP) format instead of binary floating-point is
expected to significantly improve the performance of
such applications. System z9(TM) is the first IBM machine
to support the DFP instructions.

To summarize, then: current IBM mainframes support floating point in
three bases: hexadecimal, binary, and decimal. Support for the bases
was added in that order. Hexadecimal has been supported for decades,
binary for a decade and a half, and decimal for something like a year
and a half.

As far as the original question goes, the only time you're likely to
run into a substantial difference in speed when using frexp/ldexp
would be with decimal -- for either binary or hexadecimal (or octal),
ldexp() and frexp() will usually be pretty simple and fast, though
differences in (or lack of) normalization will have _some_ effect as
well.

--
Later,
Jerry.

Keith H Duggar

Aug 1, 2009, 7:28:02 PM

Obviously you don't know how to reasonably interpret what you read.
James' comment wasn't about them being slow or not. It was about your
usual faulty claims about implementations you are ignorant of being
"pretty unusual".

> Since you apparently don't know English very well, and are just too
> damned lazy to look it up, "drastically" is defined as: "Extreme,
> severe; Acting rapidly or violently; Extremely severe or extensive"
>
> In case that didn't make it obvious, NOT EVERY POSSIBLE DIFFERENCE IS
> DRASTIC!

That's it, clutch to your "drastically" straw to cover up your faulty
assumptions.

> > The
> > most obvious interpretation of that does not include systems
> > that store as a significand and a power of 8 or 16 etc. Though
> > it is possible your words also meant to include such powers of
> > powers of 2 as well, the operations you sketched for frexp seem
> > to indicate you did not mean to include them. So either James
> > correctly called you on your faulty assumptions and you are
> > backpedaling (as you have done elsewhere), or it was a totally
> > understandable miscommunication.
>
> Nonsense! What I said was that when/if the representation is
> drastically different, frexp/ldexp might be quite slow. When the
> base isn't binary, but is still a power of two, there's no real
> reason to believe that frexp or ldexp will be particularly slow.
> Specifically, they will almost certainly still be substantially
> faster than pow().

Again, this wasn't about your performance claims; it was about your
"pretty unusual" claims, which you are now trying to link to your vague
"drastically" straw. One must wonder now what "drastic" even means to
you if it is to filter implementations in any meaningful way at all.
Maybe drastic to you means base != 2^n so maybe decimal is "drastic".
Maybe drastic to you means "anything that makes Jerry's performance
claims right". But then who can knows what you meant by such vague
words in a technical context.

> > By the way, are you just going to slither quietly away from the
> > goto debate after you were trounced or are you going to man up
> > and admit goto was faster and that your "analysis" (to use the
> > word extremely loosely) of modern machines that was filled with
> > numerous faulty assumptions (such as the possibility above) was
> > flawed?
>
> There's no "slithering" involved. There was a conscious decision to
> ignore you because you're clearly a liar who simply started making
> up nonsense rather than admitting that he was dead wrong from
> beginning to end.

LOL, prove it! All the evidence you need should be there: public
code, public timing data, public postings, etc. So back it up!
Demonstrate that I have lied or made something up. So far, about
the only thing you demonstrated in that forum is that you do not
know how to accurately profile, that you speculate wildly beyond
your technical knowledge, that you throw around completely bogus
assumptions about areas you are completely ignorant of (such as
digital geometry), and that when confronted with hard evidence
proving you wrong you either try to quietly slither away or
accuse your opponent of being a "liar".

> > http://groups.google.com/group/comp.lang.c++.moderated/msg/3ac2368e485e740d
> > http://groups.google.com/group/comp.lang.c++.moderated/msg/5d471364e76392d9
>
> > (First link presents hard data proving you wrong and the second
> > calls you out on your flawed understanding and assumptions and
> > challenges you to explain the data or provide new measurements
> > that correct the proven problems in your initial testing.)
>
> All you presented was a bunch of unsupported claims that didn't prove
> anything.

LOL, I think that is the first time I have ever seen someone try
to dismiss source code and timing data as "unsupported claims".
You, on the other hand, provided little beyond personal attacks,
broken profiling, speculation, and lack of comprehension.

> > You know, it really is quite liberating to admit when you are
> > wrong; there is no shame in learning. You should try doing it
> > sometime.
>
> I've done so many times, and when I'm wrong again, I'll do so again.
>
> In this case, you're the one who was wrong, and you're the one who
> has failed to admit it.

The evidence shows otherwise and you know it. That's why you are
ignoring it.

> In this thread, you've not only been wrong, but clearly lied through
> your teeth, not even attempting to quote me accurately at all. At

These "liar" claims are, honestly, truly amusing given how
ridiculous they are.

> least previously you attempted to post your lies in a way that made
> them difficult to prove wrong.

In fact, so far it has been *impossible* for you to prove them
wrong. Probably because they are, in fact, not lies but truths.
It is easy to try; just run the provided test suites and post
your results. That would speak far louder than your bluster.

> Now you're not only lying, but doing
> it in an incredibly STUPID manner that was trivial to prove.
>
> Go away and grow up you pathetic worm!

That's it, keep revealing your nasty disposition.

KHD

Keith H Duggar

Aug 1, 2009, 7:36:01 PM
On Aug 1, 6:18 pm, Jerry Coffin <jerryvcof...@yahoo.com> wrote:
> In article <069cb282-77ec-45c0-bcd6-2aa88e53ec48
> @k6g2000yqn.googlegroups.com>, james.ka...@gmail.com says...

> > Not really. I don't know of any mainframe which uses base 2 for
> > its floating point: IBM uses base 16, and both Unisys
> > architectures use base 8; Unisys MCP also normalizes in a
> > somewhat strange way. The fact that the bases are powers of two
> > means that the operation can still be done with masking and
> > shifting, but it requires a bit more than a base 2
> > representation would.
>
> Since Keith decided to jump in with his usual unprovoked and
> inaccurate attack, I decided to do a bit of fact checking.
>
> ...

>
> To summarize, then: current IBM mainframes support floating point in
> three bases: hexadecimal, binary, and decimal. Support for the bases
> was added in that order. Hexadecimal has been supported for decades,
> binary for a decade and a half, and decimal for something like a year
> and a half.
>
> As far as the original question goes, the only time you're likely to
> run into a substantial difference in speed when using frexp/ldexp
> would be with decimal -- for either binary or hexadecimal (or octal),
> ldexp() and frexp() will usually be pretty simple and fast, though
> differences in (or lack of) normalization will have _some_ effect as
> well.

Great. Now tell us whether you 1) consider decimal to be
"drastically" different 2) consider IBM mainframes to be
"pretty usual" 3) knew this about IBM mainframes before
your "Fact checking" wikipeducation trip.

KHD

James Kanze

Aug 2, 2009, 5:59:39 AM
On Aug 1, 4:38 pm, Jerry Coffin <jerryvcof...@yahoo.com> wrote:
> In article <069cb282-77ec-45c0-bcd6-2aa88e53ec48
> @k6g2000yqn.googlegroups.com>, james.ka...@gmail.com says...

The algorithm you described only applies to machines with base 2
floating point, not base 8 or 16. And IBM mainframes aren't
"pretty unusual", they're the most common mainframe around. (I
think that Unisys is second in the market, and the Unisys MCP
does have a somewhat exotic representation, normalizing with the
decimal to the right, and not to the left, and not requiring
normalization at all---the idea is that an integer and a
floating point have the same bitwise representation.)

As for speed compared to pow(), I don't know. Depending on the
speed of hardware floating point and how much support the
hardware has for pow, it could vary. A lot of 32 bit processors
had one clock hardware support for the four basic operations,
but it was fairly expensive to shift 64 bits. (On at least one
processor I worked on, all shift operations were done with a
loop in microcode.) Of course, that's not the case of IBM
mainframes, which support 64 bit shifts directly as well. (But
IIRC, its 64 bit shifts were optimized for, and maybe only
supported, multiples of four bits---for normalizing BCD values,
etc.)

> I have to admit I'm a bit surprised though -- I thought IBM's
> mainframes used decimal floating point, not hexadecimal...

They may have both, but the floating point I've always seen was
base 16. And the IBM compatibles from Siemens and Fujitsu, back
when I worked on them, didn't have a single instruction for
decimal floating point (although the instruction set did allow
implementing it in five or six instructions, without any loops,
even for division, since the machines did have fixed point
decimal instructions, and 64 bit shifts).

Of course, this was all a fairly long time ago---I've not had
the occasion to look at the IBM instruction set since then---,
so the situation could easily have changed.

James Kanze

Aug 2, 2009, 6:07:35 AM
On Aug 2, 12:18 am, Jerry Coffin <jerryvcof...@yahoo.com> wrote:
> In article <069cb282-77ec-45c0-bcd6-2aa88e53ec48
> @k6g2000yqn.googlegroups.com>, james.ka...@gmail.com says...

> [ ... ]
> At: http://www.ibm.com/developerworks/library/pa-bigiron1/

> IBM says: "The System/390 implemented the IEEE 754 BFP formats
> in addition to HFP for the first time in 1995" (using "BFP" as
> an abbreviation for "binary floating point"), so their
> mainframes have had binary floating point for about 14 years
> now.

I'm not sure, but I think that this support was optional. On
the machines we were running in 1999, it had just been added,
and it was significantly slower than the traditional IBM base 16
format. (More importantly for us, it wasn't supported in IBM
Cobol, which meant that floating point data coming from the
mainframe still had to be translated in our Java-based
applications running under AIX.)

[...]


> As far as the original question goes, the only time you're
> likely to run into a substantial difference in speed when
> using frexp/ldexp would be with decimal -- for either binary
> or hexadecimal (or octal), ldexp() and frexp() will usually be
> pretty simple and fast, though differences in (or lack of)
> normalization will have _some_ effect as well.

As I mentioned in another posting, this depends very much on
the speed of the various operations. A good implementation of
pow will not take that many floating point operations, and on
some machines (*not* the IBM mainframes), 64 bit shifts can be
very expensive.

Of course, the only reasonable course is to use pow, and if the
profiler shows that to be a problem, measure ldexp, and use it,
if it is faster (likely, but not certain). But especially: when
you get to that point, don't speculate like we're doing, but
measure. That way, you know the answer is correct for your
machine, regardless.
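
A rough timing sketch of the kind of measurement suggested here
(illustrative only; the iteration count, exponent range, and volatile
sink are arbitrary choices to keep the optimizer from eliding the
loops):

#include <cmath>
#include <cstdio>
#include <ctime>

int main()
{
    const int N = 10000000;
    volatile double sink = 0.0;

    std::clock_t t0 = std::clock();
    for (int i = 0; i < N; ++i)
        sink = sink + std::pow(2.0, i % 64 - 32);    // general-purpose pow
    std::clock_t t1 = std::clock();
    for (int i = 0; i < N; ++i)
        sink = sink + std::ldexp(1.0, i % 64 - 32);  // scale by 2^n
    std::clock_t t2 = std::clock();

    std::printf("pow:   %f s\n", double(t1 - t0) / CLOCKS_PER_SEC);
    std::printf("ldexp: %f s\n", double(t2 - t1) / CLOCKS_PER_SEC);
}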

Keith H Duggar

Aug 2, 2009, 12:50:38 PM

Don't forget about Jerry's "trump card": the word "drastically".
If some variation of his shifting method is slow then the float
representation is "drastically" different and he is right. If some
variation of his shifting method is fast then the representation
is not "drastically" different and he is *still* right.

Furthermore, even if he determines that base-16 is "drastically"
different, since IBM mainframes support both base-16 and base-2
(even if it is optional) he is *still* right because it either
works with his method (base-2) or the users are running the IBM
mainframe in a "drastically" different configuration (even if
it is the default and preferred configuration).

Finally, similar "trump reasoning" will apply to Unisys perhaps
with some market share speculations or buzzwords like "modern"
thrown into the mix as well.

In other words, there is no defense against Jerry's ignorant
speculations. No matter what hard evidence and solid reasoning
you supply he will always be right. And even if by miracle you
convince him, we will probably never know because he likely
will not admit he was wrong here publicly (preferring to
slither quietly away instead).

KHD

Jerry Coffin

Aug 2, 2009, 1:30:52 PM
In article <092a2519-745d-46ae-b881-ab96405c6bc6
@o6g2000yqj.googlegroups.com>, james...@gmail.com says...

Ah, I can finally see the source of the misunderstanding.

The algorithm I mentioned was for machines at one end of the spectrum
-- basically IEEE 754 or something very similar.

I also mentioned machines with "weird" representations, for which
frexp/ldexp could be quite slow -- quite possibly slower than pow,
being the important point under the circumstances. Though not stated
directly, that was more or less a hint that the only way to be sure
about things was to profile, not depend on the "fact" that frexp and
ldexp would necessarily be the fastest approach.

I did not say, or mean to imply, that there was no middle ground
between those extremes. I guess since I didn't mention the middle
ground, I can understand how somebody could infer that I didn't
intend for there to be any.

--
Later,
Jerry.

Richard Herring

Aug 3, 2009, 5:26:49 AM
In message <MPG.24de01445...@news.sunsite.dk>, Jerry Coffin
<jerryv...@yahoo.com> writes

Depends on how much extra bit-twiddling is needed to normalize the
results. IIRC IBM floating-point instructions can only be applied to
data in dedicated FP registers, so some extra copying may be needed.


>
>I have to admit I'm a bit surprised though -- I thought IBM's
>mainframes used decimal floating point, not hexadecimal...

No, but they have a decimal fixed-point format for currency etc.

--
Richard Herring

Keith H Duggar

Aug 3, 2009, 1:57:27 PM
Jerry Coffin wrote:
> Ah, I can finally see the source of the misunderstanding.

Ah-ha-ha-ha ... it was just a <drumroll> "misunderstanding"!! LOL
still refusing in every sad way possible to admit you were wrong.

> The algorithm I mentioned was for machines at one end of the spectrum
> -- basically IEEE 754 or something very similar.
>
> I also mentioned machines with "weird" representations, for which
> frexp/ldexp could be quite slow -- quite possibly slower than pow,
> being the important point under the circumstances.

Not only did you mention "weird" (in your own words "drastically
different" or "strange") machines but you also ignorantly claimed
they were "unusual" which we all take to mean *uncommon*.

> I did not say, or mean to imply, that there was no middle ground
> between those extremes. I guess since I didn't mention the middle
> ground, I can understand how somebody could infer that I didn't
> intend for there to be any.

I'm sure many will accept that weak cop-out and let you slither
quietly away. Here is the simple proof that even using your own
terms and admitted Jerry "facts" (let's just call them "jerrys"),
you were wrong.

http://groups.google.com/group/comp.lang.c++/msg/f3f7d4df96ec0124


Jerry Coffin wrote:
> OTOH, if the native floating point representation is drastically
> different from that [base-2 maybe base-2^n], they could end up pretty
> slow -- but machines with such strange floating point representations
> are pretty unusual.

jerry-1) machines with "drastically different" or "strange" floating
point representation are "pretty unusual"

http://groups.google.com/group/comp.lang.c++/msg/5cd89bd1b73504be
Jerry Coffin wrote:
> Pete Becker wrote:
> > Strange floating-point representations like decimal? <g> Decimal
> > floating-point is specified (along with binary) in IEEE 754-2008, and
> > provided as extensions to C by TR 24732 and to C++ by DTR 24733
> > (approved for balloting at the Frankfurt meeting).
>
> Yes, that's probably the most obvious anyway.

jerry-2) decimal floating-point representations are "strange"

http://groups.google.com/group/comp.lang.c++/msg/6bead24b1aeae51a


Jerry Coffin wrote:
> I have to admit I'm a bit surprised though -- I thought IBM's
> mainframes used decimal floating point, not hexadecimal...

jerry-3) thought IBM mainframes used decimal floating point

From those jerrys we can easily deduce

jerry-3 AND jerry-2 IMPLIES jerry-4) thought IBM mainframes used
"strange" floating point representation

jerry-4 AND jerry-1 IMPLIES jerry-5) thought IBM mainframes are
"pretty unusual".

And there you have it, a clear deductive demonstration from your
own jerrys (Jerry Facts) and words that you were ignorant of the
actual facts and were wrong. Sadly, the closest you can come to
publicly admitting this is to say that you were "surprised" and
that there was "a misunderstanding".

KHD
