Integer :: ix, iy, iz ! assume these are 32-bit integers for now
::: code to manipulate 'any or all' of the bits in ix :::
iy = ix
Would the above code be OK? That is, after manipulating the bits in ix,
can I do an assignment to iy without error? I would never expect to
use ix or iy in a mathematical operation. I'm just wondering if I can
do simple assignments for the purpose of copying the bit pattern to
another integer object, or if I can use ix as an actual argument
in a subroutine call of some sort, etc. For that matter,
since the data type is Integer, could I actually write the ix value
to a file in the normal way, or would I risk odd results due to a bit
pattern that doesn't fit the compiler's/computer's rules for a valid
integer object?
Thanks, Dan
The short answer is yes, you can do whatever you want with integer
values, with one caveat:
What is "the normal way" of writing a value to a file? Formatted?
Unformatted? List-directed? If you use a format field that's too
short, you'll lose data. If you're writing and reading your data on the
same system, and if it doesn't need to be human-readable, I'd go with
unformatted.
When in doubt, demystify the whole thing: try it, and see what happens.
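For instance, here is a minimal sketch of the unformatted route (the file
name, the value, and the F2008 newunit= specifier are just illustrative;
an explicit unit number works the same way):

program bitfile
  implicit none
  integer :: ix, iy, unit

  ix = 12345                        ! any bit pattern held in a default integer

  open (newunit=unit, file='keys.dat', form='unformatted', status='replace')
  write (unit) ix                   ! no format field, so nothing is truncated
  close (unit)

  open (newunit=unit, file='keys.dat', form='unformatted', status='old')
  read (unit) iy                    ! iy now holds the same bit pattern
  close (unit)

  print *, ix == iy                 ! should print T when run on the same system
end program bitfile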
Louis
> On 4/22/2011 12:09 PM, Dan wrote:
> > I have a LOT of data that I need to keep track of that can have only 1
> > of 2 specific values. The possible memory requirements of tracking
> > this data are huge. Thus... I'm considering using Fortran's bit
> > computation procedures to keep track of things. Theoretically,
> > using a default Integer type with my compiler (32 bits), I can keep
> > track of up to 32 data items using a single default integer.
> > However, I've not used the bit functions before and I don't want to
> > trip over my feet while using them.
> The short answer is yes, you can do whatever you want with integer
> values, with one caveat:
>
> What is "the normal way" of writing a value to a file? Formatted?
> Unformatted? List-directed? If you use a format field that's too
> short, you'll lose data. If you're writing and reading your data on the
> same system, and if it doesn't need to be human-readable, I'd go with
> unformatted.
>
> When in doubt, demystify the whole thing: try it, and see what happens.
I'd add a caveat that I think is a bigger one. Stay clear of the sign
bit. Keep it down to no more than 31 bits per integer - not 32. The
Fortran standard punts on the sign bit; it pretty much guarantees
nothing once you hit that.
Odds are good that you can get by with using the sign bit as long as
you never do arithmetic on the values. But odds are better (and actually
guaranteed by the standard) if you avoid that bit.
Note that just trying it to see what happens will *NOT* address this one
because the whole point is that different systems might do things
differently. That's why the standard avoids it.
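A minimal sketch of that caveat (the variable names are mine): derive the
highest safe bit position from bit_size instead of hard-coding 31, and
leave the top bit alone.

integer :: key, maxbit

key    = 0
maxbit = bit_size(key) - 2    ! 30 for a 32-bit default integer; bit 31 is never touched
! ... restrict all ibset/ibclr/btest calls to positions 0 through maxbit ...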
--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain
And you really DON'T want to know what C does with it :-(
Yes, agreed. And make sure that shifts don't affect it.
Regards,
Nick Maclaren.
After incurring many self-inflicted wounds while maintaining a fair-sized
(20 or so) set of toggles for data items, I eventually figured
out how to lower my error rate. When the number of toggles was larger
(nearer 40), I used two words and had to use a naming convention to
keep track of which toggle word was in use.
There are three stages that work for me.
1. Set up some parameters to allow octal names for various values.
....
integer, parameter :: OCT1000 = 512
integer, parameter :: OCT2000 = 1024
2. Set up some meaningful names for the toggles using the octal names.
One of the earliest lessons was that using numerical values for the
toggles led to many mistakes and made it hell on wheels to change things.
The octal stuff helps prevent errors in setting up the values.
3. Do all testing and setting with some simple procedures. See below for
if ( set ( var, atr ) ) ...
call seton ( var, atr )
call setoff ( var, atr )
which were in a module so typing is easier.
function set ( var, atr ) ! test if attribute set
!
implicit none
!
logical :: set
integer :: var, atr
!
if ( mod ( var / atr, 2 ) == 1 ) then
set = Yes ! Yes/No are module parameters for .true./.false. (mentioned below)
else
set = No
end if
!
return
end
subroutine setoff ( var, atr ) ! set attribute off
!
implicit none
!
integer :: var, atr
!
if ( set ( var, atr ) ) then
var = var - atr
end if
!
return
end
subroutine seton ( var, atr ) ! set attribute on
!
implicit none
!
integer :: var, atr
!
if ( .not. set ( var, atr ) ) then
var = var + atr
end if
!
return
end
And make sure var starts at zero. Oh yes, I also have parameters to
rename .true. and .false. These routines have been around for a long time,
so nowadays it might make sense to index by a bit number rather than a
power-of-two value and to use the bit intrinsics rather than integer
arithmetic (a sketch of that version follows below). Even a lot of
setting and testing is unlikely to ever add up to more than a fraction of
a second compared to hours of arithmetic. Understandable code saves
erroneous runs, which more than makes up for minor overhead on successful
runs. Such micro-optimization is a waste of effort as it distracts from
serious algorithm concerns.
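For what it's worth, a sketch of the same three helpers written with the
standard bit intrinsics, indexed by a bit number rather than a power-of-two
value (the module and argument names are illustrative, not the original code):

module toggles
  implicit none
contains

  logical function set ( var, bit )      ! test whether a toggle is on
    integer, intent(in) :: var, bit
    set = btest ( var, bit )
  end function set

  subroutine seton ( var, bit )          ! switch a toggle on
    integer, intent(inout) :: var
    integer, intent(in)    :: bit
    var = ibset ( var, bit )
  end subroutine seton

  subroutine setoff ( var, bit )         ! switch a toggle off
    integer, intent(inout) :: var
    integer, intent(in)    :: bit
    var = ibclr ( var, bit )
  end subroutine setoff

end module toggles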
For my applications the toggles are either strictly internal or have to
be user understandable. Thus no i/o. The toggles are either set after
decoding input or used to construct readable output.
Dan :-)
>>I'd add a caveat that I think is a bigger one. Stay clear of the sign
>>bit. Keep it down to no more than 31 bits per integer - not 32. The
>>Fortran standard punts on the sign bit; it pretty much guarantees
>>nothing once you hit that.
The main reason for the restriction is that Fortran (like C) still
allows for sign magnitude and digit complement (ones complement
in binary) representation. In those cases, bad things can happen
when you use the sign bit, especially if your bits represent
negative zero.
I suppose some twos complement machines could trap on the most
negative twos complement integer, but that doesn't seem likely
(for any machine that I know of).
Note also that your program is necessarily not portable if
it requires 32 bit integers. (What is someone supposed to do
on a machine with a different INTEGER size?) It doesn't seem
that much more to say that it requires twos complement and
that it requires nothing funny to happen with the most negative
value.
> And you really DON'T want to know what C does with it :-(
> Yes, agreed. And make sure that shifts don't affect it.
Yes, you do have to watch out for shifts. But the usual C
solution is unsigned, which isn't a choice in Fortran.
-- glen
Er, only somewhat. It also allows for overflow checking, and
some compilers do just that. Experience from the 1970s was
that nearly half of array bounds errors started with an integer
overflow, and detecting those was of great benefit.
>I suppose some twos complement machines could trap on the most
>negative twos complement integer, but that doesn't seem likely
>(for any machine that I know of).
But the compiler can do it.
>> And you really DON'T want to know what C does with it :-(
>> Yes, agreed. And make sure that shifts don't affect it.
>
>Yes, you do have to watch out for shifts. But the usual C
>solution is unsigned, which isn't a choice in Fortran.
You know C tolerably well, but I doubt that even you know all
of the gotchas to do with using unsigned integers in C. Or
signed ones, for that matter :-(
Regards,
Nick Maclaren.
Dick Hendrickson
But not for storing single bits in the manner proposed by the OP.
| I suppose some twos complement machines could trap on the most
| negative twos complement integer, but that doesn't seem likely
| (for any machine that I know of).
|
| Note also that your program is necessarily not portable if
| it requires 32 bit integers. (What is someone supposed to do
| on a machine with a different INTEGER size?)
The most likely word size these days is something more than 32 bits.
Thus, using 32 bits is going to be pretty portable.
| It doesn't seem
| that much more to say that it requires twos complement and
| that it requires nothing funny to happen with the most negative
| value.
What machines made in the past 30 years do something "funny"
with the most negative value?
| > And you really DON'T want to know what C does with it :-(
| > Yes, agreed. And make sure that shifts don't affect it.
|
| Yes, you do have to watch out for shifts. But the usual C
| solution is unsigned, which isn't a choice in Fortran.
What machines made in the last 30 years do not involve the MS bit
in a logical shift? Or, put another way, what sign-magnitude machines
(integer arithmetic) do not involve the sign bit in a logical shift?
Then 16 is a good number to use.
But 32 also is good.
Done by the hardware, and nothing to do with any such restriction.
| and some compilers do just that. Experience from the 1970s was
| that nearly half of array bounds errors started with an integer
| overflow, and detecting those was of great benefit.
Detecting 100% of array bounds errors is significantly better,
and is provided as a language feature by such languages as PL/I and Ada.
| >I suppose some twos complement machines could trap on the most
| >negative twos complement integer, but that doesn't seem likely
| >(for any machine that I know of).
Have you ever come across a machine that did that?
Here's a basic description of the actual application I'm using bit
manipulation for:
I have a LOT of equations.
Each equation can have several terms in it, where each term has a
double precision coefficient of either val(1) or val(2).
For each equation, I'm keeping a default integer variable --> key <--
(32 bits for now), to track which coeff. goes with each term.
For each equation, I pass over each term of the equation.
For term #1:
I set bit #1 of key to 0 if val(1) is the coefficient to use
I set bit #1 of key to 1 if val(2) is the coefficient to use
For term #2:
I set bit #2 of key to 0 if val(1) is the coefficient to use
I set bit #2 of key to 1 if val(2) is the coefficient to use
etc., etc.
The bits are numbered from 0 to 31 (for this particular 32 bit integer
object). From what I've read recently, I'm semi-assuming that the
sign bit is the last bit, #31. However, it's not at all clear to me
that I should assume this at all. So, to be as safe as possible, I'm
only using bit positions 1 to 30. This has the added advantage that
bit #i aligns cleanly with equation term #i, so that I don't have to
worry about any off-by-one math to get things right. I realize that
this logic costs me two bits per key and will therefore only allow me
30 useful bits... thus... a maximum of 30 terms per equation. 30
terms is enough for my particular application.
Last night, I wrote several beta routines to do this in my application
and so far... it appears to be working fine and with no noticeable
loss of speed efficiency.
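A minimal sketch of the scheme just described (the names key, nterm,
use_val2, val and coeff are illustrative, not the actual routines):

program key_sketch
  implicit none
  integer          :: key, i, nterm
  double precision :: val(2), coeff
  logical          :: use_val2(30)       ! .true. means the term takes val(2)

  val         = (/ 1.0d0, 2.0d0 /)       ! the two possible coefficients
  nterm       = 4                        ! say this equation has 4 terms
  use_val2    = .false.
  use_val2(2) = .true.                   ! term 2 uses val(2), the rest use val(1)

  key = 0                                ! bit #i of key records term #i's choice
  do i = 1, nterm
     if ( use_val2(i) ) key = ibset ( key, i )
  end do

  do i = 1, nterm                        ! later: recover each term's coefficient
     if ( btest ( key, i ) ) then
        coeff = val(2)
     else
        coeff = val(1)
     end if
     print *, 'term', i, 'coefficient', coeff
  end do
end program key_sketch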
I've noticed that a few of you have mentioned problems if you "shift"
the bits. I can see where this could be problematic if you are
actually trying to read an integer value from the object after the
shift. However, it seems to me that this should not be a concern if
you are merely using the bits to track on/off or true/false data as I
am doing?
Dan
Depending on how you set bit 0, etc., you should be able to determine if
you're setting the low-order bit (an integer with only the low-order bit
set has the value 1) or the sign bit (on a twos-complement machine,
you'll get a very large negative number). If you follow Gordon's
advice, the mapping from integer values to bit numbers should be clear.
I think you would be safe to go wild and use 31 bits per word. But
that's just me.
Louis
But a "LOT" is bounded by 2^30 which is not all that large. This
all sounds like a hyper-sparse system where the number of values
the coefficients can take is very limited. The hyper seems to be
more of an issue than the sparse in your description.
What happens when you find you need three distinct values? Say zero
suddenly becomes an option.
My impression is this is simply too complicated for the stated problem.
I would think about using a short integer (1 byte) to keep track of
which value is in use. If the fraction of zeroes goes up, then you need
to look at the usual methods for sparse systems as well. If you are wasteful
and use one byte to record which coefficient is in use, then you will not
hit a wall at 30, which already seems to be a worry. It is called using
a short lookup table for the coefficients.
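A hedged sketch of that lookup-table idea (the names are illustrative, and
selected_int_kind(2) usually, though not necessarily, maps to a 1-byte kind):

program lookup_sketch
  implicit none
  integer, parameter :: i1 = selected_int_kind(2)   ! a small integer kind
  double precision   :: val(2), coeff
  integer(kind=i1)   :: which(40)                   ! one entry per term
  integer            :: i

  val      = (/ 1.0d0, 2.0d0 /)
  which    = 1_i1                    ! default every term to val(1)
  which(3) = 2_i1                    ! say term 3 uses val(2)

  i = 3
  coeff = val( which(i) )            ! a direct lookup, no bit arithmetic needed
  print *, coeff
end program lookup_sketch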
This seems to be a case where thinking about much harder versions of
your problem clarifies the simpler situation as well.
What are you going to do with these equations? That may have a
strong influence on what data structure it makes sense to use.
(A pet peeve is that too many discussions of fancy data
structures completely omit any discussion of what operations
are easy. One should start with operations and then get to data
structures, not the other way round. In extreme cases it makes sense
to have redundant data structures to make all operations easy.)
A cynic hears that as: the time is so short that even an inefficient
algorithm runs in very little absolute time. Just another way of asking
what the operations are that you perform on your "LOT" of equations.
> Depending on how you set bit 0, etc., you should be able to determine if
> you're setting the low-order bit (an integer with only the low-order bit
> set has the value 1) or the sign bit (on a twos-complement machine,
> you'll get a very large negative number). If you follow Gordon's
> advice, the mapping from integer values to bit numbers should be clear.
> I think you would be safe to go wild and use 31 bits per word. But
As long as you use the standard intrinsics to do your bit setting and
testing, the mapping from bits to integers is defined by the standard
independent of hardware issues. In fact, strictly speaking, it is
independent of whether the hardware even has bits (as opposed to trits
or some other thing unlikely to be observed in the "wild").
The standard gives as an example that IBSET(12,1) has the value 14.
That's basically from taking 12, which is 1100 in binary, and setting
bit number 1 (numbered from the right, with the rightmost bit being bit
number 0), to give 1110 binary, or 14 decimal as the result. This
example in the standard does *NOT* depend on anything about the physical
representation. In the handy and usual (as in universal) case where the
physical representation is binary, the mapping to the physical
representation is clear enough. But even if the physical representation
used BCD (to use an example somewhat more plausible than trits), the
standard says that IBSET(12,1) is still 14, even though the bits in a
BCD representation won't match up the same way. The bits in BCD for 12
would be 0001 0010 (that's a 1 for the first digit and a 2 for the
second), while BCD for 14 is 0001 0100. So IBSET(12,1) doesn't equate to
just setting a bit in BCD. It sets one bit and clears another. If you
have a mythical Fortran compiler for a trinary machine, that 12 decimal
would be 110 in trinary, while the 14 decimal would be 112 trinary, with
no bits in sight.
You probably won't run into a representation using BCD for Fortran
integers today (much less trits), though it is at least vaguely possible
that such a thing could happen in the future at least for a non-default
kind of integer. I give the example just to illustrate that you don't
have to worry about things like what order the hardware might number
bits in. As long as you refer to bit numbers only through the standard
Fortran intrinsics, they will translate for you as needed such that bits
numbered 0 through 30 inclusive are "safe" to use in a 32-bit word.
More precisely, they are safe for any integer kind where bit_size
returns 32 or greater. That covers even oddball cases such as having a
32-bit word, but only 16 bits of it being used. (Compilers like that
used to exist, but you won't find any current ones; yes, I know that you
can probably still run some of those 30-year-old compilers on at least
some current hardware, but that doesn't make them count as current
compilers in my book and they sure don't support the f90 standard, which
is the first one to integrate the bit intrinsics into the base Fortran
standard). If you did happen to have such a compiler, then bit_size
would return 16. By definition, bit_size returns the number of bits that
can be used with the bit intrinsics.
If bit_size returns 32 (as it probably will unless you happen to be
using a compiler where it returns 64), that means you can use bits
numbered 0 through 31 with the bit intrinsics. It also means that the
bit numbered 31 is the "problematic" one. I'd think of bit number 31 as
the sign bit, but the standard doesn't strictly say that. It just says
that all bets are off as to the interpretation of values with that bit
set. It will be the bit numbered 31 - not the bit numbered 0. Again, any
possible hardware bit numbering or ordering doesn't matter and will be
translated as needed.
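To make that concrete, here is a minimal check one could run (not from the
original posts); the comments show what the standard's bit model requires:

program bit_model_check
  implicit none
  integer :: k

  print *, ibset(12, 1)      ! 14: bit 1 of binary 1100 set, giving 1110
  print *, btest(12, 2)      ! T : bit 2 of binary 1100 is already set
  print *, ibclr(14, 1)      ! 12: clearing bit 1 of 1110 gives 1100 back

  k = 0
  print *, bit_size(k)       ! the number of bits in the model, typically 32
end program bit_model_check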
Dick's got the heart of it, but this is all standard Market Research
data processing, where iword3 = IAND(iword1,iword2) and the rest of the
binary operations are provided as 32-bit, sign-disregarding
subroutines. The technique is then combined with data access via
direct access to blocks of data, which gets at the needed 32-bit word
directly. This is how tausystems software actually processes data in the
TAU-SAP suites.
Provided that you use solely the bit intrinsics to access the data
then, yes, Fortran defines what happens. However, that may not
be quite what is provided by the hardware, which is something that
risks compiler bugs and/or gotchas in the standard. As far as I
can see, this area is moderately free of gotchas, though there is
at least one.
That is that integer constants and I/O editing (even the B, O and
Z descriptors) use the arithmetic integer model and not the bit
model. Therefore no bit pattern that might be negative should be
allowed to go through those, or the results are processor dependent
which may include undefined behaviour.
Regards,
Nick Maclaren.
>I've noticed that a few of you have mentioned problems if you "shift"
>the bits. I can see where this could be problematic if you are
>actually trying to read an integer value from the object after the
>shift. However, it seems to me that this should not be a concern if
>you are merely using the bits to track on/off or true/false data as I
>am doing?
Of course not. Take no notice of the fearmongers.
Is that even *remotely* likely? It is required that the size of an integer
be the same as that of single precision real.
For a 32-bit machine, that would mean only 7 decimal digits.
But the real problem is indexing an array. With BCD? I think not.
Unlikely. An integer is an integer. An output routine is going to be
able to print any integer. The only thing about printing an integer that
has been configured to represent bits is that negative values
require interpretation when printed as if the pattern *were* an integer.
It's straightforward to reconstruct (from decimal integer output)
the bit pattern to which it corresponds.
However, it's much simpler to rig up a procedure to print
the individual bits of an integer.
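One possible sketch of such a procedure, using btest so that the sign bit
needs no special treatment (the name print_bits is just illustrative):

subroutine print_bits ( k )
  implicit none
  integer, intent(in)        :: k
  character(len=bit_size(k)) :: line
  integer                    :: i, n

  n = bit_size(k)
  do i = 0, n - 1                     ! bit 0 ends up rightmost in the output
     if ( btest ( k, i ) ) then
        line(n-i:n-i) = '1'
     else
        line(n-i:n-i) = '0'
     end if
  end do
  write (*,'(a)') line
end subroutine print_bits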
> Is that even *remotely* likely? It is required that the size of an integer
> be the same as that of single precision real.
> For a 32-bit machine, that would mean only 7 decimal digits.
> But the real problem is indexing an array. With BCD? I think not.
Historically, there were many machines with BCD addressing.
The IBM 650 is the one I think of first (not that I ever used one),
and there is even a Fortran compiler for it.
The 7070 uses the two-of-five code that I mentioned previously,
as an upgrade, but not instruction set compatible, to the 650.
And the 1620, a BCD machine with BCD addressing, was very popular
for Fortran programming.
It does seem strange today, but presumably didn't 50 years ago.
K&R C allows for decimal representation, but ANSI removed it.
Fortran still allows any base greater than one, and the bit
operations still have to work.
It is likely that no decimal machines have a Fortran 90 or
later compiler, though the standard does allow for it.
-- glen
On 2011-04-27 06:44:12 -0400, robin said:
> Is that even *remotely* likely? It is required that the size of an integer
> be the same as that of single precision real.
> For a 32-bit machine, that would mean only 7 decimal digits.
> But the real problem is indexing an array. With BCD? I think not.
Both the default real and the default integer are stored using
one numeric storage unit.
That makes no statement of how the default integer and default real
are stored. It is solely a statement of the size of the storage unit
containing them.
One may, of course, make statements of one's belief of what
a processor ought to do to be commercially successful, but,
as far as the standard is concerned, that is a different discussion.
--
Cheers!
Dan Nagle
> "Richard Maine" <nos...@see.signature> wrote in message
news:1k08bw0.5a0lsu26q4c8N%nos...@see.signature...
[about the physical representation of integers]
> | But even if the physical representation used BCD
>
> Is that even *remotely* likely?
I believe I already answered that a few paragraphs later in the part that
you didn't quote where I said
>> You probably won't run into a representation using BCD for Fortran
>> integers today (much less trits), though it is at least vaguely
>> possible that such a thing could happen in the future at least for a
>> non-default kind of integer.
To rephrase it using the specific words of your question, no I don't
consider that it is even remotely likely in today's compilers. As Glen
noted, compilers of the past are a different matter; the 1620 that Glen
mentioned is a good counterexample to many things that people tend to
assume are always the case.
But also note my comment about the future. Yes, I do consider it at
least remotely likely that you might find future compilers that have
integers with BCD representations, particularly for a non-default kind.
I hesitate to guess how high the likelihood is, but "remotely" isn't a
very high bar; I'd say it passes that one.
> It is required that the size of an integer
> be the same as that of single precision real.
> For a 32-bit machine, that would mean only 7 decimal digits.
That computation seems to make various assumptions, none of which are
particularly well justified. No, the size of an integer is not required
to be the same as that of a single precision real - anyway not any more.
That was true in f77. As of f90, the standard allows for multiple kinds
of integers and that statement applies only to default integer kind. I
was *NOT* restricting my comments to default integer kind. Note that
I even specifically mentioned non-default kind when I suggested the
possibilities for the future. If I recall correctly, f2008 even requires
that there be an integer kind (not necessarily default) that can't
physically be represented in 32 bits (obviously conceived as a 64-bit
integer, but not required to be exactly that).
The bit about 32-bit machines seems somewhat of a non-sequitur. I
specifically mentioned that I was talking about being safe on possibly
unusual architectures. It doesn't take very unusual at all to have
machines that aren't 32 bit ones. I've used plenty of such. Also, the
size of Fortran single precision real has very little to do with whether
or not you have a "32-bit machine". I have used Fortran compilers that
have 64-bit single precision reals on 32-bit machines; multiple such
compilers exist today. I have also used Fortran compilers that have
32-bit single precision reals on 64-bit machines; that's at least
getting close to the most common case today - far from a rarity. And as
Dan noted, the standard's restriction on the size of default integer and
default real related only to the storage used. It does not translate
directly to computations of range. I have used plenty of compilers with
counterexamples to that - for example compilers for 16-bit (or even
8-bit) machines where single precision real used 32 bits, and default
integer used the same 32 bits for storage in order to meet the
requirements of the standard, but only 16 bits of the integer were
relevant to its value.
Not to speak of the fact that having only 7 decimal digits for an
integer doesn't sound so obviously ludicrous that it couldn't be
remotely possible. After all, the above-mentioned compilers that used 16
bits for default integers didn't support integers nearly as large as 7
decimal digits; they had only (approximately) 4 and a half decimal
digits. I tended to use compiler options to select larger range for the
default integers for my applications, but the compilers are an existence
proof for default integers with less than 7 decimal digits.
(I normally wouldn't have even seen Robin's post, but I was curious
about what Glen and Dan were replying to. I probably should have
resisted the temptation to reply. My normal killfile entry is to help me
avoid such temptations. Don't expect to see further replies from me in
this subthread, though.)
I'm not talking about 60 years ago.
I'm talking about the present, 2011.
| The IBM 650 is the one I think of first (not that I ever used one),
| and there is even a Fortran compiler for it.
|
| The 7070 uses the two-of-five code that I mentioned previously,
"uses"? Used, I think. That's a 1960s machine.
| as an upgrade, but not instruction set compatible, to the 650.
|
| And the 1620, a BCD machine with BCD addressing, was very popular
| for Fortran programming.
Another 1960s machine. Still not relevant.
Take a look at the intrinsics.
BIT_SIZE, for example, tells you the number of bits in the INTEGER argument.
BIT, may I remind you, is a contraction of "BINARY DIGIT".
| One may, of course, make statements of one's belief of what
| a processor ought to do to be commercially successful, but,
| as far as the standard is concerned, that is a different discussion.
See above.
(snip, I wrote)
> | The IBM 650 is the one I think of first (not that I ever used one),
> | and there is even a Fortran compiler for it.
> | The 7070 uses the two-of-five code that I mentioned previously,
> "uses"? Used, I think. That's a 1960s machine.
The architecture exists, even if no implementations do, but
in many cases there is at least one in a museum somewhere.
-- glen
Obviously it does [apply to default integer kind], as it is well-known
that current Fortran compilers typically offer two or more integer kinds,
as they typically offer two or more real kinds, and have done so since F90.
| I was *NOT* restricting my comments to default integer kind.
I was.
| Note that I even specifically mentioned non-default kind when I suggested the
| possibilities for the future. If I recall correctly, f2008 even requires
| that there be an integer kind (not necessarily default) that can't
| physically be represented in 32 bits (obviously conceived as a 64-bit
| integer, but not required to be exactly that).
|
| The bit about 32-bit machines seems somewhat of a non-sequitur.
It was an example to illustrate the problem that it would cause with a
32-bit default integer. Many programs wouldn't work with a
precision of 7 decimal digits if BCD integers were used,
together with 32-bit reals.
| I specifically mentioned that I was talking about being safe on possibly
| unusual architectures. It doesn't take very unusual at all to have
| machines that aren't 32 bit ones. I've used plenty of such. Also, the
| size of Fortran single precision real has very little to do with whether
| or not you have a "32-bit machine".
Whatever is the size of default REAL, that is the size of default INTEGER.
| I have used Fortran compilers that
| have 64-bit single precision reals on 32-bit machines; multiple such
| compilers exist today. I have also used Fortran compilers that have
| 32-bit single precision reals on 64-bit machines; that's at least
| getting close to the most common case today - far from a rarity. And as
| Dan noted, the standard's restriction on the size of default integer and
| default real related only to the storage used. It does not translate
| directly to computations of range. I have used plenty of compilers with
| counterexamples to that - for example compilers for 16-bit (or even
| 8-bit) machines where single precision real used 32 bits, and default
| integer used the same 32 bits for storage in order to meet the
| requirements of the standard, but only 16 bits of the integer were
| relevant to its value.
30-40 years ago when microcomputers were introduced,
such things existed, and probably existed until about 20 years ago.
For professional use, including on the PC, anything less than
32 bits for INTEGER would not be acceptable.
| Not to speak of the fact that having only 7 decimal digits for an
| integer doesn't sound so obviously ludicrous that it couldn't be
| remotely possible. After all, the above-mentioned compilers that used 16
| bits for default integers didn't support integers nearly as large as 7
| decimal digits; they had only (approximately) 4 and a half decimal
| digits.
Do you really think that something mickey-mouse like that
would be introduced now?
| I tended to use compiler options to select larger range for the
| default integers for my applications, but the compilers are an existence
| proof for default integers with less than 7 decimal digits.
Again, that's irrelevant, archaic, and not likely to happen in the 21st century.
It would have to be currently working, if it existed.
But in any case, "used" is appropriate.
(snip)
> Take a look at the intrinsics.
> BIT_SIZE, for example, tells you the number of bits in the
> INTEGER argument. BIT, may I remind you, is a contraction of
> "BINARY DIGIT".
Take a look at the standard, especially the section "Bit model."
Then look at the section "Numeric models" and notice the difference.
Especially notice the difference in when each is applicable.
-- glen
> Do you really think that something mickey-mouse like that
> would be introduced now?
> | I tended to use compiler options to select larger range for the
> | default integers for my applications, but the compilers are an existance
> | proof for default integers with less than 7 decimal digits.
> Again, that's irrelevant, archaic, and not likely to happen in
> the 21st century.
It is extremely unlikely that a base 7 computer will be introduced
commercially, and even less likely that one will succeed in the
market, but the standard still allows for it.
-- glen
On 2011-04-28 02:07:18 -0400, robin said:
>
> Take a look at the intrinsics.
>
> BIT_SIZE, for example, tells you the number of bits in the INTEGER argument.
> BIT, may I remind you, is a contraction of "BINARY DIGIT".
bit_size() tells you how many bits are in the bit model
supported by its argument. It makes no statement regarding
the size of a numeric storage unit.
> | One may, of course, make statements of one's belief of what
> | a processor ought to do to be commercially successful, but,
> | as far as the standard is concerned, that is a different discussion.
>
> See above.
See above.
--
Cheers!
Dan Nagle
I just said that, for an integer argument.
| It makes no statement regarding
| the size of a numeric storage unit.
Ellis, Phillips, Lahey: "Fortran 90 Programming" p. 703
says "BIT_SIZE(I) ... Returns the number of bits in the integer I".
| > | One may, of course, make statements of one's belief of what
| > | a processor ought to do to be commercially successful, but,
| > | as far as the standard is concerned, that is a different discussion.
| >
| > See above.
|
| See above.
See above.
Gee, robin, don't go back to your tired old ways
of demonstrating to the world your lack of understanding ...
On 2011-04-29 01:00:12 -0400, robin said:
> Ellis, Phillips, Lahey: "Fortran 90 Programming" p. 703
> says "BIT_SIZE(I) ... Returns the number of bits in the integer I".
But what does "in the integer" mean?
Please see 13.3 Bit model page 317 of 10-007r1.
Then skip ahead to 13.7.32 bit_size( i) on page 336.
Note the description: "Number of bits in integer model 13.3"
If you want the storage size of an object, try reading 13.7.160
storage_size( a [, kind]) on page 390.
Note the description: "Storage size in bits"
Alternatively, one could read 13.8.2.18 numeric_storage_size.
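A minimal sketch of the distinction (storage_size requires a Fortran 2008
compiler): on common compilers both lines print 32 for a default integer,
but the standard defines them through different models, as the sections
cited above spell out.

program size_check
  implicit none
  integer :: i

  print *, bit_size(i)          ! bits in the bit model for this integer kind
  print *, storage_size(i)      ! bits of storage occupied by the object (F2008)
end program size_check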
Now we'll all get to see how long it takes robin
to demonstrate either that he knows the difference, or doesn't.
--
Cheers!
Dan Nagle
| Gee, robin, don't go back to your tired old ways
| of demonstrating to the world your lack of understanding ...
Seems that you are still the same old arrogant insulting self.
| On 2011-04-29 01:00:12 -0400, robin said:
|
| > Ellis, Phillips, Lahey: "Fortran 90 Programming" p. 703
| > says "BIT_SIZE(I) ... Returns the number of bits in the integer I".
|
| But what does "in the integer" mean?
If you don't know by now, you never will.
Hint: "bits" = "binary digits".