
how to declare doubles in f95


Eric

Jan 28, 2009, 8:18:44 PM
I have this fortran program that's kind of a mish-mash of 77 and 90/95.
The original author used real*8 to declare his floats (I'm guessing the
intent was for them to be double precision floats). Gfortran, using the
-pedantic and -std=f95 flags, complains about real*8.

How do you declare a double precision float in "proper" f95 style? I
understand there's a KIND= modifier which could be used, but it sounds
like the meaning of any given KIND is compiler dependent, i.e.
non-portable. I want portable.

I'm trying to get this program to compile on gfortran and g95. (G95
seems to be able to handle anything, but gfortran seems really fragile.)

TIA
eric

e p chandler

Jan 28, 2009, 8:42:32 PM

This is more portable.
----- start text -----
integer, parameter :: dp = kind(1.0d0)  ! kind value of double precision

real(dp) :: a, b, c

a = 1
b = 2
c = 3.14159265358979323846_dp  ! the _dp suffix keeps the constant at kind dp

print *, 'pi=', c
print *, 'pi/2=', c/b

end
---- end text ----

Also see selected_real_kind(). Kind numbers themselves are not
portable but the results of an inquiry function should be.

- e

Richard Maine

Jan 28, 2009, 8:44:37 PM
Eric <einaz...@yahoo.com> wrote:

> How do you declare a double precision float in "proper" f95 style? I
> understand there's a
> KIND= modifier which could be used but it sounds like the meaning of
> any given KIND is
> compiler dependent, i.e. non-portable. I want portable.

Well, if double precision is what you really want, then the declaration
"double precision" does that just fine. But wanting double precision
isn't as portable as it might be in terms of what precision it gets you.
More often, one wants some precision of one's choice rather than whatever
the compiler happens to choose for double. Then you want to use the
KIND facility.

KIND is *VERY* portable, at any rate more so than any of the
alternatives. It was designed for portability. But that's only if you
use it portably. Specifying a particular hard-wired kind value (such as
kind=8) is what is non-portable.

For the best portability, you should use the selected_real_kind
intrinsic to select a kind value based on your precision requirements.
Then you'll want to save that value as a named constant (aka parameter)
for convenience.

Something like

integer, parameter :: r8_kind = selected_real_kind(12,30)

which I just copied from my precision module. You will want to put it in
a module so that you only have to have that long-winded mess once.
Thereafter, use something like

real(r8_kind) :: whatever
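
Putting that together, a minimal sketch of such a module (the module and
program names here are just illustration, not my actual code):

module precision_mod
  implicit none
  integer, parameter :: r8_kind = selected_real_kind(12, 30)
end module precision_mod

program demo
  use precision_mod
  implicit none
  real(r8_kind) :: whatever
  ! literals get the same kind via the _r8_kind suffix
  whatever = 1.0_r8_kind / 3.0_r8_kind
  print *, whatever
end program demo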

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain

Ragu

Jan 28, 2009, 10:35:17 PM
From what I know, here's my 2c.

You could use

integer, parameter, public :: sp_k = kind(1.0) ! Default Type of Real
integer, parameter, public :: dp_k = selected_real_kind(2*precision(1.0_sp_k))

or you could use
integer, parameter, public :: dp_k = selected_real_kind(p = 15, r = 300)

The p and r form of the definition is more specific, meant for x64 and
some special architecture machines. Others should be able to explain the
details in a better way.

Larry Gates

Jan 29, 2009, 12:05:38 AM

Aren't implementations required to have both a single and double precision?
--
larry gates

Well, I think Perl should run faster than C. :-)
-- Larry Wall in <1998012003...@wall.org>

Richard Maine

Jan 29, 2009, 12:27:31 AM
Larry Gates <la...@example.invalid> wrote:

> Aren't implementations required to have both a single and double precision?

Yes. Indeed, that seemed rather implicit in all the preceding posts,
which mentioned how to declare double precision without any caveats
about whether or not it existed. However, I fail to see the relevance of
this to anything prior in the thread. I suppose I could make some
guesses as to what you think the relevance is, but none of the guesses I
can come up with seem particularly good. Most of the guesses I might try
are answered by what I already said, so rather than repeat it, I'll
mostly just refer back to it.

I'll try a bit more long-winded version of the reply to one of the
guesses.

Note in particular, that although implementations are required to have
both single and double precision, there are almost no requirements on
what precisions those must be. The only such requirement is that double
precision must have more precision than single. It doesn't have to be
twice as much (double has to use twice as much storage in storage
association contexts, but that doesn't constitute a requirement related
to precision - indeed counterexamples exist). Neither single nor double
have to have any particular precision that you might have in mind. Nor
do they have to have any particular relationship to the precisions
natively supported by the hardware in question; practical considerations
often come into play in that regard, but the Fortran standard doesn't.

Larry Gates

Jan 29, 2009, 1:10:57 AM
On Wed, 28 Jan 2009 21:27:31 -0800, Richard Maine wrote:

> Note in particular, that although implementations are required to have
> both single and double precision, there are almost no requirements on
> what precisions those must be. The only such requirement is that double
> precision must have more precision than single. It doesn't have to be
> twice as much (double has to use twice as much storage in storage
> association contexts, but that doesn't constitute a requirement related
> to precision - indeed counterexamples exist). Neither single nor double
> have to have any particular precision that you might have in mind. Nor
> do they have to have any particular relationship to the precisions
> natively supported by the hardware in question; practical considerations
> often come into play in that regard, but the Fortran standard doesn't.

Alright. I guess I was thinking about what numbers you would ask for as
far as precision and range if you wanted dp on other machines. I think of
the arrangement I have, a middle of the road desktop computer with xp, as
the smallest computer fortran runs on, but apparently the standard allows
flexibility downward.

An srk(12,30) call could very well in these meager circumstances return a
negative number. I think your first advice to use double precision might
be the "most portable." But what does it avail you to have dp if you don't
have the tag to hang on your constants?

Anyways, for my two cents, srk(12, 30) works and gives you a means to make
your compiler render your constants dp as well.
--
larry gates

Sometimes we choose the generalization. Sometimes we don't.
-- Larry Wall in <1997090323...@wall.org>

Richard Maine

Jan 29, 2009, 2:37:04 AM
Larry Gates <la...@example.invalid> wrote:

> Alright. I guess I was thinking about what numbers you would ask for as
> far as precision and range if you wanted dp on other machines.

That seems backwards. If you want double precision, then ask for double
precision (which is trivial to do). If you want a particular precision
and range instead, then ask for that precision and range. This doesn't
seem like rocket science to me.

Why would you do the Rube Goldberg thing of trying to figure out what
range and precision to specify double precision if what you want is
double precision rather than a range and precision? Yes, you could do
that, but darned if I know why you would.

It all comes back to actually specifying your requirements. Are your
requirements to use whatever the compiler has for double precision or
are they to use some specific precision? Those are two different
requirements, both possibly meaningful, depending on the situation.

> I think of
> the arrangement I have, a middle of the road desktop computer with xp, as
> the smallest computer fortran runs on, but apparently the standard allows
> flexibility downward.

Um...

1. Fortran has run on machines many orders of magnitude smaller than
that. That's a monster machine compared to even the biggest ones around
when Fortran first came about.

2. Size of the machine has little or nothing to do with Fortran
floating point precision. First, size of the machine doesn't have a lot
to do with the machine's hardware floating point (if any) precision.
Second, see previous comments about Fortran precisions not necessarily
having anything to do with hardware precisions. I have run Fortran
programs on systems that didn't have hardware floating point at all.



> An srk(12,30) call could very well in these meager circumstances return a
> negative number. I think your first advice to use double precision might
> be the "most portable."

I would disagree with that. In most circumstances, I think that a
specified precision requirement is a better statement of the
requirements. Note that I said "statement of requirements". Requirements
don't usually depend on what system you are using. That's backwards. You
start with the requirements. Then if the system can't meet the
requirements, it is inadequate to the task. Or perhaps you reconsider
whether you had the requirements right. But you don't start by
specifying requirements in terms of whatever the machine in front of you
happens to support - at least not usually.

It isn't "portable" to get wrong answers because you are running with
insuficient precision for the application. On the other end, it isn't
usually a good idea to use twice the precision that you need just
because that's what double precision gets you on some system. For
today's systems, that's a more realistic scenario - that you might
specify double precision expecting to get 64 bits and find that instead
it gets you 128.

> But what does it avail you to have dp if you don't
> have the tag to hang on your constants?

I don't understand your comment about not having a "tag to hang on your
constants." It sounds like you don't think you can portably determine
the appropriate kind value for double precision. On the contrary, that
is trivial to do. See the kind intrinsic. Kind(0.0d0) is the idiomatic
way to determine that kind value.

glen herrmannsfeldt

Jan 29, 2009, 4:36:56 AM
Richard Maine <nos...@see.signature> wrote:
> Larry Gates <la...@example.invalid> wrote:

>> Alright. I guess I was thinking about what numbers you would ask for as
>> far as precision and range if you wanted dp on other machines.

> That seems backwards. If you want double precision, then ask for double
> precision (which is trivial to do). If you want a particular precision
> and range instead, then ask for that precision and range. This doesn't
> seem like rocket science to me.

I agree. It seems to me more often that you write the program
in double precision and then state the set of input values that
the program works well with. (Sometimes single precision, but
that is rare.) That is, for whole programs, not necessarily
so for individual subroutines.



> Why would you do the Rube Goldberg thing of trying to figure out what
> range and precision to specify double precision if what you want is
> double precision rather than a range and precision? Yes, you could do
> that, but darned if I know why you would.

I suppose if machines varied a lot in available precision, but
mostly they don't. (Unless you run into a CDC 7600 or Cray-1.)

Sometimes it would seem nice to have variable precision, where
you could specify the number of bits (digits) you needed and
the hardware would supply it.

It seems that there is some discussion as 64 bit machines get
more popular, that single precision should be 64 bits and
double 128 bits. But only discussion.



> It all comes back to actually specifying your requirements. Are your
> requirements to use whatever the compiler has for double precision or
> are they to use some specific precision? Those are two different
> requirements, both possibly meaningful, depending on the situation.

(snip)

> 1. Fortran has run on machines many orders of magnitude smaller than
> that. That's a monster machine compared to even the biggest ones around
> when Fortran first came about.

It seems that 4K (36 bit words) for Fortran I on the 704 was enough.

I believe the smallest I ever ran a Fortran compiler on was
56K bytes on an LSI-11.


> 2. Size of the machine has little or nothing to do with Fortran
> floating point precision. First, size of the machine doesn't have a lot
> to do with the machine's hardware floating point (if any) precision.
> Second, see previous comments about Fortran precisions not necessarily
> having anything to do with hardware precisions. I have run Fortran
> programs on systems that didn't have hardware floating point at all.

Well, a smaller machine gives more incentive to use single precision
when it is enough, but otherwise yes.



>> But what does it avail you to have dp if you don't
>> have the tag to hang on your constants?

> I don't understand your comment about not having a "tag to hang on your
> constants." It sounds like you don't think you can portably determine
> the appropriate kind value for double precision. On the contrary, that
> is trivial to do. See the kind intrinsic. Kind(0.0d0) is the idiomatic
> way to determine that kind value.

If you don't like D0.

-- glen

Ron Shepard

Jan 29, 2009, 10:22:48 AM
In article <1iua0ko.lz5o02c9847kN%nos...@see.signature>,
nos...@see.signature (Richard Maine) wrote:

> For
> today's systems, that's a more realistic scenario - that you might
> specify double precision expecting to get 64 bits and find that instead
> it gets you 128.

Not just today's systems, this is the way many computers have worked in
the past, 64-bit words for single precision, and 128-bit words for
double precision. The same is true for 60 and 120 bit word machines,
which have been common in the past. And if the 128-bit arithmetic is
software emulated, then there is a huge, 10x to 20x, penalty for using
that arithmetic if single precision is all you really wanted in the
first place.

$.02 -Ron Shepard

nm...@cam.ac.uk

Jan 29, 2009, 10:32:42 AM
In article <ron-shepard-D7D3...@news60.forteinc.com>,

"Many"? And "common"? Not really. The only ones that I can think
of were CDCs and Crays, all of which were extremely specialised
supercomputers that sold in very small numbers. What else were you
thinking of?

Ones using 48 bits for single and 96 for double were probably more
common. Once upon a time ....


Regards,
Nick Maclaren.

Ron Shepard

Jan 29, 2009, 1:39:26 PM
In article <glsi6q$r0a$1...@smaug.linux.pwf.cam.ac.uk>, nm...@cam.ac.uk
wrote:

> "Many"? And "common"? Not really. The only ones that I can think
> of were CDCs and Crays, all of which were extremely specialised
> supercomputers that sold in very small numbers. What else were you
> thinking of?

There were also the fortran programmable attached processors that were
fairly common in the 80's. These included the FPS-164 and FPS-264
machines which I used. Certainly if you count every computational
scientist and engineer who used a CDC, Cray, ETA, SCS, or one of these
attached processors in the 80's, you would get a high percentage,
probably over 90% of us. Not only were these machines sold to
individual groups and local departments, but there were also a number of
national supercomputer centers (NSF, DOE, DoD) which gave access to
almost everyone who applied. If that >90% estimate is correct, then
"common" would be the correct description, right? My feeling is that
scientists in the UK (your address) had wide access to these kinds of
machines earlier than they did here in the US.

> Ones using 48 bits for single and 96 for double were probably more
> common. Once upon a time ....

On the other hand, I don't know anyone who used a machine like this. I
think Harris computers used 3-byte and 6-byte data, which is almost what
you are claiming, but I think there were fewer of these machines than
the types described above, and they were almost certainly less commonly
used. There were no national supercomputer centers based on Harris
computers, for example. I personally never used one, but I exchanged
code with collaborators who did, and wrestled with their integer*3 and
real*6 declarations.

$.02 -Ron Shepard

Craig Powers

Jan 29, 2009, 1:56:42 PM
Eric wrote:
> I'm trying to get this program to compile on gfortran and g95. (G95
> seems to be able to handle anything but gfortran seems really fragile.)

If you're using the -pedantic and -std=f95 flags, of course it's going
to be picky. You're asking it to be picky.

If by "fragile" you mean something other than that it's being picky,
e.g. that you're getting errors/crashes, then perhaps there are bugs
that need to be reported (depending on which version you're using...
4.1.2 is in several distributions, but the current development version
is 4.4 and the most recent stable release is 4.3).

nm...@cam.ac.uk

Jan 29, 2009, 1:58:43 PM
In article <ron-shepard-1EBB...@news60.forteinc.com>,

Ron Shepard <ron-s...@NOSPAM.comcast.net> wrote:
>
>> "Many"? And "common"? Not really. The only ones that I can think
>> of were CDCs and Crays, all of which were extremely specialised
>> supercomputers that sold in very small numbers. What else were you
>> thinking of?
>
>There were also the fortran programmable attached processors that were
>fairly common in the 80's. These included the FPS-164 and FPS-264
>machines which I used. Certainly if you count every computational
>scientist and engineer who used a CDC, Cray, ETA, SCS, or one of these
>attached processors in the 80's, you would get a high percentage,
>probably over 90% of us. Not only were these machines sold to
>individual groups and local departments, but there was also a number of
>national supercomputer centers (NSF, DOE, DoD) which gave access to
>almost everyone who applied. If that >90% estimate is correct, then
>"common" would be the correct description, right? My feeling is that
>scientists in the UK (your address) had wide access to these kinds of
>machines earlier than they did here in the US.

90% of who?

Most of the coprocessor compilers I saw took a 'conventional' view
of precision, but I didn't see many as those coprocessors never
really took off.

>> Ones using 48 bits for single and 96 for double were probably more
>> common. Once upon a time ....
>
>On the other hand, I don't know anyone who used a machine like this. I
>think Harris computers used 3-byte and 6-byte data, which is almost what
>you are claiming, but I think there were fewer of these machines than
>the types described above, and they were almost certainly less commonly
>used. There were no national supercomputer centers based on Harris
>computers, for example. I personally never used one, but I exchanged
>code with collaborators who did, and wrestled with their integer*3 and
>real*6 declarations.

The ICL 1900 range was perhaps the main one, but nowhere near the only
one. I think it took that word size from Ferranti.


Regards,
Nick Maclaren.

Richard Maine

Jan 29, 2009, 2:12:50 PM
Ron Shepard <ron-s...@NOSPAM.comcast.net> wrote:

> In article <glsi6q$r0a$1...@smaug.linux.pwf.cam.ac.uk>, nm...@cam.ac.uk
> wrote:
>
> > "Many"? And "common"? Not really. The only ones that I can think
> > of were CDCs and Crays, all of which were extremely specialised
> > supercomputers that sold in very small numbers. What else were you
> > thinking of?
>

> Certainly if you count every computational
> scientist and engineer who used a CDC, Cray, ETA, SCS, or one of these
> attached processors in the 80's, you would get a high percentage,
> probably over 90% of us.

It sure includes me. I considered CDCs the premier scientific machines
of the late 70s and early 80s. I used them, as did a large fraction of
the scientific users I dealt with. I could also add a few other brands,
admittedly lesser known, to the above list (Elxsi, for one).

**HOWEVER...***

The subject was the precision used by Fortran compilers - not hardware.
Since I'm the one who initially made the statement in question, I can be
pretty confident of that. See my prior posts where I mentioned,
repeatedly, and more than once, that the two are not the same thing.
Yes, Fortran compilers today where single precision is or can be 64 bits
are certainly both "many" and "common". One of the reasons for this is
to accommodate the many scientific programs written in the era where
machines like the CDCs et al were widely used for scientific work.
Mentioning that they sold in small numbers is misleading. *ALL*
computers of at least the early part of that era sold in small numbers.
The NASA facility where I worked only had one computer, period. It was a
CDC. (I haven't tried to count accurately, but they presumably have a
few thousand computers now, even by a conservative definition of what
constitutes a "computer"). CDCs and Crays were used for a major fraction
of the scientific programming work. Programs written for these machines
often assumed that single precision (60 or 64 bits) was adequate.

It became common practice for compilers for other machines to have at
least an option to make single precision be 64 bits in order to
facilitate porting of such code. It is such a common option that people
today continue using those options and writing code that assumes single
precision is 64 bits, even if they have never seen anything like a CDC.
To my knowledge, a majority, or at least a large fraction, of today's
compilers have such options. Questions relating to this come up quite
regularly on this newsgroup.

Thus, if one really wants to put it in terms of machines, add such
things as current PCs, Macs, Suns,.... well pretty much any machine you
are likely to find today.

Gordon Sande

Jan 29, 2009, 2:20:38 PM

Burroughs was the B in BUNCH. The others were Univac, NCR, CDC and Honeywell.

Burroughs was 48 bit with a tagged architecture, so there were several more
bits for the tag field. Variable segment size allowed for much higher
virtual-to-real ratios than the fixed page size guys. Automatic subscript
checking on the array descriptors. Pointer-number based memory protection.
Etc, etc... Technically nice but did not win the commercial battle. Like
KDF9 in design. Like many in not surviving.

Burroughs Fortran (an Algol behind the curtains!) was a "conformance
stress test" for the Fortran standard, as it was very fussy about things
that others ignored or even allowed as major violations. Often a big
shock to those who thought they were standard and portable.


> Regards,
> Nick Maclaren.


nm...@cam.ac.uk

Jan 29, 2009, 2:35:03 PM
In article <1iuawnt.17aehjr1dmhi26N%nos...@see.signature>,

Richard Maine <nos...@see.signature> wrote:
>
>It sure includes me. I considered CDCs the premier scientific machines
>of the late 70s and early 80s. I used them, as did a large fraction of
>the scientific users I dealt with. I could also add a few other brands,
>admittedly lesser known, to the above list (Elxsi, for one).

However, may I point out that the vast majority of statistical (and
many other) packages of that era were written in Fortran, and very
few were even ported to those machines?

Even by the late 1960s, CDCs and a few others were being classed as
"supercomputers", as distinct from the run-of-the-mill scientific
or general purpose computers.

>**HOWEVER...***
>
>The subject was the precision used by Fortran compilers - not hardware.
>SInce I'm the one who initially made the statement in question, I can be
>pretty confident of that. See my prior posts where I mentioned,
>repeatedly, and more than once, that the two are not the same thing.

And that is the aspect I was addressing.

>Yes, Fortran compilers today where single precision is or can be 64 bits
>are certainly both "many" and "common". One of the reasons for this is
>to accommodate the many scientific programs written in the era where
>machines like the CDCs et al were widely used for scientific work.
>Mentioning that they sold in small numbers is misleading. *ALL*
>computers of at least the early part of that era sold in small numbers.

Everything is relative, and you have made one, very serious, omission.

Fortran lost out as a language for general computing in the early
1980s, and the remaining Fortran users correspond to the supercomputing
users of the 1960s and 1970s, to a first approximation. So, OF COURSE,
most of the older people used CDCs and Crays, and most of the compilers
have options to double the precision.

But, back in the 1970s, many compilers for other systems didn't have
such a feature, or it worked very badly and wasn't well supported.
Why not? Well, there wasn't the demand ....

>The NASA facility where I worked only had one computer, period. It was a
>CDC. (I haven't tried to count accurately, but they presumably have a
>few thousand computers now, even by a conservative definition of what
>constitutes a "computer"). CDCs and Crays were used for a major fraction
>of the scientific programming work. Programs written for these machines
>often assumed that single precision (60 or 64 bits) was adequate.

Yes, that is true. But, in the early 1970s, there were hundreds of
computers used for Fortran programming in UK universities, and there
were two CDCs. Yes, that was because of 'buy ICL', but most other
countries had far more of the smaller systems used for Fortran.

>Thus, if one really wants to put it in terms of machines, add such
>things as current PCs, Macs, Suns,.... well pretty much any machine you
>are likely to find today.

Well, I understood you to be referring to the default (i.e. 'native')
mode. In which case, pretty well no machine you are likely to find
today outside a few 'star wars' sites.


Regards,
Nick Maclaren.

dpb

Jan 29, 2009, 3:23:08 PM
nm...@cam.ac.uk wrote:
...

> However, may I point out that the vast majority of statistical (and
> many other) packages of that era were written in Fortran, and very
> few were even ported to those machines?

...
I have no way of knowing percentages of totals, of course, but certainly we
had at least two statistical packages and more general purpose packages
(IMSL, specifically, comes to mind) on the CDCs. I was never aware of
not having something suitable for the task, although my particular area
was to maintain/enhance proprietary (civilian) nuclear codes rather than
utilize other packages; I did on occasion use them...

--

nm...@cam.ac.uk

Jan 29, 2009, 3:31:22 PM

IMSL was a library, not a package - those terms were distinguished,
then. SPSS, Genstat, Minitab, Clustan etc. The first was ported
to the CDCs, but rather half-heartedly; I don't think any of the
others were.


Regards,
Nick Maclaren.

glen herrmannsfeldt

Jan 29, 2009, 3:40:42 PM
Richard Maine <nos...@see.signature> wrote:
(snip)


> It became common practice for compilers for other machines to have at
> least an option to make single precision be 64 bits in order to
> facilitate porting of such code. It is such a common option that people
> today continue using those options and writing code that assumes single
> precision is 64 bits, even if they have never seen anything like a CDC.
> To my knowledge, a majority, or at least a large fraction, of today's
> compilers have such options. Questions relating to this come up quite
> regularly on this newsgroup.

I don't remember that option on the IBM compilers, other than adding

IMPLICIT REAL*8 (A-H,O-$)

which I did see many times. ($ was the 27th letter of the alphabet.)
(As far as I know, IBM started the use of IMPLICIT, maybe for this.)

Also, there were many routines with sets of declarations and
DATA statements at the beginning where you uncomment the ones
appropriate for your machine.

> Thus, if one really wants to put it in terms of machines, add such
> things as current PCs, Macs, Suns,.... well pretty much any machine you
> are likely to find today.

-- glen

Gordon Sande

Jan 29, 2009, 3:47:45 PM

A selection bias problem rears its ugly head as you name packages which were
common in the UK but rare in North America. I would have guessed the better
statement was that the 7090 era Fortran coded packages were ported to CDC
and then to IBM/360. The IBM/360 versions were maintained and developed while
the CDC versions died a slow death of neglect. The same can be said for VAX
and various other vendors. Many of the ports were contracts paid by the
machine vendors.

Richard Maine

Jan 29, 2009, 3:51:31 PM
<nm...@cam.ac.uk> wrote:

[I'll skip all the rest. We appear to have some different viewpoints.]

> Well, I understood you to be referring to the default (i.e. 'native')
> mode. In which case, pretty well no machine you are likely to find
> today outside a few 'star wars' sites.

No, that's not what I was referring to. I was referring to whatever
situation one might end up writing code for, which could be any of
numerous situations. One might call them the "default" in some sense in
that they are the default situation for your code. It happens all the
time for any number of reasons. Your code is used in some larger
environment where the rest of the code requires such settings, or
whatever.

And still, you seem to be referring to machines instead of compilers. I
say yet again, repeatedly, and not for the first time, that I am not
referring to machines, but to compilers.

What I am talking about is things that users today might indeed run into
- not some historical or arcane scenario. I am talking about that
because I think it helps people - one's who do come here and need the
help, not for the amusement of abstract argument.

There are common compilers available and used on desktop machines today
where 64-bit reals are the default. See, for example
<http://www.g95.org/downloads.shtml> and note in particular all the
entries labelled as having 64-bit default integers (I think you'll also
find that they have 64-bit default reals, though I didn't go check).

I suppose you will now argue that those versions don't "count" for some
reason just like you argue that the computers used by most people that I
worked with in the 70s don't count. Perhaps even though 64-bits is the
default for those versions of the compiler, those versions of the
compiler aren't themselves the "default"... or something. Well, if so,
you can argue that, but I'm not interested in such a philosophical
debate. They are real compilers used by real people who will get in
trouble if they code in a way that assume that such compilers don't
exist or don't matter.

Note the comments on that page about how the versions with 64-bit
default integers "may break older programs." Apparently someone thought
this matter worth mentioning on a web site that does not appear to be
aimed exclusively at "a few 'star wars' sites". That is *EXACTLY* the
point I was making - that one should not code so as to assume that
default reals (and integers) are no more than 32 bits. In today's
environments, it is probably safe to assume that they are no less, but
it is not safe to assume that they are no more.

I have spent a non-trivial fraction of my career fixing code that
hard-wired assumptions about data size. Some of the fixes were easy.
Some required complete rewrites from scratch. I recall being greatly
thrilled that f90 finally made it possible to write code that asked for
the precision I needed instead of whatever the compiler happened to
default to. I considered it one of the most important features of f90. I
still regularly see related questions in this forum, occasionally from
people who have ignored relevant advice and then later had it come back
to bite them. You are not going to be able to convince me that all such
issues should henceforth be ignored.

I do and will continue to recommend that people use the facilities of
f90+ to select precision rather than assuming that the compiler's
defaults will be right. I have 40 years of experience in this field
(yes, I realize yours is comparable) that makes me confident this is
good advice. See my signature, which is extremely pertinent.

I see no point in prolonging this discussion, at least on my side. There
simply is nothing that you will be able to say that will convince me
that it is good advice to tell people to ignore the issue. If that's not
what you are suggesting, then good.

nm...@cam.ac.uk

Jan 29, 2009, 3:51:37 PM
In article <glt48a$ai5$1...@naig.caltech.edu>,
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

>Richard Maine <nos...@see.signature> wrote:
>
>> It became common practice for compilers for other machines to have at
>> least an option to make single precision be 64 bits in order to
>> facilitate porting of such code. It is such a common option that people
>> today continue using those options and writing code that assumes single
>> precision is 64 bits, even if they have never seen anything like a CDC.
>> To my knowledge, a majority, or at least a large fraction, of today's
>> compilers have such options. Questions relating to this come up quite
>> regularly on this newsgroup.
>
>I don't remember that option on the IBM compilers, other than adding
>
> IMPLICIT REAL*8 (A-H,O-$)
>
>which I did see many times. ($ was the 27th letter of the alphabet.)
>(As far as I know, IBM started the use of IMPLICIT, maybe for this.)
>
>Also, there were many routines with sets of declarations and
>DATA statements at the beginning where you uncomment the ones
>appropriate for your machine.

It was called AUTODBL. If I recall (and I may well not), it was not
present in IBM's original development compilers for the System/360
(Code-and-go and G), and may not have been even in the Program Product,
G1. I think I have enough old manuals to check ....

Anyway, it was a crock.


Regards,
Nick Maclaren.

dpb

Jan 29, 2009, 3:55:31 PM
Gordon Sande wrote:
> On 2009-01-29 16:31:22 -0400, nm...@cam.ac.uk said:
>
>> In article <glt39e$9fe$1...@aioe.org>, dpb <no...@non.net> wrote:
>>>
>>>> However, may I point out that the vast majority of statistical (and
>>>> many other) packages of that era were written in Fortran, and very
>>>> few were even ported to those machines?
>>> ...
>>> I have no way of know percentages of totals, of course, but certainly we
>>> had at least two statistical packages and more general purpose packages
>>> (IMSL, specifically, comes to mind) on the CDCs. I was never aware of
>>> not having something suitable for the task, certainly although my
>>> particular area was to maintain/enhance proprietary (civilian) nuclear
>>> codes rather than utilize other packages but did on occasion use them...
>>
>> IMSL was a library, not a package - those terms were distinguished,
>> then. SPSS, Genstat, Minitab, Clustan etc. The first was ported
>> to the CDCs, but rather half-heartedly; I don't think any of the
>> others were.
>>
>>
>> Regards,
>> Nick Maclaren.
>
> A selection bias problem rears its ugly head as you name packages which
> were common in the UK but rare in North America. ...

I wasn't much into statistical computing per se at the time, but of those
only SPSS do I recall as being around at the time, and it sorta' seems like
it was rather late to have made the CDC party, anyway. That of course could
simply be specialization area bias, I don't know.

Seems like perhaps SAS was available; I can't recall--again, my areas of
usage were in maintaining/developing "packages" for use of others in the
company rather than using them, hence my concentration on "libraries"
rather than "packages". I didn't know the distinction was of concern in
the question of availability; but I'd consider that in those days most
anybody I knew using CDC machines would have felt that putting tools
together if required was just "part of the job"...

--

Richard Maine

Jan 29, 2009, 4:02:57 PM
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> Richard Maine <nos...@see.signature> wrote:
>
> > It became common practice for compilers for other machines to have at
> > least an option to make single precision be 64 bits in order to

> > facilitate porting of such code...


>
> I don't remember that option on the IBM compilers, other than adding
>
> IMPLICIT REAL*8 (A-H,O-$)

Indeed. I said "common practice" rather than "universal". My terminology
was intentional. This is precisely why IBM was not even close to
competitive in a multi-million dollar procurement I was involved in in
the early to mid 80s. It was a procurement to replace what was no longer
the only computer at our facility, but was still the "main" computer.
The software conversion cost was what made IBM non-competitive enough
that they didn't bother to bid. (I wrote the software conversion cost
estimation, so I have some familiarity with it, far more so than worth
going into here.)

The system that won the award did have such an option, which was indeed
heavily used in the software conversion, substantially lowering the
conversion costs.

nm...@cam.ac.uk

Jan 29, 2009, 4:17:34 PM
In article <1iub0nh.xkbcbr1pbu00eN%nos...@see.signature>,

Richard Maine <nos...@see.signature> wrote:
>
>> Well, I understood you to be referring to the default (i.e. 'native')
>> mode. In which case, pretty well no machine you are likely to find
>> today outside a few 'star wars' sites.
>
>No, that's not what I was referring to. I was referring to whatever
>situation one might end up writing code for, which could be any of
>numerous situations. One might call them the "default" in some sense in
>that they are the default situation for your code. It happens all the
>time for any number of reasons. Your code is used in some larger
>environment where the rest of the code requires such settings, or
>whatever.
>
>And still, you seem to be referring to machines instead of compilers. I
>say yet again, repeatedly, and not for the first time, that I am not
>referring to machines, but to compilers.

I could respond that you seem to be referring to arcane hacking,
rather than mainstream programming.

Yes, I was referring to compilers, too, but in the sense of an actual
development environment. Most users want to include a library, often
several, and they are rarely available for unusual compiler options.
It was and is almost impossible to build many programs in such a
way unless you work solely from source and are prepared to hack.
For example, consider a fairly typical program that uses LAPACK
(the tuned one supplied with the system), MPI (the one that comes
with the interconnect) and a couple of others.


Regards,
Nick Maclaren.

nm...@cam.ac.uk

Jan 29, 2009, 4:27:28 PM
In article <glt565$b17$1...@aioe.org>, dpb <no...@non.net> wrote:

>Gordon Sande wrote:
>>>
>>> IMSL was a library, not a package - those terms were distinguished,
>>> then. SPSS, Genstat, Minitab, Clustan etc. The first was ported
>>> to the CDCs, but rather half-heartedly; I don't think any of the
>>> others were.
>>
>> A selection bias problem rears its ugly head as you name packages which
>> were common in the UK but rare in North America. ...

Please don't be ridiculous. That was true of Genstat and Clustan,
but obviously not of SPSS and Minitab. Both originated in the USA,
and were more widespread there than in the UK.

>I wasn't much into statistical computing per se at the time but of those
>only SPSS do I recall as being in time and it sorta' seems like it was
>rather late to have made the CDC party, anyway. That of course, could
>simply be specialization area bias, I don't know.

A bit of both. It wasn't widespread before the early 1970s.

> Seems like perhaps SAS was available; ...

At that date, SAS was IBM-specific - VERY much so! It was also
written in PL/I, and lots of truly horrible assembler.


Regards,
Nick Maclaren.

Clive Page

Jan 29, 2009, 4:30:22 PM
In message <glt0d7$rho$1...@soup.linux.pwf.cam.ac.uk>, nm...@cam.ac.uk
writes

>Yes, that is true. But, in the early 1970s, there were hundreds of
>computers used for Fortran programming in UK universities, and there
>were two CDCs. Yes, that was because of 'buy ICL', but most other
>countries had far more of the smaller systems used for Fortran.

True, but a bit later CDCs became more widespread. At an unimportant
university (Leicester) we had a CDC Cyber-72 which had a 60-bit
word-length (later upgraded to something with a 64-bit word if I
remember correctly). Our research group was simultaneously running
Fortran on this and on PDP-8s with a 12-bit word-length. Trying to
write Fortran that worked equally well on both of these was an
interesting experience. If we'd had the KIND mechanism then it would
have been incredibly valuable.

Now that we have KIND, my opinion has changed somewhat: it seems to me
that KIND is almost overkill when all systems are byte-oriented. It is
arguably much more obvious what a programmer wanted if they specified
(non-standard) REAL*8 than if they specified the (standard) DOUBLE
PRECISION. In practice the former is just as portable.

Fortunately in Fortran 2008 there's a useful module that will allow
something like INTEGER(KIND=int16) and thus allow us to get back to a
byte-based (or at least bit-based) word-length specification.
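
For instance, a minimal sketch using the ISO_FORTRAN_ENV intrinsic module
(the program itself is just an illustration):

program f2008_kinds
  use, intrinsic :: iso_fortran_env, only: int16, real64
  implicit none
  integer(int16) :: small   ! 16-bit integer
  real(real64)   :: x       ! 64-bit real
  small = 123_int16
  x = 1.0_real64 / 3.0_real64
  print *, small, x
end program f2008_kinds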


--
Clive Page

nm...@cam.ac.uk

Jan 29, 2009, 4:44:30 PM
In article <McqnKpTu$hgJ...@page.demo.co.uk>,

Clive Page <ju...@main.machine> wrote:
>
>>Yes, that is true. But, in the early 1970s, there were hundreds of
>>computers used for Fortran programming in UK universities, and there
>>were two CDCs. Yes, that was because of 'buy ICL', but most other
>>countries had far more of the smaller systems used for Fortran.
>
>True, but a bit later CDCs became more widespread. At an unimportant
>university (Leicester) we had a CDC Cyber-72 which had a 60-bit
>word-length (later upgraded to something with a 64-bit word if I
>remember correctly). Our research group was simultaneously running
>Fortran on this and on PDP-8s with a 12-bit word-length. Trying to
>write Fortran that worked equally well on both of these was an
>interesting experience. If we'd had the KIND mechanism then it would
>have been incredibly valuable.

I contributed a fair amount to NAG, then - add ICL 1900, System/370,
Old Uncle Tom Cobbley and all :-) Actually, it isn't much harder
writing highly portable Fortran than writing for 2-3 systems.

>Now that we have KIND, my opinion has changed somewhat: it seems to me
>that KIND is almost overkill when all systems are byte-oriented. It is
>arguably much more obvious what a programmer wanted if they specified
>(non-standard) REAL*8 than if they specified the (standard) DOUBLE
>PRECISION. In practice the former is just as portable.

Just you wait :-)

IBM and Intel have both said that they will inflict decimal floating
point on an unsuspecting community of programmers, so there will be
TWO different 8-byte floating point formats ....


Regards,
Nick Maclaren.

glen herrmannsfeldt

Jan 29, 2009, 5:27:31 PM
Richard Maine <nos...@see.signature> wrote:

> Note the comments on that page about how the versions with 64-bit
> default integers "may break older programs." Apparently someone thought
> this matter worth mentioning on a web site that does not appear to be
> aimed exclusively at "a few 'star wars' sites". That is *EXACTLY* the
> point I was making - that one should not code so as to assume that
> default reals (and integers) are no more than 32 bits. It today's
> environments, it is probably safe to assume that they are no less, but
> it is not safe to assume that they are no more.

But there are no SELECTED_???_KIND functions for requesting "no more
than" x digits for a type. Maybe additional arguments for maximum
digits, where we get -1 if it doesn't support that type.

There is C_INT32_T for integer, but no C_REAL32_T.
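
For what it's worth, the existing function does report failure with a
negative value, so an unsupported request can at least be detected. A
minimal sketch (the requested precision and range here are arbitrary):

program kind_check
  implicit none
  ! selected_real_kind returns a negative value when no real kind
  ! satisfies the request
  integer, parameter :: qp = selected_real_kind(p=30, r=300)
  if (qp < 0) then
     print *, 'no real kind with precision 30 and range 300 here'
  else
     print *, 'kind value:', qp
  end if
end program kind_check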

-- glen

glen herrmannsfeldt

Jan 29, 2009, 5:35:39 PM
nm...@cam.ac.uk wrote:
> In article <glt48a$ai5$1...@naig.caltech.edu>,
(snip, I wrote regarding automatic double precision)

>>I don't remember that option on the IBM compilers, other than adding

>> IMPLICIT REAL*8 (A-H,O-$)

>>which I did see many times. ($ was the 27th letter of the alphabet.)
>>(As far as I know, IBM started the use of IMPLICIT, maybe for this.)

> It was called AUTODBL. If I recall (and I may well not), it was not


> present in IBM's original development compilers for the System/360
> (Code-and-go and G), and may not have been even in the Program Product,
> G1. I think I have enough old manuals to check ....

I didn't remember it in H either. Possibly H extended.
I never used VS Fortran, though I did have some of the manuals.

Some are on bitsavers.org.

-- glen

Gordon Sande

Jan 29, 2009, 6:34:10 PM

Not likely, as SAS was written in IBM/360 assembly language initially.
Later it was translated into C. They ported their compiler rather than
porting their code to others' compilers. The compiler was less code!

Gordon Sande

Jan 29, 2009, 6:37:04 PM
On 2009-01-29 17:27:28 -0400, nm...@cam.ac.uk said:

> In article <glt565$b17$1...@aioe.org>, dpb <no...@non.net> wrote:
>> Gordon Sande wrote:
>>>>
>>>> IMSL was a library, not a package - those terms were distinguished,
>>>> then. SPSS, Genstat, Minitab, Clustan etc. The first was ported
>>>> to the CDCs, but rather half-heartedly; I don't think any of the
>>>> others were.
>>>
>>> A selection bias problem rears its ugly head as you name packages which
>>> were common the UK but rare in North America. ...
>
> Please don't be ridiculous. That was true of Genstat and Clustan,
> but obviously not of SPSS and Minitab. Both originated in the USA,
> and were more widespread there than in the UK.

My strong impression is that Genstat started in Australia. The name I
recall is Graham Wilkinson.

dpb

Jan 29, 2009, 8:15:14 PM
Gordon Sande wrote:
> On 2009-01-29 16:55:31 -0400, dpb <no...@non.net> said:
...

>> Seems like perhaps SAS was available;
>

> Not likely as SAS was writen in IBM/360 assembley language initially. ..

Well, I was fishing... I'm sure the guys that did do statistical analyses
had stuff they used; it had to be on the CDC, because they pulled the
plug on the Philco 2000 after a while w/ the 6600 and then ended up w/
two 7600s at the end. There was no other machine on site; interestingly
enough, before we got all the Philco-specific code NRC-qualified for
safety analyses, we still had to commute to Philly to the service
facility to run quite a lot of Philco simulations.

Anyway, I didn't get interested in that type of analysis until quite a
bit later so I should've stuck w/ the part I did know about IMSL and
just taken the hit for it not being a "package"... :)

--

Larry Gates

Jan 29, 2009, 9:20:07 PM
On Wed, 28 Jan 2009 17:42:32 -0800 (PST), e p chandler wrote:

> integer, parameter :: dp = kind(1.0d0)

Is this guaranteed to give dp as opposed to qp?
--
larry gates

: I'm about to learn myself perl6 (after using perl5 for some time).

I'm also trying to learn perl6 after using perl5 for some time. :-)
-- Larry Wall in <20040709202...@wall.org>

Richard Maine

Jan 29, 2009, 9:28:51 PM
Larry Gates <la...@example.invalid> wrote:

> On Wed, 28 Jan 2009 17:42:32 -0800 (PST), e p chandler wrote:
>
> > integer, parameter :: dp = kind(1.0d0)
>
> Is this guaranteed to give dp as opposed to qp?

Yes. My imagination fails me in trying to come up with a way that one
might even think otherwise.

Step by step.

1. 1.0d0 is a double precision literal by definition. Yes, always. No,
not quad or any other random possibility.

2. The kind intrinsic gives the kind of its argument. Yes, always.

Q.E.D.

glen herrmannsfeldt

Jan 29, 2009, 11:15:37 PM
Richard Maine <nos...@see.signature> wrote:
> Larry Gates <la...@example.invalid> wrote:

>> On Wed, 28 Jan 2009 17:42:32 -0800 (PST), e p chandler wrote:

>> > integer, parameter :: dp = kind(1.0d0)

>> Is this guaranteed to give dp as opposed to qp?

> Yes. My imagination fails me in trying to come with a way
> that one might even think otherwise.

Well, if you have AUTODBL, or whatever the automatic
double option is, then it could be what otherwise would
have been quad. But I agree, that previous quad is
now double.

It is the type that you would get with a

DOUBLE PRECISION

statement, which is double precision.

-- glen

Larry Gates

Jan 29, 2009, 11:30:13 PM
On Wed, 28 Jan 2009 23:37:04 -0800, Richard Maine wrote:

>> But what does it avail you to have dp if you don't
>> have the tag to hang on your constants?
>
> I don't understand your comment about not having a "tag to hang on your
> constants." It sounds like you don't think you can portably determine
> the appropriate kind value for double precision. On the contrary, that
> is trivial to do. See the kind intrinsic. Kind(0.0d0) is the idiomatic
> way to determine that kind value.

What I meant here is if you declare

double precision xx

as opposed to

integer, parameter :: dp = selected_real_kind(12,30)
real(kind=dp) :: xx

then you don't have _dp to hang on your constants, which I greatly prefer
to the d0 in its stead. I appear to have a different imagination than you,
one that avoids things that look like dO, in particular because the 0 and O
keys are adjacent.
--
larry gates

It's certainly easy to calculate the average attendance for Perl
conferences.
-- Larry Wall in <1997100717...@wall.org>

Clive Page

Jan 30, 2009, 2:24:22 AM
In message <glt7vu$rjr$1...@smaug.linux.pwf.cam.ac.uk>, nm...@cam.ac.uk
writes

>IBM and Intel have both said that they will inflict decimal floating
>point on an unsuspecting community of programmers, so there will be
>TWO different 8-byte floating point formats ....

Just say no! (I've had enough of decimal floating point when using
Oracle, thanks very much).


--
Clive Page

nm...@cam.ac.uk

Jan 30, 2009, 3:40:20 AM
In article <2009012919370475249-gsande@worldnetattnet>,

Gordon Sande <g.s...@worldnet.att.net> wrote:
>
>My strong impression is that Genstat started in Australia. The name I
>recall is Graham Wilkinson.

Certainly, from 1972 onwards, it was John Nelder at Rothamsted, but it
is possible that it started off as a CSIRO project (in Australia), and
Graham Wilkinson was certainly involved. I am no longer in contact
with any of those people, so can't check, I am afraid.


Regards,
Nick Maclaren.

Richard Maine

Jan 30, 2009, 3:43:49 AM
Larry Gates <la...@example.invalid> wrote:

> What I meant here is if you declare
>
> double precision xx
>
> as opposed to
>
> integer, parameter :: dp = selected_real_kind(12,30)
> real(kind=dp) :: xx
>
> then you don't have _dp to hang on your constants, which I greatly prefer
> to the dO in its stead. I appear to have a different imagination than you,
> one that avoids things that look like dO, in particular because the 0 and O
> keys are adjacent.

I also happen to prefer the style with the kind parameter. But I think
you are confusing unrelated issues - at least you are sure confusing me
when you talk about them.

In particular, realize that you *CAN* declare double precision variables
using the kind syntax. See other posts in the thread, and in particular
the business about kind(0.0d0). So no, using double precision does not
mean that you have to use double precision statements for your
declarations or d0 for your literals.
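
For instance, a minimal sketch (the constant name dp is just illustration):

program dp_tag
  implicit none
  integer, parameter :: dp = kind(0.0d0)  ! kind value of double precision
  real(kind=dp) :: xx
  xx = 1.0_dp   ! the _dp tag instead of the easily-mistyped d0
  print *, xx
end program dp_tag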
