
zero-initialization of variables


Paul Anton Letnes

Aug 14, 2011, 5:00:18 AM
Hi!

I just encountered a bug which was easily fixed by initializing an array
to zero in the beginning of the relevant subroutine. I was working on
gfortran, and this was a bug for me. I know someone else uses the PGI
compiler, and did not see this issue.

- Do some compilers do this by default (zero variables by default)?
- Do others have flags for this?
- What does the standard have to say on this topic? (i.e. was the code
standard conforming before I fixed the bug?)
- What is good form, standards and compilers aside?

Cheers
Paul.

Arjen Markus

Aug 14, 2011, 6:11:07 AM
On 14 aug, 11:00, Paul Anton Letnes <paul.anton.let...@gmail.com>
wrote:

If I understand it correctly, the standard is explicit about it:
there is NO default initialisation.

Regards,

Arjen

Tobias Burnus

Aug 14, 2011, 7:35:54 AM
Paul Anton Letnes wrote:
> I just encountered a bug which was easily fixed by initializing an array
> to zero in the beginning of the relevant subroutine. I was working on
> gfortran, and this was a bug for me. I know someone else uses the PGI
> compiler, and did not see this issue.
>
> - Do some compilers do this by default (zero variables by default)?

Kind of: Yes. It depends on the variable type, the compiler, the flags
and the way of memory allocation. Static variables are in practice
usually zero initialized, with ALLOCATE that also often happens. Other
variables are often not initialized, but may be.

> - Do others have flags for this?

gfortran: -finit-local-zero. Intel's ifort: -zero. Note: Which variables
are affected by those flags is also compiler dependent. Often, it only
affects scalar (non-character?) variables of intrinsic types.

The main reason for the flag is that it used to be very common to have
zero initialized variables - hence, in particular old programs rely on it.

> - What does the standard have to say on this topic? (i.e. was the code
> standard conforming before I fixed the bug?)

No.

> - What is good form, standards and compilers aside?

Be standard conforming and initialize - or assign a value to - the
variables before using them.

See "16.6 Definition and undefinition of variables" in the Fortran 2008
standard (or a similarly named section in other versions of the
standard). Cf. http://gcc.gnu.org/wiki/GFortranStandards
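
For example (just a sketch with made-up names, nothing compiler specific),
an ordinary assignment at the top of the subroutine makes a work array
defined on every call:

   subroutine do_work(n, x)
      implicit none
      integer, intent(in) :: n
      real, intent(inout) :: x(n)
      real :: work(n)       ! automatic array, undefined on entry

      work = 0.0            ! executable assignment, runs on every call
      ! ... accumulate into work, then use it ...
      x = x + work
   end subroutine do_work

(An initializer in the declaration, e.g. "real :: s = 0.0", also defines
the variable, but it happens only once and implies SAVE, which is usually
not what one wants for a work array.)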

Nomen Nescio

Aug 14, 2011, 9:32:16 AM
> I just encountered a bug which was easily fixed by initializing an array
> to zero in the beginning of the relevant subroutine. I was working on
> gfortran, and this was a bug for me. I know someone else uses the PGI
> compiler, and did not see this issue.

I haven't used FORTRAN lately (can you tell from the way I write it?) but
when I started programming in the 1970s they taught us to always, always,
ALWAYS initialize variables before using them. This advice is still good in
2011.

> - Do some compilers do this by default (zero variables by default)?

Maybe, but if you don't do it yourself it's bad practice and it will haunt
you when you write code for some other platform/compiler where storage isn't
initialized for you.

> - Do others have flags for this?

Probably the good ones do.

> - What does the standard have to say on this topic? (i.e. was the code
> standard conforming before I fixed the bug?)

I'll leave this to the language lawyers but Chapman's Fortran 95/2003 For
Scientists & Engineers says "The value of an uninitialized variable is not
defined by the Fortran 95/2003 standard. Some compilers automatically set
uninitialized variables to zero, and some set them to different arbitrary
patterns. Some compilers for older version [sic] of Fortran leave whatever
values previously existed at the memory location of the variables. Some
compilers even produce a run-time error if a variable is used without first
being initialized."

The paragraph seems a bit lame to me since it should be painfully obvious
that if memory is set to "arbitrary patterns" or "whatever values previously
existed" remain, then it's certain a run-time error will be produced when
those variables are referenced since the expected value will not be present
or at least you will not get the results you expected.

> - What is good form, standards and compilers aside?

Good form is to always, always, ALWAYS initialize variables before first
use. On some platforms certain types of storage are defined to be binary
zeros but depending on your data types this may not be a good value.

On Intel the .data segment is initialized to zeros but .bss IIRC is
not. Roughly, depending on the compiler on Intel, this would mean any
variables you define on the heap would be expected to contain binary zero
and any variables on the stack or any variables declared but not defined (if
that is even possible in Fortran nowadays) will be unpredictable before
being set. And that means your program will eventually go bang! or do
something you don't want.

It's important to initialize variables before use in every language, not
just Fortran.

Ron Shepard

Aug 14, 2011, 10:49:25 AM
In article <j282r3$o7b$1...@dont-email.me>,

Paul Anton Letnes <paul.ant...@gmail.com> wrote:

> Hi!
>
> I just encountered a bug which was easily fixed by initializing an array
> to zero in the beginning of the relevant subroutine. I was working on
> gfortran, and this was a bug for me. I know someone else uses the PGI
> compiler, and did not see this issue.
>
> - Do some compilers do this by default (zero variables by default)?

Yes, but if you are interested in using the code with more than one
compiler, you should write portable code rather than
compiler-specific code.

> - Do others have flags for this?

Yes, often. One thing that has not been mentioned by previous
posters is the purpose of these compiler options. The purpose of
these options (whether initialization to zero or to some other
value) is to help you debug your code. For example, some compilers
allow floating point values to be initialized to NaN, which
propagates through operations with that value and allows you to
trace back to the first illegal reference of that value. Some
compilers allow integers to be initialized to large positive or
large negative values, which helps track down array indexing errors.

The purpose of these initialization options is *NOT* to allow you to
keep using your buggy code unchanged.
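
As a sketch of how that helps (the flag spelling here is gfortran's;
other compilers use different names), compiling a deliberately
nonconforming toy program with NaN initialization makes the undefined
reference show up immediately:

   ! compile with:  gfortran -finit-real=nan nantest.f90
   program nantest
      implicit none
      real :: x, y
      y = 2.0 * x     ! x was never defined; nonconforming on purpose
      print *, y      ! with NaN initialization this prints NaN
   end program nantest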

> - What does the standard have to say on this topic? (i.e. was the code
> standard conforming before I fixed the bug?)

The standard says that the code was nonconforming (i.e. illegal).
Your code had a bug in it before you fixed it.

> - What is good form, standards and compilers aside?

You should define the values of all variables before referencing
them. There are several ways to do this, particularly with modern
fortran. Some of these are more efficient than others, particularly
for large arrays.

$.02 -Ron Shepard

Tim Prince

Aug 14, 2011, 11:57:20 AM
On 8/14/2011 7:35 AM, Tobias Burnus wrote:
> Paul Anton Letnes wrote:
>> I just encountered a bug which was easily fixed by initializing an array
>> to zero in the beginning of the relevant subroutine. I was working on
>> gfortran, and this was a bug for me. I know someone else uses the PGI
>> compiler, and did not see this issue.
>>
>> - Do some compilers do this by default (zero variables by default)?
>
> Kind of: Yes. It depends on the variable type, the compiler, the flags
> and the way of memory allocation. Static variables are in practice
> usually zero initialized, with ALLOCATE that also often happens. Other
> variables are often not initialized, but may be.
>
>> - Do others have flags for this?
>
> gfortran: -finit-local-zero. Intel's ifort: -zero. Note: Which variables
> are affected by those flags is also compiler dependent. Often, it only
> affects scalar (non-character?) variables of intrinsic types.
I'm not clear on whether you are pointing out that these options are
more likely to work on SAVEd (static) variables. In order to have a
chance of emulating past compilers which had auto-save and default
initialization, the save status would have to be dealt with.

>
> The main reason for the flag is that it used to be very common to have
> zero initialized variables - hence, in particular old programs rely on it.
>

Some of those "old programs" predate the time when explicit initializers
(e.g. DATA) always implied SAVE (Fortran's near equivalent of static).

Such options are often incompatible with parallelization (e.g.
auto-parallel, openmp,...) which most current compilers support in some
way, but were not envisioned in those "old programs."


>> - What is good form, standards and compilers aside?
>
> Be standard conforming and initialize - or assign a value to - the
> variables before using them.
>
> See "16.6 Definition and undefinition of variables" in the Fortran 2008
> standard (or a similarly named section in other versions of the
> standard). Cf. http://gcc.gnu.org/wiki/GFortranStandards
>

Failing to comply with standards not only inhibits portability; it may
severely restrict success with optimizations, including parallel execution.

--
Tim Prince

Richard Maine

Aug 14, 2011, 12:12:27 PM
Tobias Burnus <bur...@net-b.de> wrote:

> Paul Anton Letnes wrote:
> > I just encountered a bug which was easily fixed by initializing an array
> > to zero

...


> > - What is good form, standards and compilers aside?
>
> Be standard conforming and initialize - or assign a value to - the
> variables before using them.

In addition to echoing the universal comments of others here, I'll add a
question of my own. Given that the standard requires this, is there any
reason why one would even consider doing otherwise in new code today?
The OP's question seems to imply that multiple answers might be
reasonable. I suppose that perhaps the question might have been intended
in case there was a different answer to the previous (elided) questions
about whether the standard required it, but given that the standard
requires it, I would hope that is sufficient answer here. The standard
isn't an absolute dictate on everything; I certainly never did stick
100% to it, sometimes violating it by accident, and other times by
intent. But when one does it intentionally, one should have some kind of
reason; I just don't see one here.

You sure would not be adding clarity by assuming that a variable starts
at zero instead of making it explicit.

You wouldn't be helping efficiency either. In cases where the code does
require the initialization, the worst case for performance is that you
are telling the compiler to do the same thing that it would have done
anyway. In other cases, you can sometimes improve performance (such as
by doing run-time initialization, which will sometimes be much faster
than loading huge arrays from the compiled executable), or, even more
importantly, make the program work correctly.

I guess I have known people who would be excited by the prospect of
saving the line of code (or sometimes just part of a line). I don't buy
it.

I understand the issue of having to deal with existing code, which might
have such bugs. Been there - many times. I also understand writing buggy
code by just overlooking the fact that a variable needed initialization.
I tend to be pretty careful about that one, but I understand it. But if
one is writing new code and realizes that the algorithm requires a
particular variable to be initialized, I can't fathom why one would even
consider doing anything other than making the initialization explicit.

Note that I'm not advocating that one blithely initialize everything
just in case one might otherwise forget to do a variable that needs it.
I've known people who coded like that, but I'd say it basically amounts
to admitting defeat in terms of writing code that is good or even
correct. If one really can't figure out whether a particular variable
needs to be initialized or not, I'd say to just pack it in and try a
different line of business. (That's for new code - debugging an existing
mess can be a *VERY* different matter.)

In fact, I consider that forcing yourself to think about whether
variables need initialization tends to result in better code. Among
other things, that forces you to think about what the initial value
needs to be; sometimes it isn't zero. Occasionally, thinking about what
the initial value needs to be can lead you to realizing that the
algorithm is flawed.

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain

Gordon Sande

Aug 14, 2011, 12:46:07 PM

Two quibbles:

1. If you use whatever was leftover from a previous program you may not notice.
If the leftover is from a previous call to the same subroutine it will not
give you the hint of changing between runs. Then you add an extra write
statement and maybe the answer changes a bit - a Heisenbug!

2. The cited run time error that some may produce is much more definitive
than your explanation. If it were a signalling NaN then you get a hardware
exception. Some systems put a distinctive value there, which they actively
watch for and then give you an "undefined variable" run time error, so they
do not need the special NaN. See the options of NAG, Silverfrost (ex Salford)
or Lahey for the undefined checking. Various others use the signalling NaN,
which can only be used with reals. Undefined integers make bad subscripts, so
the lack of integer checking is more than just a minor incompleteness.

Richard Maine

Aug 14, 2011, 2:07:29 PM
Gordon Sande <Gordon...@gmail.com> wrote:

> On 2011-08-14 10:32:16 -0300, Nomen Nescio said:
>
> > it should be painfully obvious
> > that if memory is set to "arbitrary patterns" or "whatever values previously
> > existed" remain, then it's certain a run-time error will be produced when
> > those variables are referenced since the expected value will not be present
> > or at least you will not get the results you expected.
>
> Two quibbles:
>
> 1. If you use whatever was leftover from a previous program you may not
> notice. If the leftover is from a previous call to the same subroutine it
> will not give you the hint of changing between runs. Then you add an
> extra write statement and maybe the answer changes a bit - a Heisenbug!

It can even give "correct" answers. I've actually had that happen, not
even particularly rarely. Some of the apps I used to work with a lot
involved iterative optimization. It wasn't unusual at all to have
something in the code that wasn't quite right, but for the optimization
algorithm to compensate for the flaw. It was really very easy to have
errors go unnoticed for a long time, with the program even giving
adequately good results until you happened to run into a case that was
particularly sensitive to the error.

This tendency of the algorithms to compensate for some kinds of errors
sometimes added to the pain of code debugging. After all, one really did
want to get the code correct so that all cases would work instead of
just having the errors waiting to bite you later. If one tested the code
on real data, some errors in the code would end up with effects that
didn't show up because they were within the differences you get from
numerical roundoff anyway. Testing with "perfect" artificial data
actually caused odd symptoms because the algorithms weren't actually
designed for perfect data; zero noise is even a point of singularity for
some of them. I learned to understand that "good" behavior on a
zero-noise test case looked a lot different from "good" behavior on real
data. A zero-noise test case should go to a very low cost value and then
start bouncing around pretty randomly. If it actually converged stably,
I'd know something wasn't quite right. That happened to me more than
once. I'd eventually find and fix "the" (hopefully :-)) error, after
which the cost value would drop by several more orders of magnitude and
no longer converge (at least using the kind of convergence tests that
were fine for real data).

So no, errors from uninitialized data are not necessarily "painfully
obvious". They can sometimes evade even fairly careful testing. I've
been in the position of finding errors, including errors from
uninitialized data, that were in programs that had been in active
production use for over a decade. That sometimes results in needing to
evaluate whether the code error might necessitate redoing the last
decade of results or whether its effects were small enough to not have
been serious (or perhaps that most results were ok, but some special
cases were not).

Paul Anton Letnes

Aug 14, 2011, 4:23:12 PM
Thanks to all posters for good advice.

>> - What is good form, standards and compilers aside?
>
> You should define the values of all variables before referencing
> them. There are several ways to do this, particularly with modern
> fortran. Some of these are more efficient than others, particularly
> for large arrays.

Are there other ways than
variable = 0
?
(or possibly variable = 0.0 or variable = real(0.0, kind=8) but I never
understood the differences here).

And what's more efficient? I would think that this is only very very
rarely an issue.

Paul

Richard Maine

Aug 14, 2011, 5:04:54 PM
Paul Anton Letnes <paul.ant...@gmail.com> wrote:

> Thanks to all posters for good advice.
>
> >> - What is good form, standards and compilers aside?
> >
> > You should define the values of all variables before referencing
> > them. There are several ways to do this, particularly with modern
> > fortran. Some of these are more efficient than others, particularly
> > for large arrays.
>
> Are there other ways than
> variable = 0
> ?
> (or possibly variable = 0.0 or variable = real(0.0, kind=8) but I never
> understood the differences here).

There are zillions of ways to define a variable. For just one completely
different example, one could read in its value from a file. The standard
has a ludicrously long and complicated list of the ways, some of them a
bit obscure. No way I'm going to copy that here (much less explain it
all).

It is also critically important to understand the difference between
initialization and assignment. Your examples above all appear to be
assignment statements unless they are out of context. Assignment
statements are executable statements that happen when they are executed.
However, that is *NOT* what the zero initialization that you originally
asked about does. That initializes variables only once at the beginning
of the program - not each time that a subroutine is called. The
difference can be hugely important.

There are also multiple ways to specify initialization, notably data
statement or an initializer on a type declaration.

real :: a = 0

is an initialization. It is not the same thing as

real :: a
a = 0

See prior para for why.
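
To make that concrete, a tiny sketch (not from any particular code):
imagine calling each of these several times.

   subroutine counts_calls()
      implicit none
      integer :: n = 0    ! initialization: done once, n is implicitly SAVEd
      n = n + 1
      print *, n          ! prints 1, then 2, then 3, ...
   end subroutine counts_calls

   subroutine starts_fresh()
      implicit none
      integer :: n
      n = 0               ! assignment: executed on every call
      n = n + 1
      print *, n          ! prints 1 every time
   end subroutine starts_fresh

That "initialize once, then retain the value" behavior is exactly why an
initializer is usually the wrong tool for a work array that has to be
cleared on every entry.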

There is essentially no difference between

variable = 0.0

and

variable = real(0.0,kind=8)

other than the degree of explicitness (and the assumption that 8 is a
valid real kind at all, much less the kind of variable). But there is a
lot bigger difference between those and

variable = 0.0_8

You won't see the difference for the particular value of 0, but you are
likely to get very different results between

variable= real(0.1,kind=8)

versus

variable = 0.1_8
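
A quick sketch of why (assuming here that kind 8 exists and is double
precision, which the standard does not promise):

   program literal_kinds
      implicit none
      real(kind=8) :: a, b
      a = real(0.1, kind=8)   ! 0.1 rounded to single precision, then widened
      b = 0.1_8               ! 0.1 rounded directly to double precision
      print *, a - b          ! typically about 1.5e-9, not zero
   end program literal_kinds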

I don't have time to go into those issues here. Way too big a subject
and I'm rushed.

> And what's more efficient? I would think that this is only very very
> rarely an issue.

For a scalar, it is hard to imagine any difference being measurable;
don't worry about it.

For large arrays, things can get more complicated in terms of run-time
definition versus initialization.

Sorry this is probably a bit randomly written. I have to leave the house
in about 2 minutes for a 2-week trip. This will have to do.

Dick Hendrickson

Aug 14, 2011, 5:18:40 PM
On 8/14/11 11:46 AM, Gordon Sande wrote:
> On 2011-08-14 10:32:16 -0300, Nomen Nescio said:
>
>>>
>>> I just encountered a bug which was easily fixed by initializing an array
>>> to zero in the beginning of the relevant subroutine. I was working on
>>> gfortran, and this was a bug for me. I know someone else uses the PGI
>>> compiler, and did not see this issue.
>>
>> I haven't used FORTRAN lately (can you tell from the way I write it?) but
>> when I started programming in the 1970s they taught us to always, always,
>> ALWAYS initialize variables before using them. This advice is still
>> good in
>> 2011.
>>
>>> - Do some compilers do this by default (zero variables by default)?
>>
Back in the 60s one system that had 10 character words would initialize
memory to

'URAHORSES*'

when it started a user job.

I thought that was cute at the time.

Dick Hendrickson

Nomen Nescio

Aug 14, 2011, 5:25:35 PM
Gordon Sande <Gordon...@gmail.com> wrote:

> On 2011-08-14 10:32:16 -0300, Nomen Nescio said:
>
> >>
> Two quibbles:
>
> 1. If you use whatever was leftover from a previous program you may not notice.
> If the leftover is from a previous call to the same subroutine it will not
> give you the hint of changing between runs. Then you add an extra write
> statement and maybe the answer changes a bit - a Heisenbug!

True in some cases on a single user system like a PC. I'm used to programming
on systems with thousands of concurrent users (mainframes) so I've seen what
you describe but I've also seen other errors that come from seemingly random
data. It's Programming 101 to initialize variables before reading from them.

> 2. The cited run time error that some may produce is much more definitive
> than your explanation.

True but that runtime error may not be available on all systems.

> If it were a signalling NaN then you get a hardware exception.

Yes but on many systems anything will be a valid signed or unsigned integer,
and perhaps even a valid real. You have to know your compiler and hardware very
well, relying on this isn't a good idea in general and certainly not safe
for anyone who has to ask about it, like the OP.

> Some systems put a distinctive value there, so do not need the special NaN,
> which they actively watch for and then give you an "undefined variable"
> run time error. See the options of NAG, Silverfrost (ex Salford) or Lahey
> for the undefined checking.

That seems silly to me. I should think a warning at compile time would be
more useful than waiting until you run and slapping you on the wrist. Does
anyone know the reasoning behind this, or is it just adherence to the
standard gone mad?

> Various others use the signalling NaN which can only be used with
> reals.

Perhaps, and maybe not even with reals on some platforms.

> Undefined integers are bad subscripts so the lack of integers is more than
> just a minor incompleteness in the checking.

Yes, very good point.

Gordon Sande

Aug 14, 2011, 6:24:54 PM
On 2011-08-14 18:25:35 -0300, Nomen Nescio said:

> Gordon Sande <Gordon...@gmail.com> wrote:
>
>> On 2011-08-14 10:32:16 -0300, Nomen Nescio said:
>>
>>>>
>> Two quibbles:
>>
>> 1. If you use whatever was leftover from a previous program you may not notice.
>> If the leftover is from a previous call to the same subroutine it will not
>> give you the hint of changing between runs. Then you add an extra write
>> statement and maybe the answer changes a bit - a Heisenbug!
>
> True in some cases on a single user system like a PC. I'm used to programming
> on systems with thousands of concurrent users (mainframes) so I've seen what
> you describe but I've also seen other errors that come from seemingly random
> data. It's Programming 101 to initialize variables before reading from them.

Lots of mainframes will not bother with zeroing out the partition or will
leave junk that the system creates. Leaving trash behind is not automatically
a security violation. Lots of early mainframes were not concerned with
security so the trash was both system and prior user dependent. The history of
blank common is that it was an area where the system put temporary code
before starting the user program, so nothing to do with single user PCs.
Much of storage was defined by "blank" areas in the object code when static
storage was the typical implementation. Stacks for storage virtually invite
reuse of areas that have not been zeroed.

>> 2. The cited run time error that some may produce is much more definitive
>> that your explanation.
>
> True but that runtime error may not be available on all systems.

That is why the qualifier *some* was there.

>> If it were a signalling NaN then you get a hardware exception.

Careful snipping to change the meaning to make your point. Not helpful.

> Yes but on many systems anything will be a valid signed or unsigned integer,
> and perhaps even a valid real. You have to know your compiler and hardware very
> well, relying on this isn't a good idea in general and certainly not safe
> for anyone who has to ask about it, like the OP.
>
>> Some systems put a disinctive value there, so do not need the special NaN,
>> which they actively watch for and then give you an "undefined variable"
>> run time error. See the options of NAG, Silverfrost (ex Salford) or Lahey
>> for the undefined checking.
>
> That seems silly to me. I should think a warning at compile time would be
> more useful than waiting until you run and slapping you on the wrist. Does
> anyone know the reasoning behind this, or is it just adherence to the
> standard gone mad?

Many undefined variable errors can not be detected at compile time. A
trivial case is where either subscripts or flow of control depend upon
data which is read in. It would be silly to ignore such cases. ;-)

Optimizing compilers that do extensive flow analysis are able to detect
a few cases and might provide compile time diagnosis. Such partial analysis
is often harmful in the large as it provides a false sense of safety. The
ones I have seen are carefully worded to indicate just how restricted their
coverage is.

The few folks who go to the extra trouble of full run time analysis are well
aware of what errors can or can not be found at compile time. They certainly
would not have gone to the trouble if it were as easy as you seem to think.
Perhaps the problem is a bit (or a lot) more difficult than you realize.
The troubles are not that subtle as is evident from the trivial example
I cited.

Ron Shepard

Aug 14, 2011, 7:14:54 PM
In article <j29arh$m95$1...@dont-email.me>,

Paul Anton Letnes <paul.ant...@gmail.com> wrote:

When I wrote that sentence I was thinking of initializing large
arrays with data statements, initialization in the declaration
statement, and initialization using assignments (yes,
"initialization" is not the correct term in this last case for
fortran, but that is the appropriate English word to use). The
first two are more or less equivalent in the language, but there are
significant differences in the syntax between the two. The last
possibility is almost always more efficient than the first two,
sometimes by many orders of magnitude if you count machine cycles.
The reason is simply that memory is much faster to access than disk
i/o.
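
For the first two, the syntactic difference looks roughly like this
(a sketch only):

   real :: a(3)
   real :: b(3) = 0.0     ! initializer on the declaration
   data a / 3*0.0 /       ! DATA statement form

Both of these imply SAVE when they appear inside a procedure, so they
happen only once, not on every call.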

But there are also other ways to define values with modern fortran.
Suppose you have some user defined type

type mytype
   ! ... maybe a lot of stuff here ...
   integer :: num = -7
end type mytype

and you declare a local array like

type(mytype) :: a(large)

where large is some large integer. The values

a(i)%num

will all be initialized. I used "-7" just to show that the language
does not treat zero as a special case. On the other hand, if mytype
was defined without the default initialization, then these values
would of course not be defined. Maybe that is obvious, but it is a
feature of modern fortran that has no counterpart in f77 and prior.

If you allocate such an array,

allocate( a(large) )

then the same thing holds. a(i)%num is initialized to "-7" for
every element of the array.
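
Putting those pieces together, a minimal self-contained sketch (type and
sizes made up here) would be:

   module mytype_m
      implicit none
      type mytype
         integer :: num = -7    ! default initialization of a component
      end type mytype
   end module mytype_m

   program demo
      use mytype_m
      implicit none
      type(mytype) :: b(3)
      type(mytype), allocatable :: a(:)
      allocate( a(100) )
      print *, b(1)%num, a(50)%num    ! both are -7, with no assignment anywhere
   end program demo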

Now suppose that you have a dummy array of that type, and it is
declared as one of

type(mytype), intent(in) :: a(:)
type(mytype), intent(inout) :: a(:)
type(mytype), intent(out) :: a(:)

In the first case, all of the components of all the elements that
are referenced in the subprogram must have been defined prior to
entry to the subprogram; and if they aren't, then you can't define
them. In the second case, the elements that you reference before
assignment must be previously defined, but now you can define or
redefine any of the elements that you want. In the last case, it
does not matter which components were set previously, everything
must be assumed to be reset "as if" a new array had been created or
allocated.

Finally, as others have mentioned already, even if a(i)%num has been
defined at one time (through default initialization or some other
way), that does not mean that it is always defined. There are many
ways that a variable can become undefined. So even though the
programmer has lots of ways to define variables, it is still up to
the programmer to keep things sorted out so that undefined variables
are not referenced.

$.02 -Ron Shepard

Fritz Wuehler

Aug 15, 2011, 1:50:27 AM
nos...@see.signature (Richard Maine) wrote:

> So no, errors from uninitialized data are not necessarily "painfully
> obvious".

I think the error is painfully obvious. What's not obvious is the *cause* of
the error. I believe we're all saying the same thing here. I said it's good
practice to always, always, ALWAYS initialize variables before using them
and happily, all of you seem to be agreeing with me vehemently ;-)

And if that's *not* what you meant, and you didn't agree with what I said,
then it proves my point that relying on a compiler to protect against
programmer laziness doesn't work in practice.

glen herrmannsfeldt

Aug 15, 2011, 2:11:48 AM
Nomen Nescio <nob...@dizum.com> wrote:

(snip)


> I'll leave this to the language lawyers but Chapman's Fortran 95/2003 For
> Scientists & Engineers says "The value of an uninitialized variable is not
> defined by the Fortran 95/2003 standard. Some compilers automatically set
> uninitialized variables to zero, and some set them to different arbitrary
> patterns. Some compilers for older version [sic] of Fortran leave whatever
> values previously existed at the memory location of the variables. Some
> compilers even produce a run-time error if a variable is used without first
> being initialized."

> The paragraph seems a bit lame to me since it should be painfully obvious
> that if memory is set to "arbitrary patterns" or "whatever values previously
> existed" remain, then it's certain a run-time error will be produced when
> those variables are referenced since the expected value will not be present
> or at least you will not get the results you expected.

A common meaning of "run-time error" is that a message is generated.
Yes, some programs give the wrong answer, and that could be
considered an error. WATFIV, at least, gives a fatal error at
run-time if a variable hasn't been given a value, with the exception
that if its value is printed the field is filled with UUUU and that
isn't fatal.

>> - What is good form, standards and compilers aside?

> Good form is to always, always, ALWAYS initialize variables before first
> use. On some platforms certain types of storage are defined to be binary
> zeros but depending on your data types this may not be a good value.

That is, at least, a good rule for Fortran. C requires static data
to be zeroed by the system. I suppose sometimes I add an initializer
and sometimes not.

Java requires that you give variables a value before they are
used, and also requires the compiler to attempt to detect cases
where you don't. Now, the language definition could have just
required compilers to zero all variables, but by not doing that,
they give programmers one last reason to check for coding error.

I have had cases where a variable was definitely initialized in a way
that the Java compiler couldn't figure out, and so I just add
an initializer (usually with a comment complaining about the
compiler). Java arrays are always allocated zero filled.

> On Intel the .data segment is initialized to zeros but .bss IIRC is
> not. Roughly, depending on the compiler on Intel, this would mean any
> variables you define on the help would be expected to contain binary zero
> and any variables on the stack or any variables declared but not defined (if
> that is even possible in Fortran nowadays) will be unpredictable before
> being set. And that means your program will eventually go bang! or do
> something you don't want.

I believe most operating systems now zero dynamically allocated
memory for security reasons. It used to be you got whatever was
in memory, possibly including data from a previous program.
That is pretty much not allowed today.

> It's important to initialize variables before use in every
> language, not just Fortran.

Except languages that require variable to already be zero.

-- glen

glen herrmannsfeldt

Aug 15, 2011, 2:20:45 AM
Richard Maine <nos...@see.signature> wrote:

(snip)

> This tendency of the algorithms to compensate for some kinds of errors
> sometimes added to the pain of code debugging. After all, one really did
> want to get the code correct so that all cases would work instead of
> just having the errors waiting to bite you later. If one tested the code
> on real data, some errors in the code would end up with effects that
> didn't show up because they were within the differences you get from
> numerical roundoff anyway. Testing with "perfect" artificial data
> actually caused odd symptoms because the algorithms weren't actually
> designed for perfect data; zero noise is even a point of singularity for
> some of them.

More than once I have seen that in standard deviation and least squares
problems. With perfect data, you should have a square root of zero,
but small rounding makes the value negative.

> I learned to understand that "good" behavior on a zero-noise
> test case looked a lot different from "good" behavior on real
> data. A zero-noise test case should go to a very low cost value and then
> start bouncing around pretty randomly. If it actually converged stably,
> I'd know something wasn't quite right. That happened to me more than
> once. I'd eventually find and fix "the" (hopefully :-)) error, after
> which the cost value would drop by several more orders of magnitude and
> no longer converge (at least using the kind of convergence tests that
> were fine for real data).

And I once had a program that accidentally required a variable
to be non-zero. It worked fine until I ran it on a system that
did zero all variables.

> So no, errors from uninitialized data are not necessarily "painfully
> obvious". They can sometimes evade even fairly careful testing. I've
> been in the position of finding errors, including errors from
> unitialized data, that were in programs that had been in active
> production use for over a decade. That sometimes results in needing to
> evaluate whether the code error might necessitate redoing the last
> decade of results or whether its effects were small enough to not have
> been serious (or perhaps that most results were ok, but some special
> cases were not).

-- glen

glen herrmannsfeldt

Aug 15, 2011, 2:34:14 AM
Richard Maine <nos...@see.signature> wrote:

(snip)


> It is also critically important to understand the difference between
> initialization and assignment. Your examples above all appear to be
> assignment statements unless they are out of context. Assignment
> statements are executable statements that happen when they are executed.
> However, that is *NOT* what the zero initialization that you originally
> asked about does. That initializes variables only once at the beginning
> of the program - not each time that a subroutine is called. The
> difference can be hugely important.

Note that this Fortran feature is not in many other languages.
C, Java, and PL/I (to name three) allow initializers on automatic
variables, and initialize them each time the variable is created.

People used to one of those languages will likely be surprised
to find what Fortran does.

> There are also multiple ways to specify initialization, notably
> data statement or an initializer on a type declaration.

> real :: a = 0

> is an initialization. It is not the same thing as

> real :: a
> a = 0

Though the term "initialization" is commonly, even if incorrectly,
used for the latter case.

As it might not have been mentioned yet, initializing large
arrays either in DATA statements or on the declaration can result
in large object programs.

real :: a(1000000) = 1.0

can result in a million 1.0's being written to disk, and being
read in when the program is loaded.
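
The usual way around that (just a sketch) is to leave the initializer off
and assign at run time, so only the assignment is stored in the object file:

   program big
      implicit none
      real :: a(1000000)
      a = 1.0    ! one assignment in the code, not a million stored values
      print *, a(1)
   end program big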

-- glen

Paul Anton Letnes

Aug 15, 2011, 4:17:21 AM
> There are also multiple ways to specify initialization, notably data
> statement or an initializer on a type declaration.
>
> real :: a = 0
>
> is an initialization. It is not the same thing as
>
> real :: a
> a = 0
>
> See prior para for why.

Aha, I see that my terminology is off. I suppose I should re-read
Metcalf and Reid in minute detail... Too bad that life is short.

I was under the impression that typing
real :: a = 0
would mean that a gets the "save" attribute and hence would retain its
value on the next entry into the subroutine. That would essentially
re-introduce the bug I experienced, I believe (although I am not sure -
the code is not mine, after all). The bug was caused by a "work" array
containing random junk in the first iteration of an iterative equation
solver.

> There is essentially no difference between
>
> variable = 0.0
>
> and
>
> variable = real(0.0,kind=8)
>
> other than the degree of explicitness (and the assumption that 8 is a
> valid real kind at all, much less the kind of variable). But there is a
> lot bigger difference between those and
>
> variable = 0.0_8
>
> You won't see the diference for the particular value of 0, but you ar
> elikely to get very different results between
>
> variable= real(0.1,kind=8)
>
> versus
>
> variable = 0.1_8
>
> I don't have time to go into those issues here. Way too big a subject
> and I'm rushed.

Does real(0.1, kind=8) at run-time convert a single precision number to
a double precision number (assuming 4 is single and 8 is double
precision, which the compilers I have used have assumed)?

> For large arrays, things can get more complicated in terms of run-time
> definition versus initialization.

Aha. Well, the array is small compared to other arrays I am working
with, so I'd be shocked if it would matter much either way.

>
> Sorry this is probably a bit randomly written. I have to leave the house
> in about 2 minutes for a 2-week trip. THis will have to do.

Thanks for a lot of good advice, nonetheless!

Paul

glen herrmannsfeldt

Aug 15, 2011, 5:43:13 AM
Paul Anton Letnes <paul.ant...@gmail.com> wrote:

(snip)


> Does real(0.1, kind=8) at run-time convert a single precision number to
> a double precision number (assuming 4 is single and 8 is double
> precision, which the compilers I have used have assumed)?

Most likely at compile time, but it does do that conversion.

I do remember when people worried about conversions being
done at run-time, and it was suggested not to use integer
constants in floating point expressions, such as:

x(i)=1

(for real variable x). Compilers are required to do a certain
amount of constant expression evaluation, and aren't likely
to miss the easy ones.

>> For large arrays, things can get more complicated in terms of
>> run-time definition versus initialization.

> Aha. Well, the array is small compared to other arrays I am working
> with, so I'd be shocked if it would matter much either way.

Maybe not, but you might get an unusually large object file.

-- glen

Nomen Nescio

Aug 15, 2011, 8:03:51 AM
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> I believe most operating systems now zero dynamically allocated
> memory for security reasons. It used to be you got whatever was
> in memory, possibly including data from a previous program.
> That is pretty much not allowed today.

It is worth noting that IBM went the other way recently, for
performance reasons. Specifically DB2 performance. They always documented
that storage allocated was not zeroed except under certain specific
conditions but they never enforced it until z/OS 1.10. Now they are
enforcing it and many old, badly written pieces of code are breaking.

>
> > It's important to initialize variables before use in every
> > language, not just Fortran.
>
> Except languages that require variable to already be zero.

Fine, but the number of languages that require you to not reference a
variable you didn't assign a value to far exceeds the number of nanny
languages, at least in 2011.

Richard Maine

Aug 15, 2011, 8:32:37 AM
Fritz Wuehler <fr...@spamexpire-201108.rodent.frell.theremailer.net>
wrote:

I'm not sure whether I agree with you, but it might well be just a
confusion in the precise English terminology.

I agree that it is obvious in terms of the standard that it is an error
to fail to define a variable before it is used. I didn't think that's
what you were saying, but I wonder if perhaps that was your intent.

What is very often not at all obvious is the symptoms of the error. That
is, a user looking at the results from the program will not necessarily
see any obvious symptoms. I would not describe that as that the cause of
the error being nonobvious; I'd say rather that it was symptoms that
were nonobvious. Causes and symptoms are in some sense on the opposite
end of the process, so I'm making a bit of a stretch in guessing that's
what you mean. If that is what you mean, then I agree... except for the
terminology.

Plane boarding (at IAD now). Gotta go.

--
Richard Maine


email: last name at domain . net

domain: summer-triangle

Aris

Aug 15, 2011, 8:35:21 AM
Paul Anton Letnes <paul.ant...@gmail.com> wrote:
>> There are also multiple ways to specify initialization, notably data
>> statement or an initializer on a type declaration.
>>
>> real :: a = 0
>>
>> is an initialization. It is not the same thing as
>>
>> real :: a
>> a = 0
>>
>> See prior para for why.
>
> Aha, I see that my terminology is off. I suppose I should re-read
> Metcalf and Reid in minute detail... Too bad that life is short.
>
> I was under the impression that typing
> real :: a = 0
> would mean that a gets the "save" attribute and hence would retain its
> value on the next entry into the subroutine. That would essentially
> re-introduce the bug I experienced, I believe (although I am not sure -
> the code is not mine, after all).

That is exactly what it does, and thus is not what you want. As I
understand, you want to "initialize your algorithm", not just
"initialize your program". So you need to put an ordinary assignment at
the start of the subroutine, so the work array is cleared on every call.

Ron Shepard

Aug 15, 2011, 10:57:27 AM
In article <j2adb3$4lb$1...@speranza.aioe.org>,
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> I believe most operating systems now zero dynamically allocated
> memory for security reasons. It used to be you got whatever was
> in memory, possibly including data from a previous program.
> That is pretty much not allowed today.

I think this applies to the first time you allocate a block of
memory out of your address space. It does not necessarily apply to
heap/stack memory that your process has previously used.

I first learned to program on a computer that in the daytime was
used for school administration and accounting, and at nights it was
used to process student homework assignments. If you printed the
elements of integer arrays with an A4 format at the very beginning
of your job, you could sometimes see strings of words and sentences
which were left over from the previously running job. I don't think
any of us students ever saw anything useful, but it was the sort of
thing that would not pass security today. On the other hand, I ran
some jobs remotely a few years later on an Air Force computer, and
even short jobs would take a long time to execute. I eventually
learned that the reason was that the memory (65K words) and attached
hard disk (which I think was all of 40 MB) had to be zeroed sector
by sector after each job ran.

$.02 -Ron Shepard

Richard Maine

Aug 15, 2011, 11:53:48 AM
Paul Anton Letnes <paul.ant...@gmail.com> wrote:

> I was under the impression that typing
> real :: a = 0
> would mean that a gets the "save" attribute and hence would retain its
> value on the next entry into the subroutine.

That's correct. I apologize if my somewhat hastily written post started
you thinking that I was contradicting that.

> Does real(0.1, kind=8) at run-time convert a single precision number to
> a double precision number (assuming 4 is single and 8 is double
> precision, which the compilers I have used have assumed)?

The standard has no explicit concept of run-time versus compile time.
There are some parts of the standard where you can see the concept
underneath the surface, but it is never explicit. (In particular,
constraints are intended to be diagnosed at compile time; some of the
funny conditions on constraints only make sense if you realize that they
are targeted at compile-time diagnosis. You read that some particular
case of something is prohibited and then wonder why other cases of the
same obviously bad thing aren't similarly prohibited. Turns out that
the other cases are also prohibited, but that more general prohibition
is stated elsewhere instead of in the constraint because the general
case isn't reasonably diagnosable at compile time.)

The form real(0.1,kind=8) could appear both in expressions that are
meant to be evaluated at compile time (even though the standard doesn't
say it that way) and in expressions that are notionally run time. I'd
expect compilers to optimize such a simple expression by evaluating it
at compile time, but the standard doesn't say anything about that.

The standard just says that it converts the precisions. It doesn't say
anything about compile time versus run time.

I wonder a little bit whether your question about it being "at run-time"
might really be about something else. After all, it isn't as though you
would be able to measure any performance difference between doing this
at compile time or run time.

Much more fundamental, and something that *IS* covered by the standard
is that the 0.1 is a single precision number, which gets converted to
double precision. Regardless of whether this happens at compile time or
run time, in either case it is the single precision number that gets
converted to double - what does not happen is looking back at the 0.1 in
the source code and converting directly from that decimal source code
form to double precision. There might possibly be some compilers that
"help" you out by doing that, but that's certainly not what the standard
describes; it is arguable whether the standard even allows it. (Yes, I
see arguments on both sides, but I don't feel like arguing them, or even
summarizing them now; I'll just leave it as being "arguable").

--
Richard Maine


email: last name at domain . net

domain: summer-triangle

glen herrmannsfeldt

Aug 15, 2011, 12:56:35 PM
Richard Maine <nos...@see.signature> wrote:

(snip)


> Plane boarding (at IAD now). Gotta go.

Always one of my favorite airports. It might be because it
was the first one I went on flights without other family members,
but also its unusual architecture. As I remember it, there are
special buses that take you from the gate to the plane, rising
up to the door level.

Back to the subject, yes, a surprising number of programs seem
to give the right answers even when they aren't right.

-- glen

glen herrmannsfeldt

Aug 15, 2011, 1:10:00 PM
Nomen Nescio <nob...@dizum.com> wrote:

(after I wrote)


>> I believe most operating systems now zero dynamically allocated
>> memory for security reasons. It used to be you got whatever was
>> in memory, possibly including data from a previous program.
>> That is pretty much not allowed today.

> It should be noteworthy that IBM went the other way recently, for
> performance reasons. Specifically DB2 performance. They always documented
> that storage allocated was not zeroed except under certain specific
> conditions but they never enforced it until z/OS 1.10. Now they are
> enforcing it and many old, badly written pieces of code are breaking.

That is interesting. As I understand it, some virtual memory systems
(I believe linux was the one I first heard) have a single page of
zeros that is returned for all allocation requests, using copy on
write methodology. In that case, read before write always returns
zero with no performance cost. I belive that z/Architecture has
instructions for quickly zeroing a page, but presumably still
with some cost.

One system that I used in the OS/VS2 days filled static memory
with X'81'. As I understand it, that requires both the linkage
editor (which fills in small holes left by the DS opcode) and program fetch
(which handles larger holes). X'81' gives a very negative integer
value, and a negative value with a very negative exponent for
floating point. The idea was to expose such problems. It may
also have done it for dynamic (GETMAIN) memory.

>> > It's important to initialize variables before use in every
>> > language, not just Fortran.

>> Except languages that require variable to already be zero.

> Fine, but the number of languages that require you to not reference a
> variable you didn't assign a value to far exceed the number of nanny
> languages, at least in 2011.

Well, C static memory and Java arrays are zeroed. I haven't been
counting lately, so I don't know what fraction that is in 2011.

-- glen

James Van Buskirk

Aug 15, 2011, 1:44:12 PM
"Fritz Wuehler" <fr...@spamexpire-201108.rodent.frell.theremailer.net> wrote
in message
news:e5da580d1e6acf72...@msgid.frell.theremailer.net...

> I think the error is painfully obvious. What's not obvious is the *cause*
> of
> the error. I believe we're all saying the same thing here. I said it's
> good
> practice to always, always, ALWAYS initialize variables before using them
> and happily, all of you seem to be agreeing with me vehemently ;-)

There are a few instances where a variable need not be initialized
before use. Specification inquiries and the MOLD argument to the
TRANSFER intrinsic or the ALLOCATE statement are the examples that
come to mind.

Here is an example where an uninitialized variable (FlagsVar in
subroutine sub) gets partially initialized but the parts that
get initialized are the only ones used. This program is
nonconforming because Fortran really only allows you to do that
for an array where only the elements to be used are initialized,
but unless the Fortran processor is intentionally keeping track
of uninitialized variables it should still work:

C:\gfortran\clf\undef>type undef.f90
module params
   implicit none
   integer, parameter :: ik4 = selected_int_kind(9)
   integer, parameter :: SetBits = 8
   integer, parameter :: ClearBits = 8
end module params

module funcs
   use params
   implicit none
contains
   subroutine sub(FlagsVar,A,B)
      integer(ik4) FlagsVar
      integer(ik4) A
      integer(ik4) B
      integer(ik4) C

      FlagsVar = ior(FlagsVar,maskr(SetBits))
      FlagsVar = iand(FlagsVar, &
         not(ishft(maskr(ClearBits),SetBits)))
      C = ibits(iparity([FlagsVar,A,B]),0, &
         SetBits+ClearBits)
      write(*,'(z8.8)') C
   end subroutine sub
end module funcs

program undef
   use params
   use funcs
   implicit none
   integer(ik4) FlagsVar
   integer(ik4) A
   integer(ik4) B

   A = int(Z'0A0A0A0A',ik4)
   B = int(Z'50505050',ik4)
   call sub(FlagsVar,A,B)
end program undef

C:\gfortran\clf\undef>gfortran undef.f90 -oundef

C:\gfortran\clf\undef>undef
00005AA5

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end


Gordon Sande

Aug 15, 2011, 1:45:33 PM

Same type of policy as the installation that ran outside jobs once their
own hard disk drives had spun down and stopped. Needless to say the outside
jobs ran in a fixed block of time in the small hours of the morning.


Nomen Nescio

Aug 15, 2011, 6:09:55 PM
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> Nomen Nescio <nob...@dizum.com> wrote:
>
> That is interesting. As I understand it, some virtual memory systems
> (I believe linux was the one I first heard) have a single page of
> zeros that is returned for all allocation requests, using copy on
> write methodology. In that case, read before write always returns
> zero with no performance cost. I belive that z/Architecture has
> instructions for quickly zeroing a page, but presumably still
> with some cost.

I do not know of any specific z/Architecture instruction for zeroing a page
or zeroing any area of storage. The most common way to zero an area up to
256 bytes is an XC, just like it always was. Now you can do a long move
(MVCL) and specify a zero length for the source field and a pad character of
x'00'. That is how we almost always zero working storage. But the cost is
significant, or IBM would certainly not go to the trouble of eliminating
this as a default behavior and doing the rest of the work to support it:

> One system that I used in the OS/VS2 days filled static memory
> sith X'81'. As I understand it, that requires both the linkage
> editor (which fills in small holes (DS opcode), and program fetch
> (which handles larger holes). X'81' gives a very negative integer
> value, and a negative value with a very negative exponent for
> floating point. The idea was to expose such problems. It may
> also have done it for dynamic (GETMAIN) memory.

Since so many people didn't follow the doc and took advantage of the fact
IBM was zeroing getmained storage even though they said they wouldn't, there
is an option in z/OS 1.10 and above to use z/OS 1.9 rules for storage
initialization which effectively places us back to where getmained storage
is zeroed. But they also give you another interesting tuning parameter to
allow you to select which value is used for uninitialized storage so you can
check for failures. That helped us spot tons of bad code. I do not believe
the default is X'81' but you could set it to that. Looking at a dump a
string of whatever character you chose stands out like a sore thumb. It was
one of the more ingenious tricks I've seen IBM pull in a long time. Whoever
thought that one up deserves a big promotion.

Fritz Wuehler

Aug 15, 2011, 10:50:54 PM
Gordon Sande <Gordon...@gmail.com> wrote:

> Lots of mainframes will not bother with zeroing out the partition or will
> leave junk that the system creates.

You are not answering anything I wrote but I will answer what you said
anyway. The default is not to zero out storage unless you request specific
types of storage and specific quantities. Application storage is never
zeroed out by the OS on IBM mainframes.

> Leaving trash behind is not automatically a security violation.

Nobody said it was. I don't know who you're talking to.

> Lots of early mainframes were not concerned with security so the trash was
> both system and prior user dependent.

That is nonsense. IBM was always concerned with security and since 1964 it
hasn't ever been broken.

> The history of blank common is that it was an area where the system put
> temporary code before starting the user program, so nothing to do with
> single user PCs.

There is no such thing as "blank common" on mainframes. Have you changed the
subject?

> Much of storage was defined by "blank" areas in the object code when static
> storeage was the typical implementation. Stacks for storage virtually invite
> reuse of areas that have not been zeroed.

I guess you have changed the subject since there are no stacks on IBM
mainframes.

> >> If it were a signalling NaN then you get a hardware exception.
>
> Careful snipping to change the meaning to make your point. Not helpful.

No, I snipped to try and answer you. I can see that wasn't helpful.

glen herrmannsfeldt

Aug 16, 2011, 12:30:51 AM

>> Lots of mainframes will not bother with zeroing out the partition or will
>> leave junk that the system creates.

(snip)

>> Leaving trash behind is not automatically a security violation.

> Nobody said it was. I don't know who you're talking to.

>> Lots of early mainframes were not concerned with security so the
>> trash was both system and prior user dependent.

> That is nonsense. IBM was always concerned with security
> and since 1964 it hasn't ever been broken.

I don't remember seeing anything in memory, but I did once have
the linkage editor read a file that hadn't been written, and instead
read the data that was previously on those disk tracks. That was
OS/360.

>> The history of blank common is that it was an area where the
>> system put temporary code before starting the user program,
>> so nothing to do with single user PCs.

> There is no such thing as "blank common" on mainframes.
> Have you changed the subject?

That sounds about right for the early IBM Fortran systems.

>> Much of storage was defined by "blank" areas in the object
>> code when static storage was the typical implementation.
>> Stacks for storage virtually invite reuse of areas that
>> have not been zeroed.

> I guess you have changed the subject since there are no stacks on IBM
> mainframes.

For OS/360, the DS instruction leaves gaps in the object program.
Small gaps are filled by the linkage editor, such that larger
output records can be generated. Large gaps stay gaps in the load
module. When loaded by program fetch, they retain what was there before.
That might be the initiator code that opens files, and otherwise
prepares to actually run the program.

-- glen

Louisa

unread,
Aug 16, 2011, 6:13:46 AM8/16/11
to
On Aug 14, 7:00 pm, Paul Anton Letnes <paul.anton.let...@gmail.com>
wrote:
From: Paul Anton Letnes <paul.anton.let...@gmail.com>
Date: Sun, 14 Aug 2011 10:00:18 +0100

>Hi!

>I just encountered a bug which was easily fixed by initializing an array
>to zero in the beginning of the relevant subroutine. I was working on
>gfortran, and this was a bug for me. I know someone else uses the PGI
>compiler, and did not see this issue.

>- Do some compilers do this by default (zero variables by default)?

Variables are not initialized to anything by default.

Some compilers might do that, but there is no guarantee.

Usually, an uninitialized variable simply holds whatever happened to be in
that storage before the procedure was executed.


>- Do others have flags for this?

Some compilers offer a facility to check for uninitialized variables.


>- What does the standard have to say on this topic? (i.e. was the code
>standard conforming before I fixed the bug?)


No, it wasn't.

>- What is good form, standards and compilers aside?

It's essential that a variable has been assigned a value before using the
contents of that variable.

You can ensure that either by giving it an initial value or by explicit
assignment.
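
As a minimal sketch of that advice (the subroutine and its arguments are
invented purely for illustration):

    subroutine sum_of_squares(n, x, total)
      implicit none
      integer, intent(in) :: n
      real, intent(in)    :: x(n)
      real, intent(out)   :: total
      integer :: i
      total = 0.0            ! assign before use; do not rely on the compiler zeroing it
      do i = 1, n
        total = total + x(i)**2
      end do
    end subroutine sum_of_squares

Options such as gfortran's -finit-local-zero can paper over a missing
assignment like this, but the explicit assignment is the portable fix.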

Louisa

unread,
Aug 16, 2011, 6:15:35 AM8/16/11
to
On Aug 14, 9:35 pm, Tobias Burnus <bur...@net-b.de> wrote:
From: Tobias Burnus <bur...@net-b.de>
Date: Sun, 14 Aug 2011 13:35:54 +0200

>> - Do others have flags for this?


>gfortan: -finit-local-zero. Intel's ifort: -zero. Note: Which variables
>are affected by those flags is also compiler dependent. Often, it only
>affects scalar (non-character?) variables of intrinsic types.

>The main reason for the flag is that it used to be very common to have
>zero initialized variables

No it wasn't.

> - hence, in particular old programs rely on it.

No they didn't.

In olden times, it was necessary then -- as is now -- to ensure that every
variable was initialized with a value (assignment, etc.) before trying to
use the value of that variable.

Richard Maine

unread,
Aug 16, 2011, 8:23:18 AM8/16/11
to
Louisa <louisa...@gmail.com> wrote:

> On Aug 14, 9:35 pm, Tobias Burnus <bur...@net-b.de> wrote:
> From: Tobias Burnus <bur...@net-b.de>
> Date: Sun, 14 Aug 2011 13:35:54 +0200

> >The main reason for the flag is that it used to be very common to have


> >zero initialized variables
>
> No it wasn't.
>
> > - hence, in particular old programs rely on it.
>
> No they didn't.

Those seem like rather "abrupt" and absolute claims. I certainly
personally saw *LOTS* of programs that depended on this behavior and I
used many compilers that implemented it. It wasn't exactly rare. I'd
have said that it was typical of most older compilers. This is not some
theoretical musing. I saw them myself. I've also personally helped many
people fix such old programs so that they work in environments without
zero initialization. I've also seen (here and elsewhere) plenty of user
requests for compiler options to implement zero initialization in order
to help such old programs work.

In light of all that direct personal experience, I'm a bit unsure how to
reply to a blatant "no they didn't" that appears to deny what I've seen
with my own eyes.

> In olden times, it was necessary then -- as is now -- to ensure that
> every variable was initialized with a value (assignment, etc)
> before trying to use the value of that variable.

Perhaps there is some confusion about the use of the word "necessary"
here. It was certainly necessary in terms of the standard (at least once
there was a published standard well over a decade of Fortran use). But
it most certainly was not "necessary" in terms of making programs work
correctly on many machines/compilers of the day.

Richard Maine

unread,
Aug 16, 2011, 8:39:35 AM8/16/11
to
Fritz Wuehler <fr...@spamexpire-201108.rodent.frell.theremailer.net>
wrote:

> Gordon Sande <Gordon...@gmail.com> wrote:
>
> > The history of blank common is that it was an area where the system put
> > temporary code before starting the user program, so nothing to do with
> > single user PCs.
>
> There is no such thing as "blank common" on mainframes. Have you changed the
> subject?

I'll not address the other questions. It looks a bit to me like the
discussion in the subthread is devolving.

But while one is mentioning changes of subject, I'll note that this is
comp.lang.fortran and the original post was rather specifically about
Fortran.

Fortran *DOES* have blank common. It has it on all implementations,
including those on all mainframes, and it has done so pretty much
forever. There was certainly blank common at least as early as Fortran
II days; I'm not so sure about Fortran I, which I never personally used.

Of course mainframes themselves don't have blank common. Blank common is
a language feature - not a hardware or OS feature. The implementation of
the language feature must, of course, make use of the underlying
hardware and OS somehow or other. There might even be hardware or OS
features that pretty directly map onto Fortran blank common. But blank
common itself is a Fortran feature. There is no need for the quotes, as
it is well defined in the Fortran standard (unless perhaps one is
quibbling about the exact term; I don't recall without checking whether
it might formally be called something like "unnamed common" instead.)
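
For concreteness, a toy sketch of blank common shared between two program
units (the names and values here are invented for the example):

    program fill_blank_common
      implicit none
      real :: a(3)
      common // a               ! blank (unnamed) common
      a = (/ 1.0, 2.0, 3.0 /)
      call show_blank_common
    end program fill_blank_common

    subroutine show_blank_common
      implicit none
      real :: b(3)
      common // b               ! the same storage, seen under a different name
      print *, b                ! prints 1.0 2.0 3.0
    end subroutine show_blank_common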

Klaus Wacker

unread,
Aug 16, 2011, 9:18:04 AM8/16/11
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Fritz Wuehler <fr...@spamexpire-201108.rodent.frell.theremailer.net> wrote:
[...]

>> That is nonsense. IBM was always concerned with security
>> and since 1964 it hasn't ever been broken.
>
> I don't remember seeing anything in memory, but I did once have
> the linkage editor read a file that hadn't been written, and instead
> read the data that was previously on those disk tracks. That was
> OS/360.
>

I once had a table of monthly salary payments of the organization where
I worked in my printout. I'm sure it wasn't meant to be seen by
me. This was part of a rather verbose error message. The error handler
dumped the complete input buffer to the printer, in spite of the fact
that it "knew" none of my data were in there. The error message said
that nothing could be read. It printed what happened to be left in
memory from the previous job.

The OS/360 batch systems and its successors that I used to work on
until the early 1990s had pretty much no security at all, once you
were allowed to use the system. Pretty much all files were accessible
to everybody, you only had to know their names and locations. Some of
the systems ran Wylbur as online system. A batch job could read the
(unencrypted) password file.

I guess the last part of Fritz' rather bold statement is correct:
What's not there can't be broken.


--
Klaus Wacker klaus....@udo.edu
51°29'7"N 7°25'7"E http://www.physik.tu-dortmund.de/~wacker

glen herrmannsfeldt

unread,
Aug 16, 2011, 1:59:21 PM8/16/11
to
Richard Maine <nos...@see.signature> wrote:
> Louisa <louisa...@gmail.com> wrote:

(snip)


>> >The main reason for the flag is that it used to be very
>> > common to have zero initialized variables

>> No it wasn't.

>> > - hence, in particular old programs rely on it.

>> No they didn't.

> Those seem like rather "abrupt" and absolute claims. I certainly
> personally saw *LOTS* of programs that depended on this behavior and I
> used many compilers that implemented it.

Did the compilers implement it, or was it a side effect of zeroing
memory before loading the program? (And not otherwise storing anything
there before the program starts.)

> It wasn't exactly rare. I'd have said that it was typical of
> most older compilers. This is not some theoretical musing.

Documented in the appropriate manual, or statistically determined?

> I saw them myself. I've also personally helped many
> people fix such old programs so that they work in environments without
> zero initialization. I've also seen (here and elsewhere) plenty of user
> requests for compiler options to implement zero initialization in order
> to help such old programs work.

> In light of all that direct personal experience, I'm a bit unsure how to
> reply to a blatant "no they didn't" that appears to deny what I've seen
> with my own eyes.

I do know the OS/360 compilers didn't zero variables.

For the PDP-10/TOPS-10 system, the whole SAV file is loaded into
memory, so whatever is in that spot in the file is the initial
value, which I believe is whatever LINK leaves there.

>> In olden times, it was necessary then -- as is now -- to ensure that
>> every variable was initialized with a value (assignment, etc)
>> before trying to use the value of that variable.

> Perhaps there is some confusion about the use of the word "necessary"
> here. It was certainly necessary in terms of the standard (at least once
> there was a published standard well over a decade of Fortran use). But
> it most certainly was not "necessary" in terms of making programs work
> correctly on many machines/compilers of the day.

I would want to see it documented in the manuals for at least two
systems to support that. If only one, it could be considered
a rare exception. If it is the linker or program loader, then it
could change with a system update even without a compiler change.
(And even for already compiled programs.)

An interesting note in the Fortran I manual:

"The topmost section of data storage is occupied by those variables
which appear in DIMENSION or EQUIVALENCE statements. The
arrangement of this region is such that two programs, whose
DIMENSION and EQUIVALENCE statements are identical, will have this
region allocated identically. This makes it possible to write
families of programs which deal with the same data."

(Emphasis in the original.)

Now, since there were no SUBROUTINE and FUNCTION statements yet,
this "families of programs" must mean successive main programs.
It seems, then, that under certain conditions one could count
on variables having specific values on program start.

This sounds like what might have lead up to the COMMON statement,
adding to the language what was previously a side effect.
Also, it sounds familiar that some systems allocated COMMON
down from the top of memory. (Note that on the 704, arrays
were stored in decreasing order in memory.)

-- glen

glen herrmannsfeldt

unread,
Aug 16, 2011, 2:12:33 PM8/16/11
to
Klaus Wacker <klaus.w...@t-online.de> wrote:

(snip on memory protection, and other system protection)

> I once had a table of monthly salary payments of the organization where
> I worked in my printout. I'm sure it wasn't meant to be seen by
> me. This was part of a rather verbose error message. The error handler
> dumped the complete input buffer to the printer, in spite of the fact
> that it "knew" none of my data were in there. The error message said
> that nothing could be read. It printed what happened to be left in
> memory from the previous job.

> The OS/360 batch systems and its successors that I used to work on
> until the early 1990s had pretty much no security at all, once you
> were allowed to use the system. Pretty much all files were accessible
> to everybody, you only had to know their names and locations. Some of
> the systems ran Wylbur as online system. A batch job could read the
> (unencrypted) password file.

The OS/360 file password protection system requires the system
operator to enter the password on an attempt to access a protected
file. How the operator is supposed to know who ran a job, or who
was allowed to access a file, I don't know.

Otherwise, all system files (OS, compilers, libraries) were world
readable, and protected against accidental deletion by an expiration
date. When they actually do need to be overwritten (system update),
the operator overrides the EXPDT test.

Story I remember was that the WYLBUR password file was encrypted,
but knowing the plaintext for one password was enough to break it.
It should have had OS password (see above) protection, though I
don't know that it did. (Maybe system dependent.)

Later, I believe in MVS, the APF (Authorized Program Facility)
added some more protection. Somewhat similar to unix root,
there are only two levels (APF=0 or 1). Maybe a little like
unix SETUID, where only privileged programs can do certain
things. Again it depends on the system operator to work.

> I guess the last part of Fritz' rather bold statement is correct:
> What's not there can't be broken.

-- glen

Nomen Nescio

unread,
Aug 16, 2011, 2:18:39 PM8/16/11
to
Klaus Wacker <klaus.w...@t-online.de> wrote:

(perfect name by the way, judging from your post)

> >> That is nonsense. IBM was always concerned with security
> >> and since 1964 it hasn't ever been broken.
> >

> The OS/360 batch systems and its successors that I used to work on
> until the early 1990s had pretty much no security at all, once you
> were allowed to use the system. Pretty much all files were accessible
> to everybody, you only had to know their names and locations.

Just because you and your buddies didn't know how to set it up doesn't mean
it's not secure. RACF was around in 1974, and probably earlier than
that. You can set up a system securely or not. You can control access to
every data set, tapes had passwords since the 1960s, disk data sets had two
sets of controls, a password for each data set and RACF to control
access. And there was APF since the 1960s that prevents application code
from getting control of the system.

> the systems ran Wylbur as online system. A batch job could read the
> (unencrypted) password file.

Wylbur isn't part of OS/360. It's a 3rd party product, you dumb shit. And if
the idiot sysprog you had (if indeed it was not you) knew anything, he could
have stopped that from happening with RACF.

> I guess the last part of Fritz' rather bold statement is correct:
> What's not there can't be broken.

You're a dumb wacker of a sonofabitch, and you're wrong. It is there, and
it's better than anything you or your buddies will ever live to come up
with. Even your teen idol Eric Raymond points out MVS security has never
been broken. "No known exploits".

Read it and weep, wacker!

glen herrmannsfeldt

unread,
Aug 16, 2011, 2:24:27 PM8/16/11
to
Richard Maine <nos...@see.signature> wrote:
(snip, someone wrote)
>> There is no such thing as "blank common" on mainframes.
>> Have you changed the subject?

> I'll not address the other questions. It looks a bit to me like the
> discussion in the subthread is devolving.

> But while one is mentioning changes of subject, I'll note that this is
> comp.lang.fortran and the original post was rather specifically about
> Fortran.

> Fortran *DOES* have blank common. It has it on all implementations,
> including those on all mainframes, and it has done so pretty much
> forever. There was certainly blank common at least as early as Fortran
> II days; I'm not so sure about Fortran I, which I never personally used.

As I don't remember reading until today, it seems to have been a
documented side effect of the Fortran I system. If two programs
have the same combination of DIMENSION and EQUIVALENCE statements,
those variables will be allocated in the same place.

> Of course mainframes themselves don't have blank common. Blank common is
> a language feature - not a hardware or OS feature. The implementation of
> the language feature must, of course, make use of the underlying
> hardware and OS somehow or other. There might even be hardware or OS
> features that pretty directly map onto Fortran blank common. But blank
> common itself is a Fortran feature. There is no need for the quotes, as
> it is well defined in the Fortran standard (unless perhaps one is
> quibbling about the exact term; I don't recall without checking whether
> it might formally be called something like "unnamed common" instead.)

Since many of the needed OS features were first designed around
support for Fortran, it isn't so surprising. For OS/360, the
system just puts blanks into the appropriate name field. That
isn't quite as easy as it sounds. Try to name a variable with
all blanks in any language. PL/I and C have features similar
to named common, but maybe not blank common.

-- glen

Gordon Sande

unread,
Aug 16, 2011, 2:40:35 PM8/16/11
to

Fortran II on the IBM 7090 had the notion of a chain. You established
various programs and stored the linked programs on tape (that was all there
was). In one of the program you then called chain (presumabley with the
chain number) then the chain routine replaced the current program with the
desired one, all the while leaving high memory untouched. High memory
is where blank common lived (as well as the linker and some other pieces
of the system - not that there were many). My hazy guess is that the
various chain links were established with control cards for the linker,
perhaps by indicating in advance that the next link was part of a chain.
My hazy guess is that one built a tape with several chain links and then
executed the contents of that tape. Various canned programs, like statistical
packages, were chain tapes.

The notion behind chain was that you knew what was going to come next,
or equivalently what had just been there. The first link in the chain had
to tolerate whatever was in high memory, usually the by then dead linker.
In the small memory (all 32k words) common was a way of sharing space with
the linker. It was possible for a program to be too big until some
storage was moved into blank common.

Overlays came with Fortran IV (on IBSYS) and were a big step forward.

> This sounds like what might have lead up to the COMMON statement,
> adding to the language what was previously a side effect.
> Also, it sounds familiar that some systems allocated COMMON
> down from the top of memory. (Note that on the 704, arrays
> were stored in decreasing order in memory.)

The index registers subtracted, if memory is correct, so decreasing order
made sense.

> -- glen


glen herrmannsfeldt

unread,
Aug 16, 2011, 3:34:40 PM8/16/11
to
Gordon Sande <Gordon...@gmail.com> wrote:

(snip, I wrote)


>> An interesting note in the Fortran I manual:

>> "The topmost section of data storage is occupied by those variables
>> which appear in DIMENSION or EQUIVALENCE statements. The
>> arrangement of this region is such that two programs, whose
>> DIMENSION and EQUIVALENCE statements are identical, will have this
>> region allocated identically. This makes it possible to write
>> families of programs which deal with the same data."

(snip)

> Fortran II on the IBM 7090 had the notion of a chain. You established
> various programs and stored the linked programs on tape (that was all there
> was). In one of the programs you then called chain (presumably with the
> chain number) then the chain routine replaced the current program with the
> desired one, all the while leaving high memory untouched.

That is what the 704 manual seems to allow for, but it doesn't mention
how the programs get run.

> High memory
> is where blank common lived (as well as the linker and some other pieces
> of the system - not that there were many). My hazy guess is that the
> various chain links were established with control cards for the linker,
> perhaps by indicating in advance that the next link was part of a chain.
> My hazy guess is that one built a tape with several chain links and then
> executed the contents of that tape. Various canned programs,
> like statistical packages, were chain tapes.

Also, reminds me of the HP2000 BASIC system. That is how it runs
larger programs. There is the COM statement that looks similar to
Fortran COMMON, and CHAIN to run the next program.

> The notion behind chain was that you knew what was going to come next,
> or equivalently what had just been there. The first link in the chain had
> to tolerate whatever was in high memory, usually the by then dead linker.
> In the small memory (all 32k words) common was a way of sharing space with
> the linker. It was possible for a program to be too big until some
> storage was moved into blank common.

> Overlays came with Fortran IV (on IBSYS) and were a big step forward.

Likely for both users and compilers.

>> This sounds like what might have lead up to the COMMON statement,
>> adding to the language what was previously a side effect.
>> Also, it sounds familiar that some systems allocated COMMON
>> down from the top of memory. (Note that on the 704, arrays
>> were stored in decreasing order in memory.)

> The index registers subtracted, if memory is correct, so
> decreasing order made sense.

Yes, but it is also convenient for COMMON allocated from the
end of memory. At least by Fortran 66 blank common is allowed
to be a different size in different routines. Named common blocks
are supposed to be the same size. (Systems may allow different
sizes as an extension.)

As it is described, the relocatable addresses go down from 77777,
but on the object machine (run time), they are the end of memory
of whatever size, I believe from 4K to 32K words.
The compiler is supposed to be able to run in 4K (36 bit words).

-- glen

Nomen Nescio

unread,
Aug 16, 2011, 4:17:23 PM8/16/11
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> Later, I believe in MVS, the APF (Authorized Program Facility)
> added some more protection. Somewhat similar to unix root,
> there are only two levels (APF=0 or 1). Maybe a little like
> unix SETUID, where only privileged programs can do certain
> things. Again it depends on the system operator to work.

No, APF is a separate issue, it controls what systems services you can use.
RACF deals with authorities for all types of resources (Resource Access
Control Facility) and it has been around since MVS. All this preceded UNIX,
so it is not like UNIX. If anything, UNIX implemented what little "security"
it has so badly in an attempt to not be like IBM.

glen herrmannsfeldt

unread,
Aug 16, 2011, 5:08:53 PM8/16/11
to
Nomen Nescio <nob...@dizum.com> wrote:

(snip, I wrote)


>> Later, I believe in MVS, the APF (Authorized Program Facility)
>> added some more protection. Somewhat similar to unix root,
>> there are only two levels (APF=0 or 1). Maybe a little like
>> unix SETUID, where only privileged programs can do certain
>> things. Again it depends on the system operator to work.

> No, APF is a separate issue, it controls what systems services
> you can use. RACF deals with authorities for all types of
> resources (Resource Access Control Facility) and it has been
> around since MVS.

Still, a lot later than OS/360.

I have not used APF and RACF, but have read about them.

> All this preceded UNIX, so it is not like UNIX. If anything,
> UNIX implemented what little "security"
> it has so badly in an attempt to not be like IBM.

I meant that it was like unix in that it doesn't give the
fine-grained control of, for example, VMS. VMS has many different
privileges which you can turn on or off for individual users.

-- glen

William Clodius

unread,
Aug 16, 2011, 6:31:05 PM8/16/11
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> Richard Maine <nos...@see.signature> wrote:
> <snip>


>
> I would want to see it documented in the manuals for at least two
> systems to support that. If only one, it could be considered
> a rare exception. If it is the linker or program loader, then it
> could change with a system update even without a compiler change.
> (And even for already compiled programs.)
>
> An interesting note in the Fortran I manual:
>
> "The topmost section of data storage is occupied by those variables
> which appear in DIMENSION or EQUIVALENCE statements. The
> arrangement of this region is such that two programs, whose
> DIMENSION and EQUIVALENCE statements are identical, will have this
> region allocated identically. This makes it possible to write
> families of programs which deal with the same data."
>
> (Emphasis in the original.)
>
> Now, since there were no SUBROUTINE and FUNCTION statements yet,
> this "families of programs" must mean successive main programs.
> It seems, then, that under certian conditions one could count
> on variables having specific values on program start.

> <snip>
> -- glen

While FORTRAN (I) did not allow users to write subroutines or functions
in FORTRAN, it did allow users to call functions and subroutines from a
library (http://www.fortran.com/FortranForTheIBM704.pdf). The term
program appears to consistently refer to the main program, but since the
pdf is not searchable I may have missed exceptions to this.

--
Bill Clodius
los the lost and net the pet to email

glen herrmannsfeldt

unread,
Aug 16, 2011, 6:41:22 PM8/16/11
to
William Clodius <wclo...@lost-alamos.pet> wrote:

(snip, I wrote)

>> Now, since there were no SUBROUTINE and FUNCTION statements yet,
>> this "families of programs" must mean successive main programs.

(snip)


> While FORTRAN (I) did not allow users to write subroutines or functions
> in FORTRAN it did allow users to call functions and subroutines from a
> library (http://www.fortran.com/FortranForTheIBM704.pdf). The term
> program appears to consistently refer to the main program, but since the
> pdf is not searc hable I may have missed exceptions to this.

Later on, the manual explains how to write assembly functions
and add them to the master tape.

Interestingly, as noted previously, arrays and EQUIVALENCE variables
start at 77777 (octal) in relocatable addresses.

For library routines on the master tape...

"The subroutine itself and any constants that it requires
should be located in relocatable locations 0, 1, 2, ... It may
also make use of a common storage region of any desired length n,
beginning with relocatable location 77777-(n-1) and ending with
relocatable location 77777."

Next, regarding arguments:

"At the moment of transfer to the subroutine Arg1 will have been
placed in AC, Arg2 (if it exists) in the MQ, Arg3 (if it exists)
in relocatable location 77775 of the common region..."

So, it seems that your common variables can get written over
by subroutine arguments.

As far as I know, copies of the Fortran II compiler and IBSYS
exist, but no copies of Fortran I. Otherwise, we could look
to see how it actually worked.

-- glen

John Harper

unread,
Aug 16, 2011, 7:29:22 PM8/16/11
to
Richard Maine wrote:

As Richard said, 0.1 is single precision. The consequence: 4 different
compilers (g95, gfortran, ifort, Sun f95) printed 1.0000000149011612E-01,
not 1.0000000000000000E-01, when running this 2-line program:

print "(ES23.16)",real(0.1,kind(1d0))
end

-- John Harper

Richard Maine

unread,
Aug 16, 2011, 8:48:57 PM8/16/11
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> Richard Maine <nos...@see.signature> wrote:

[about explicit initialization of variables to zero, as opposed to
having it happen implicitly]

> > Perhaps there is some confusion about the use of the word "necessary"
> > here. It was certainly necessary in terms of the standard (at least once
> > there was a published standard well over a decade of Fortran use). But
> > it most certainly was not "necessary" in terms of making programs work
> > correctly on many machines/compilers of the day.
>
> I would want to see it documented in the manuals for at least two
> systems to support that. If only one, it could be considered
> a rare exception. If it is the linker or program loader, then it
> could change with a system update even without a compiler change.
> (And even for already compiled programs.)

Sorry. I'm not interested in bothering to try to dig up documentation. I
wouldn't even swear I ever even saw such. Many of the people I worked
with in those days weren't that much into documentation. If you want to
see such in order to believe it, then I guess you'll just have to get by
without believing it. And I neither know nor care whether it was the
compiler, linker, OS, or whatever.

What I do know is that I personally have observed *LOTS* of Fortran code
that depended on this feature and was run on many machines where it
worked. I probably wrote some such code myself in my foolish youth (as
opposed to my current age, where I am still certainly foolish, but in
different ways). And questions about such code pop up in this newsgroup
quite regularly; that's because there is still a lot of it around. You
might also ask some of the compiler developers why they have special
switches for the purpose. Apparently my word isn't good enough.

Apparently also the word of Tobias, who I thought I recalled actually
was a compiler developer, isn't good enough either. I jumped into this
thread after Tobias commented that this was the reason for the compiler
switch in the compiler he helped develop. I would have thought he'd be a
pretty reliable source for information like that. In fact, part of the
reason I posted was that I thought it a bit ironic that someone who did
not (to my knowledge) have anything to do with the development of the
compiler felt qualified to contradict Tobias' explanation of the reason
with just "No it wasn't".

*I* know this was the case for many old codes. If that's not enough for
you, I can understand that. But I'm not interested in putting out the
effort to document it. (I don't think any of the codes I've seen this in
date back to the Fortran 1 days, though. Fortran II was the earliest I
saw much code for).

Richard Maine

unread,
Aug 16, 2011, 9:08:17 PM8/16/11
to
Nomen Nescio <nob...@dizum.com> wrote:

[an obscenity-filled diatribe]

I suppose that odds are high that you don't care, but I will note that
one more post like that (including the one that I half expect to see in
response to this) will get you on at least my killfile (and probably
that of several other people here). No, I'm not particularly prudish.
I've been known to use such terminology and worse on occasion. But that
level of discourse (to make rather liberal use of the word "discourse")
is not appropriate to the context of this newsgroup and will not be read
by me. Post it to the newsgroup if you like; I can't control that. But I
can control whether I ever see anything you post again.

glen herrmannsfeldt

unread,
Aug 16, 2011, 9:20:27 PM8/16/11
to
Richard Maine <nos...@see.signature> wrote:

(snip)


> [about explicit initialization of variables to zero, as opposed to
> having it happen implicitly]

>> > Perhaps there is some confusion about the use of the word "necessary"
>> > here. It was certainly necessary in terms of the standard (at least once
>> > there was a published standard well over a decade of Fortran use).

(snip, then I wrote)


>> I would want to see it documented in the manuals for at least two
>> systems to support that. If only one, it could be considered
>> a rare exception. If it is the linker or program loader, then it
>> could change with a system update even without a compiler change.
>> (And even for already compiled programs.)

> Sorry. I'm not interested in bothering to try to dig up documentation. I
> wouldn't even swear I ever even saw such. Many of the people I worked
> with in those days weren't that much into documentation. If you want to
> see such in order to believe it, then I guess you'll just have to get by
> without believing it. And I neither know nor care whether it was the
> compiler, linker, OS, or whatever.

I believe that some systems did it, but was it intentional?
That is harder to say.

I believe that many now do it as a side effect of using the
backend of a C compiler, which naturally zeros static data.
Stack data tends not to be zero, but that isn't relevant to
the olden days. I do remember the TOPS-10 compiler using
the stack for subroutine calls, but all data was still static.

> What I do know is that I personally have observed *LOTS* of Fortran code
> that depended on this feature and was run on many machines where it
> worked. I probably wrote some such code myself in my foolish youth (as
> opposed to my current age, where I am still certainly foolish, but in
> different ways). And questions about such code pop up in this newsgroup
> quite regularly; that's because there is still a lot of it around. You
> might also ask some of the compiler developers why they have special
> switches for the purpose. Apparently my word isn't good enough.

But there were also many where it didn't. As I posted, it seems
to have been an IBM Fortran I (and, as others posted, Fortran II)
feature that you could read data from the previous program run.
That requires it not being zeroed.

> Apparently also the word of Tobias, who I thought I recalled actually
> was a compiler developer isn't good enough either. I jumped into this
> thread after Tobias commented that this was the reason for the compiler
> switch in the compiler he helped develop. I would have thought he'd be a
> pretty reliable source for information like that. In fact, part of the
> reason I posted was that I thought it a bit ironic that someone who did
> not (to my knowledge) have anything to do with the development of the
> compiler felt qualified to contradict Tobias' explanation of the reason
> with just "No it wasn't".

It seems pretty certain that neither all nor no compilers zeroed
all variables. Exactly where in between is still open.

Most of the ones I used didn't. I do remember learning about the
problem of sines of large numbers from the IBM OS/360 compiler
message, most likely due to some value left in memory.

> *I* know this was the case for many old codes. If that's not enough for
> you, I can understand that. But I'm not interested in putting out the
> effort to document it. (I don't think any of the codes I've seen this in
> date back to the Fortran 1 days, though. Fortran II was the earliest I
> saw much code for).

And I know as well as you, that just because something works
on one (or a few) compilers doesn't mean it is supposed to work,
or will keep working.

-- glen
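
A sketch of the static-versus-stack distinction glen draws above, in Fortran
terms (the behaviour noted in the comments is typical practice on many
systems, not anything the standard promises):

    subroutine show_storage()
      implicit none
      real, save :: s    ! static storage: often lands in a zeroed segment
                         ! (e.g. .bss), so it frequently "happens" to start at 0.0
      real :: t          ! usually stack (automatic) storage: starts as whatever
                         ! bytes were left behind; the standard leaves both undefined
      print *, s, t
    end subroutine show_storage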

Richard Maine

unread,
Aug 16, 2011, 9:33:00 PM8/16/11
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
[about initial zeroing of memory]

> I believe that some systems did it, but was it intentional?
> That is harder to say.

I have no idea about that. That's not what I was claiming - just that it
happened commonly enough for lots of programs to depend on it. I make no
claim of any kind about intent.

> And I know as well as you, that just because something works
> on one (or a few) compilers doesn't mean it is supposed to work,
> or will keep working.

Of course. I definitely agree. As noted above, I claim no knowledge
relating to whether it was supposed to work. And it most certainly did
not just keep working in all environments or I wouldn't have had to
spend so much time helping people fix the problems that showed up when
they tried to run codes in environments where it no longer worked.

Louisa

unread,
Aug 16, 2011, 9:37:34 PM8/16/11
to
On Aug 17, 9:29 am, John Harper <john.har...@vuw.ac.nz> wrote:
> As Richard said, 0.1 is single precision. The consequence: 4 different
> compilers (g95, gfortran, ifort, Sun f95) printed 1.0000000149011612E-01,
> not 1.0000000000000000E-01, when running this 2-line program:
>
> print "(ES23.16)",real(0.1,kind(1d0))
> end

Try 0.1d0

Louisa

unread,
Aug 16, 2011, 9:28:40 PM8/16/11
to
On Aug 16, 10:23 pm, nos...@see.signature (Richard Maine) wrote:

> Louisa <louisa.hu...@gmail.com> wrote:
> > On Aug 14, 9:35 pm, Tobias Burnus <bur...@net-b.de> wrote:
> > From: Tobias Burnus <bur...@net-b.de>
> > Date: Sun, 14 Aug 2011 13:35:54 +0200
> > >The main reason for the flag is that it used to be very common to have
> > >zero initialized variables
>
> > No it wasn't.
>
> > > - hence, in particular old programs rely on it.
>
> > No they didn't.
>
> Those seem like rather "abrupt" and absolute claims. I certainly
> personally saw *LOTS* of programs that depended on this behavior and I
> used many compilers that implemented it.

Such as?

As a consultant, I have seen several thousand FORTRAN and Fortran programs
that erroneously relied on that behavior because their owners/authors
assumed that the compiler initialized to zero. However, the compilers that
they used never did. And having used many Fortran compilers, never have I
found one that initialized anything to anything.
WATFOR actually did a run-time test for uninitialized variables,
and issued a diagnostic.

> It wasn't exactly rare. I'd
> have said that it was typical of most older compilers.

No it wasn't. Name one that did.

> This is not some
> theoretical musing. I saw them myself. I've also personally helped many
> people fix such old programs so that they work in environments without
> zero initialization. I've also seen (here and elsewhere) plenty of user
> requests for compiler options to implement zero initialization in order
> to help such old programs work.
>
> In light of all that direct personal experience, I'm a bit unsure how to
> reply to a blatant "no they didn't" that appears to deny what I've seen
> with my own eyes.
>
> > In olden times, it was necessary then -- as is now -- to ensure that
> > every variable was initialized with a value (assignment, etc)
> > before trying to use the value of that variable.
>
> Perhaps there is some confusion about the use of the word "necessary"
> here. It was certainly necessary in terms of the standard (at least once
> there was a published standard well over a decade of Fortran use). But
> it most certainly was not "necessary" in terms of making programs work
> correctly on many machines/compilers of the day.

No confusion about the word "necessary".
It was necessary. As I said, none of the Fortran compilers I used
did initialization.
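
As an aside for readers hitting this today: the run-time counterpart of that
WATFOR check can be approximated with options such as gfortran's
-finit-real=snan together with -ffpe-trap=invalid, which make the first use
of an uninitialized real trap instead of silently producing garbage. A tiny
program that would trip it (illustrative only):

    program oops
      implicit none
      real :: total          ! never assigned
      total = total + 1.0    ! with snan initialization and trapping, this aborts here
      print *, total
    end program oops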

Louisa

unread,
Aug 16, 2011, 9:46:46 PM8/16/11
to
On Aug 17, 11:33 am, nos...@see.signature (Richard Maine) wrote:
> glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>
>   [about initial zeroing of memory]
>
> > I believe that some systems did it, but was it intentional?
> > That is harder to say.
>
> I have no idea about that. That's not what I was claiming - just that it
> happened commonly enough for lots of programs to depend on it. I make no
> claim of any kind about intent.

More than likely what you experienced was this:
programs that assumed that variables were initialized worked only by chance.
For example, a real variable that happened to be allocated to a word
already containing a smallish integer would look like a very small value.
Such small values, when added to, say, a value greater than unity,
would have no effect.
Thus, failing to initialize a variable to zero prior to summing the
elements of an array would very likely still yield the correct answer.
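
A small sketch of that effect, assuming IEEE single precision (the bit
pattern chosen here is arbitrary):

    program worked_by_chance
      implicit none
      real :: total
      real :: x(3) = (/ 1.0, 2.0, 3.0 /)
      integer :: i
      ! Pretend the accumulator inherited a smallish integer bit pattern
      ! instead of being set to zero; reinterpreted as a real it is a tiny denormal.
      total = transfer(5, 0.0)
      print *, 'leftover value looks like:', total   ! about 7e-45 on IEEE hardware
      do i = 1, size(x)
        total = total + x(i)
      end do
      print *, 'sum is still effectively correct:', total
    end program worked_by_chance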

Richard Maine

unread,
Aug 16, 2011, 9:50:40 PM8/16/11
to
Louisa <louisa...@gmail.com> wrote:

Well yes, that's a fix, but the point wasn't to ask how to do that. It
was to illustrate what the REAL intrinsic does (and doesn't do). I'm
assuming (and am pretty confident of my assumption) that both Paul and
John knew perfectly well what 0.1d0 would do here.
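
To spell the distinction out, a two-line sketch in the style of John's
program (the first output is what the thread reports; the second is only
described approximately):

    print "(ES23.16)", real(0.1, kind(1d0))   ! single-precision 0.1 promoted: 1.0000000149011612E-01
    print "(ES23.16)", 0.1d0                  ! genuine double-precision constant: 0.1 to roughly 16 significant digits
    end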

Richard Maine

unread,
Aug 16, 2011, 10:01:48 PM8/16/11
to
Louisa <louisa...@gmail.com> wrote:

> On Aug 17, 11:33 am, nos...@see.signature (Richard Maine) wrote:
> > glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> >
> > [about initial zeroing of memory]
> >
> > > I believe that some systems did it, but was it intentional?
> > > That is harder to say.
> >
> > I have no idea about that. That's not what I was claiming - just that it
> > happened commonly enough for lots of programs to depend on it. I make no
> > claim of any kind about intent.
>
> More than likely what you experienced was this:
> Programs that assumed that variables were initialized
> worked only by chance.

Sorry, but no, that's just not so. I am *VERY* confident of that and I
have seen *LOTS* of supporting data. I lived and worked through that
era. No, it isn't based just on results happening to work out right in
some cases. As I mentioned to Glen, I'm not really interested in digging
out documentation to prove it to people for whom that isn't sufficient
data. In fact, I'm sufficiently disinclined to do so, that I think I'll
no longer reply on this topic, as without such documentation, I suppose
the thread can only devolve into "does so"/"does not"/"does so"/etc.

P.S. Though I can't be bothered to research the documentation of the older
compilers from when the practice started, documentation of switches
to at least partly support the practice in current compilers *IS*
available. I suggest asking the compiler developers why those switches
exist. You might also ask Lynn, who posts here regularly about a rather
large code which he works with and which I thought I recalled was one
example of a code that needed the feature (though that's not one of the
codes I've personally seen and I'm not nearly as confident of the
second-hand data or my recollection about it).

Fritz Wuehler

unread,
Aug 17, 2011, 2:14:32 AM8/17/11
to
nos...@see.signature (Richard Maine) wrote:

> Fritz Wuehler <fr...@spamexpire-201108.rodent.frell.theremailer.net>
> wrote:
>
> > Gordon Sande <Gordon...@gmail.com> wrote:
> >
> > > The history of blank common is that it was an area where the system put
> > > temporary code before starting the user program, so nothing to do with
> > > single user PCs.
> >
> > There is no such thing as "blank common" on mainframes. Have you changed the
> > subject?
>
> I'll not address the other questions. It looks a bit to me like the
> discussion in the subthread is devolving.

Thanks to Gordon the argumentative, Cliff Claven wannabe.

> But while one is mentioning changes of subject, I'll note that this is
> comp.lang.fortran and the original post was rather specifically about
> Fortran.

That's right. So if you want to argue with me, try quoting me and saying
what you find off topic, because as it stands, you're selectively quoting me
responding to asinine misstatements from Gordy-boy and then objecting to that.

> Fortran *DOES* have blank common. It has it on all implementations,
> including those on all mainframes, and it has done so pretty much
> forever. There was certainly blank common at least as early as Fortran
> II days; I'm not so sure about Fortran I, which I never personally used.

Nobody said Fortran didn't have blank common. Here is what Gordon wrote,
since you seem to be arguing with me out of context:

>From Gordon...@gmail.com Tue Aug 16 15:48:21 2011
Path: eternal-september.org!mx04.eternal-september.org!feeder.eternal-september.org!.POSTED!not-for-mail
From: Gordon Sande <Gordon...@gmail.com>
Newsgroups: comp.lang.fortran
Subject: Re: zero-initialization of variables
Date: Sun, 14 Aug 2011 19:24:54 -0300
Organization: A noiseless patient Spider
Lines: 83
Message-ID: <j29hvm$3lc$1...@dont-email.me>
References: <85ffd740e36f777f...@dizum.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: mx04.eternal-september.org; posting-host="xWNePTTz0IIXQJxxGarj/w";
logging-data="3756"; mail-complaints-to="ab...@eternal-september.org"; posting-account="U2FsdGVkX1+WnWxC42EBZKwBVBfKrkvh"
User-Agent: Unison/2.1.5
Cancel-Lock: sha1:3fnG5rQG5u2ptxrqA61TCh+K3g8=
Xref: comp.lang.fortran:1584

> Gordon Sande <Gordon...@gmail.com> wrote:

snip

"The history of blank common is that it was an area where the system put
temporary code before starting the user program, so nothing to do with
single user PCs."

Here we find Gordon jerking off in public, making idiot statements about
something called "blank common" and saying the system puts temporary code
before starting the user program. That statement is so wrong and so
misleading there's almost no way to fix it. It's not a place for code,
Gordon, it's a place to share variables. Got it? Been that way since the
1960s and still works that way today. In FORTRAN. It's not a mainframe
concept.

He goes on and makes it worse by saying:

"Much of storage was defined by "blank" areas in the object code when static
storage was the typical implementation. Stacks for storage virtually invite
reuse of areas that have not been zeroed."

The first sentence is wrong and shows Gordy doesn't know anything about the
details, although it's true even today that object modules (and load
modules made from them) do contain space for non-reentrant variables
declared in the program; that's how it has always worked on IBM
mainframes. That's not "was defined", that's "is defined". As we have come to
expect from Cliff Claven (I mean Gordy) that has nothing to do with what
anybody said. He's just blowing smoke out his ass.

The next statement he makes (about stacks) is a useless tautology. As I
pointed out in an earlier post, he's not talking about mainframes, at least
not the 99.9% market leader IBM's mainframes, because none of those ever had
a stack for user storage and they don't today. He's just jerking off in
public for the sake of jerking off in public, arguing about stuff he has not
a fucking clue about, and making himself look really, really bad.

> Of course mainframes themselves don't have blank common. Blank common is
> a language feature - not a hardware or OS feature. The implementation of
> the language feature must, of course, make use of the underlying
> hardware and OS somehow or other. There might even be hardware or OS
> features that pretty directly map onto Fortran blank common. But blank
> common itself is a Fortran feature. There is no need for the quotes, as
> it is well defined in the Fortran standard (unless perhaps one is
> quibbling about the exact term; I don't recall without checking whether
> it might formally be called something like "unnamed common" instead.)

Well said, maybe he'll listen to you.

Thomas Jahns

unread,
Aug 17, 2011, 5:01:46 AM8/17/11
to
On 08/14/2011 06:12 PM, Richard Maine wrote:
> You wouldn't be helping efficiency either. In cases where the code does
> require the initialization, the worst case for performance is that you
> are telling the compiler to do the same thing that it would have done
> anyway. In other cases, you can sometimes improve performance (such as
> by doing run-time initialization, which will sometimes be much faster
> than loading huge arrays from the compiled executable), or, even more
> importantly, make the program work correctly.

Just adding my 2 cents here: some platforms restrict the size of initialized
data objects. XCOFF, for example, requires the data segment of a single .o file to
be less than 256MB in size. An uninitialized array goes to the bss segment,
which has lesser restrictions because it's not represented O(n) but O(1).

Thomas Jahns
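
A sketch of the placement Thomas describes (module and array names invented
for this example; exactly how an initialized array is stored in the object
file is compiler- and format-dependent):

    ! Illustrative only: typical placement with XCOFF/ELF-style toolchains.
    module big_arrays
      implicit none
      real :: zero_at_load(50000000)             ! no initializer: normally .bss, O(1) in the .o file
      real :: carried_in_object(50000000) = 0.0  ! initialized: counts against the data segment, may be O(n)
    end module big_arrays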

Louis Krupp

unread,
Aug 17, 2011, 9:01:15 AM8/17/11
to

Historically, Burroughs Large Systems handled things as follows:

Scalars were allocated on the stack, and stack cells were allocated by
pushing zero words on top of the stack. This is in contrast to many
(most?) systems, in which stack cells are allocated by decrementing
the stack pointer (on systems where the stack grows downward), leaving
the contents of those cells unchanged.

Arrays were allocated one stack cell for a descriptor with a zero
presence bit. When the array was first indexed, the zero presence bit
triggered an interrupt, and the Master Control Program (the OS)
invoked a routine called GETSPACE which allocated physical memory,
filled the array with zeros and set the presence bit. When the system
ran low on memory, arrays could be written to disk and the descriptor
updated accordingly. Since that same memory could be subsequently
allocated by another program, filling it with zeros made sense from
the point of view of security.

I know this was the case with ALGOL, and I have no recollection (or
reason to believe) that FORTRAN was different.

I have no reason to believe that current Unisys descendants of Large
Systems have changed anything with regard to data initialization. A
quick look at an online copy of a 1992 Fortran 77 manual didn't turn
up anything striking.

Louis

Nomen Nescio

unread,
Aug 17, 2011, 9:59:45 AM8/17/11
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> Nomen Nescio <nob...@dizum.com> wrote:
>
> (snip, I wrote)
> >> Later, I believe in MVS, the APF (Authorized Program Facility)
> >> added some more protection. Somewhat similar to unix root,
> >> there are only two levels (APF=0 or 1). Maybe a little like
> >> unix SETUID, where only privileged programs can do certain
> >> things. Again it depends on the system operator to work.
>
> > No, APF is a separate issue, it controls what systems services
> > you can use. RACF deals with authorities for all types of
> > resources (Resource Access Control Facility) and it has been
> > around since MVS.
>
> Still, a lot later than OS/360.

Not really. OS/360 hit the shelves in 1964, MVS was out in the late
60s/early 70s. I don't remember exactly when. I know by MVS 3.8J in the
early 1970s RACF was available. 6 years is not "a lot later".

> I have not used APF and RACF, but have read about them.

APF is something you have to be aware of if you write systems software, if
not you won't even have any idea it exists. You don't *use* it per se, it
doesn't have any APIs, it just controls the way things work. It's a
protection against unauthorized code doing authorized things. It protects
the system to a degree it's never been compromised, it just doesn't have any
holes. RACF is more or less transparent too, although when you set something
new up you usually don't have all the authorities you need, you get warning
or error messages, and the security guy fixes it.

If you have APF authorization you bypass most if not all RACF checks. If
not, as long as you don't go where you're not allowed to go, you won't ever
see it. All this stuff is designed to work without bothering anyone but
offenders, and it does that remarkably well.

> > All this preceded UNIX, so it is not like UNIX. If anything,
> > UNIX implemented what little "security"
> > it has so badly in an attempt to not be like IBM.
>
> I meant that it was like unix in that it doesn't give the
> fine grain control like, for example VMS. VMS has many different
> privileges which you can turn on or off for individual users.

I meant MVS is not like UNIX because MVS preceded UNIX. To talk about it
correctly you really should say UNIX feature x is or is not like MVS feature
x. For the younger guys it's important to let them know IBM got it right
before UNIX ever screwed it up. MVS was there first and the world still runs
on it. Years later, UNIX is still a toy operating system designed by
small-minded programmers with idiotic error messages and many embarrassing
shortcomings.

If you have any examples of what you can do in VMS I'll try to answer
whether IBM already had it or not. I doubt any OS has as much capability and
protection as MVS, I've certainly never heard of one.

RACF does support pretty fine-grained controls. Anything can be defined as a
resource, but take a data set as an example. You can define rules to control
who can read it, update it, execute it, delete it, rename it, or create it
if it doesn't currently exist.

There can be lists of authorized users for each of these types of actions. You
can do it by creating groups or specifying individual users in a list that
isn't part of any group. In other words, you don't have to create groups
just for the sake of creating groups. If you have a group of users that need
identical authorities, fine then create a group. But if you have a wide
variety of differences among users you can specify everything by adding
individual users to access lists while they are or are not defined also in a
group or groups.

It's obviously more powerful than the newer UNIX owner/group/world concept
because you can specify access lists for each type of access; you're not
limited to having to define a group. And dealing with users and groups in
UNIX is still a mess of editing files and issuing ridiculous commands. MVS
has had a menu system for this plus batch control support for many decades
already.

UNIX ACL tried to approach the functionality of RACF much later. I haven't
looked at it in awhile but I believe it still falls short of IBM's original
design and implementation. That's the great thing about MVS. IBM got so many
things right and was very careful managing change, they planned ahead and
did things right the first time. They never created the holes and endless
list of shortcomings that UNIX has designed into it.


Ken Fairfield

unread,
Aug 17, 2011, 12:06:03 PM8/17/11
to
On Aug 16, 11:14 pm, Fritz Wuehler

<fr...@spamexpire-201108.rodent.frell.theremailer.net> wrote:
> nos...@see.signature (Richard Maine) wrote:
> > Fritz Wuehler <fr...@spamexpire-201108.rodent.frell.theremailer.net>
> > wrote:
>
> > > Gordon Sande <Gordon.Sa...@gmail.com> wrote:
>
> > > > The history of blank common is that it was an area where the system put
> > > > temporary code before starting the user program, so nothing to do with
> > > > single user PCs.
>
> > > There is no such thing as "blank common" on mainframes. Have you changed the
> > > subject?
>
> > I'll not address the other questions. It looks a bit to me like the
> > discussion in the subthread is devolving.
>
> Thanks to Gordon the argumentative, Cliff Claven wannabe.
[...]

Seems to me that "Fritz Wuehler", "Louisa" and "Nomen Nescio"
are all variations on the same, newly arrived, troll.

!Plonk!

(Except I really can't since I'm restricted to GG, sigh...)
I'd suggest others not feed the trolls, if possible.


On Richard's point earlier in the thread about experience
with various systems' Fortran compilers and uninitialized
variables starting with value zero:

I'm a relative newbie having been "brought up" on VAX/VMS
starting in about 1982. VMS definitely initialized programs'
data space to zero and *MOST* of the Fortran programs
"we" wrote depended upon that feature. The rest of SLAC
were using FORTRAN H on the IBM 360 under VM/CMS
at that time, and collaborating with others on similar
hardware at DESY and CERN. Those programs also
assumed initialized-to-zero behavior.

It was only much later, I believe when Sun and other unixes
started making inroads, that people started finding that
they needed explicit assignment or initialization, but by
that time, much of HEP were moving to C/C++ anyway.

I had very little experience with compilers running on PCs
so I don't know what typical behavior was on those. I do
know that it took quite some time to have "industrial
strength" compilers available on those platforms (Lahey
being one of the better ones IIRC).

-Ken

Dick Hendrickson

unread,
Aug 17, 2011, 12:30:02 PM8/17/11
to
On 8/16/11 8:20 PM, glen herrmannsfeldt wrote:
> Richard Maine<nos...@see.signature> wrote:
>
> (snip)
>> [about explicit initialization of variables to zero, as opposed to
>> having it happen implicitly]
>
>>>> Perhaps there is some confusion about the use of the word "necessary"
>>>> here. It was certainly necessary in terms of the standard (at least once
>>>> there was a published standard well over a decade of Fortran use).
>
> (snip, then I wrote)
>>> I would want to see it documented in the manuals for at least two
>>> systems to support that. If only one, it could be considered
>>> a rare exception. If it is the linker or program loader, then it
>>> could change with a system update even without a compiler change.
>>> (And even for already compiled programs.)
>
>> Sorry. I'm not interested in bothering to try to dig up documentation. I
>> wouldn't even swear I ever even saw such. Many of the people I worked
>> with in those days weren't that much into documentation. If you want to
>> see such in order to believe it, then I guess you'll just have to get by
>> without believing it. And I neither know nor care whether it was the
>> compiler, linker, OS, or whatever.
>
> I believe that some systems did it, but was it intentional?
> That is harder to say.
The CDC and Cray family of compilers/loaders zeroed user memory (except
for the object code and data parts ;) ) at program start-up. Many old
codes depended on it. The Cray compiler eventually offered a command
line option to spray memory with "NaN-like" values. There was great
internal resistance to even offering this as an option. Making it the
default was a DOA non-starter.

When the Cray compilers switched from a static memory model to a stack
based model they could no longer guarantee zero filling and programs
broke. (The loss of "IMPLICIT SAVE" with a stack based memory didn't
help; but the loss of zero initialization was also a killer.)
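
(As an aside, the same "poison it, don't zero it" idea can be written
portably these days. A minimal sketch using the standard Fortran 2003
IEEE intrinsics -- only an illustration of the idea, not what the Cray
option actually did:

      program nan_spray
      use, intrinsic :: ieee_arithmetic, only: ieee_value, &
          ieee_quiet_nan, ieee_support_nan
      implicit none
      real :: work(8)
      ! Fill the scratch array with quiet NaNs instead of zeros, so any
      ! element read before it is assigned shows up as NaN in the output.
      if (ieee_support_nan(work(1))) then
         work = ieee_value(work(1), ieee_quiet_nan)
      end if
      work(1:4) = 1.0
      print *, work
      end program nan_spray

Elements 5 through 8 print as NaN because nothing ever assigned them.)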

Dick Hendrickson

Dick Hendrickson

unread,
Aug 17, 2011, 12:42:48 PM8/17/11
to
In the mid-60s the CDC 1604 compiler/system put a small loader-like
piece of code at the high end of memory and ran this to load the user
object file. After the user code was loaded, if the code used blank
common the base address of blank common was set to the top of the user
code and blank common could be as large as the remaining memory. It
generally overlaid the little loader. The machine had 32,768 words of
memory so memory was heavily overlaid and reused. Blank common wasn't
zeroed.

Dick Hendrickson

Nomen Nescio

unread,
Aug 17, 2011, 12:47:04 PM8/17/11
to
Well I'm happy to debate with and learn from guys like Glen who know tons of
things but don't bullshit you about what they don't know. I joined the
conversation only to have Gordon pull a Pee Wee Herman on me, and I won't
tolerate it.

Sorry for the offense to you and other sensitive readers, Cliff Claven
excluded. Remailers come and go, so you'll be pretty busy killfiling me if I
stick around, and you will also miss some good info, because I try to be like
Glen (only with a much lower tolerance for bullshit): giving info when I can
and not bullshitting people (unlike Gordy) when I don't know something.

Gordon Sande

unread,
Aug 17, 2011, 1:05:54 PM8/17/11
to

The IBM 7090 under Fortran II used the same scheme, except blank common
started at the top and worked down, so the object loader would certainly
be there and nonzero. The code was more than just a small loader, as it
handled the input tape stream, loaded compilers and provided buffered I/O
for the system programs. The Fortran II run time did not use it, so it
was expendable.

> Dick Hendrickson


Steven G. Kargl

unread,
Aug 17, 2011, 1:45:24 PM8/17/11
to
On Wed, 17 Aug 2011 09:06:03 -0700, Ken Fairfield wrote:

> On Aug 16, 11:14 pm, Fritz Wuehler
> <fr...@spamexpire-201108.rodent.frell.theremailer.net> wrote:
>> nos...@see.signature (Richard Maine) wrote:
>> > Fritz Wuehler <fr...@spamexpire-201108.rodent.frell.theremailer.net>
>> > wrote:
>>
>> > > Gordon Sande <Gordon.Sa...@gmail.com> wrote:
>>
>> > > > The history of blank common is that it was an area where the
>> > > > system put temporary code before starting the user program, so
>> > > > nothing to do with single user PCs.
>>
>> > > There is no such thing as "blank common" on mainframes. Have you
>> > > changed the subject?
>>
>> > I'll not address the other questions. It looks a bit to me like the
>> > discussion in the subthread is devolving.
>>
>> Thanks to Gordon the argumentative, Cliff Claven wannabe.
> [...]
>
> Seems to me that "Fritz Wuehler", "Louisa" and "Nomen Nescio" are all
> variations on the same, newly arrived, troll.
>
> !Plonk!

Yep. I think you've spotted the real issue.

> (Except I really can't since I'm restricted to GG, sigh...) I'd suggest
> others not feed the trolls, if possible.

I just switched from GG to using eternal-september's usenet feed. If you
can't switch, try installing greasemonkey and ggkiller. You'll see
the first post from someone and you can then use ggkiller to ignore
all other posts.

--
steve

glen herrmannsfeldt

unread,
Aug 17, 2011, 1:52:51 PM8/17/11
to
Louis Krupp <lkr...@nospam.indra.com.invalid> wrote:

(snip on zero initialization)

> Historically, Burroughs Large Systems handled things as follows:

> Scalars were allocated on the stack, and stack cells were allocated by
> pushing zero words on top of the stack. This is in contrast to many
> (most?) systems, in which stack cells are allocated by decrementing
> the stack pointer (on systems where the stack grows downward), leaving
> the contents of those cells unchanged.

As I happen to have an ALGOL book nearby, I looked it up.

It seems that ALGOL 60 requires that a simple variable be assigned
a numerical value before it is used in an expression. That doesn't
mean compilers can't zero them, but it does seem pretty explicit.
It might even be that the hardware didn't have a stack pointer
subtract operation. The Burroughs hardware seems to have been
designed around ALGOL.

> Arrays were allocated one stack cell for a descriptor with a zero
> presence bit. When the array was first indexed, the zero presence bit
> triggered an interrupt, and the Master Control Program (the OS)
> invoked a routine called GETSPACE which allocated physical memory,
> filled the array with zeros and set the presence bit. When the system
> ran low on memory, arrays could be written to disk and the descriptor
> updated accordingly. Since that same memory could be subsequently
> allocated by another program, filling it with zeros made sense from
> the point of view of security.

> I know this was the case with ALGOL, and I have no recollection (or
> reason to believe) that FORTRAN was different.

It is likely that, as with gcc, the back end of the ALGOL compiler
was used. Possibly the compiler even generated ALGOL code.

> I have no reason to believe that current Unisys descendants of Large
> Systems have changed anything with regard to data initialization. A
> quick look at an online copy of a 1992 Fortran 77 manual didn't turn
> up anything striking.

-- glen

Paul Anton Letnes

unread,
Aug 17, 2011, 2:02:04 PM8/17/11
to
On 17.08.11 01.48, Richard Maine wrote:

I'm siding with Richard here. I once took a course in finite difference
modeling of seismic and electromagnetic waves. The guy giving the
lectures was a great guy, but he knew Fortran 77, his one favorite
compiler, and was happy with that. When we downloaded g95 and started
running his programs (first without, then later with, our own
modifications) we got all sorts of strange answers that made no sense.
Throwing in a few
somearray = 0
solved everything. The guy never heard of any compiler flags, he just
had some IDE program where he hit "compile" and ran his one-file
programs. Again, a great guy, but not the software engineer kind of guy.
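
In case it helps anyone hitting the same thing, here is a minimal sketch
of what the fix amounts to (the names are made up; the point is simply to
assign the arrays at the top of the routine instead of trusting the
compiler):

  subroutine update_field(field, n)
    implicit none
    integer, intent(in)  :: n
    real,    intent(out) :: field(n)
    real :: work(n)              ! local scratch array
    ! Explicit initialization: do not rely on the memory happening to be zero.
    work  = 0.0
    field = 0.0
    ! ... finite-difference update of field using work goes here ...
  end subroutine update_field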

Oh, and I'm not even 30 yet. This was probably in 2009 or so. Although
his compiler might not be the newest, it can't be that old, because it
ran on a recent Intel MacBook. Unfortunately I don't know its name.

Paul

Paul van Delst

unread,
Aug 17, 2011, 2:23:54 PM8/17/11
to
Louisa wrote:
> On Aug 16, 10:23 pm, nos...@see.signature (Richard Maine) wrote:
>
> As consultant, I have seen several thousand FORTRAN and Fortran
> programs that erroneously relied on that behavior because their owners/
> authors
> assumed that the compiler initialized to zero. However, the compilers
> that they
> used never did. And having used many Fortran compilers,
> never have I found one that initialized anything to anything.
> WATFOR actually did a run-time test for uninitialized variables,
> and issued a diagnostic.
>
>> It wasn't exactly rare. I'd
>> have said that it was typical of most older compilers.
>
> No it wasn't. Name one that did.

The IBM xlf compiler is one that is pretty well known for doing this, and has done so for as long as I can remember.

$ cat test_init.f
program test_init

integer i
real x
write(*,*) 2*i, 4.0*x

end

$ xlf test_init.f
** test_init === End of Compilation 1 ===
1501-510 Compilation successful for file test_init.f.

$ a.out
0 0.0000000000E+00

I think that counts as "zero", not just "like a very small value." :o)


$ xlf -qversion
IBM XL Fortran for AIX, V12.1
Version: 12.01.0000.0001


However, moving to the fortran95 variant....

$ cat test_init.f90
program test_init

integer :: i
real :: x
write(*,*) 2*i, 4.0*x

end program test_init

$ xlf95 test_init.f90
** test_init === End of Compilation 1 ===
1501-510 Compilation successful for file test_init.f90.

$ a.out
-1044652006 0.0000000000E+00


It would appear the IBM setup recognises that a lot of old (read: f77) code depends on default zero init. The default
configurations appear to (still) reflect that.


> No confusion about the word "necessary".
> It was necessary. As I said, none of the Fortran compilers I used
> did initialization.

Fair enough. But your statement doesn't scale to all compilers.

cheers,

paulv


Fritz Wuehler

unread,
Aug 17, 2011, 2:37:22 PM8/17/11
to
Dick Hendrickson <dick.hen...@att.net> wrote:

> >In the mid-60s the CDC 1604 compiler/system put a small loader-like
> piece of code at the high end of memory and ran this to load the user
> object file. After the user code was loaded, if the code used blank
> common the base address of blank common was set to the top of the user
> code and blank common could be as large as the remaining memory. It
> generally overlaid the little loader. The machine had 32,768 words of
> memory so memory was heavily overlaid and reused. Blank common wasn't
> zeroed.

Thanks, Dick. That's fascinating. There were a lot of overlay tricks on small
IBM System 360 machines but I don't remember them using this technique and
I believe it *may* have been because the loader was an authorized piece of
code and would not normally be placed in the same library or storage as user
code, even when overlays were used. Glen goes back further than I do, maybe
he can remember how the loader interacted with overlays, if at all, on IBM
machines.

I never had the pleasure to spend much time on CDC equipment other than a
few hours with PLATO.

dpb

unread,
Aug 17, 2011, 2:43:13 PM8/17/11
to
On 8/17/2011 1:02 PM, Paul Anton Letnes wrote:
...

> I'm siding with Richard here. I once took a course in finite difference
> modeling of seismic and electromagnetic waves. The guy giving the
> lectures was a great guy, but he knew Fortran 77, his one favorite
> compiler, and was happy with that. When we downloaded g95 and started
> running his programs (first without, then later with, our own
> modifications) we got all sorts of strange answers that made no sense.
> Throwing in a few
> somearray = 0
> solved everything. The guy never heard of any compiler flags, he just
> had some IDE program where he hit "compile" and ran his one-file
> programs. Again, a great guy, but not the software engineer kind of guy.

...

I'd give fair odds on that having been OpenWatcom w/ the IDE...

--

glen herrmannsfeldt

unread,
Aug 17, 2011, 3:02:15 PM8/17/11
to
Ken Fairfield <ken.fa...@gmail.com> wrote:
(snip)

> On Richard's point earlier in the thread about experience
> with various systems' Fortran compilers and uninitialized
> variables starting with value zero:

> I'm a relative newbie having been "brought up" on VAX/VMS
> starting in about 1982. VMS definitely initialized programs'
> data space to zero and *MOST* of the Fortran programs
> "we" wrote depended upon that feature.

One feature that I first remember from VAX/VMS was that the
Fortran compiler stored constants in memory that was marked
as read-only. If you do the old Fortran trick of passing
a constant to a subroutine, and then changing the dummy
variable, the memory protection system will find it.
(VAX has 512 byte pages, widely considered to be too small,
but that does make it easier to do fine grained protection.)

It would be consistent for VAX/VMS to zero variables, but
I don't remember actually testing for it. (I was pretty
good at doing it myself by then.)

> The rest of SLAC were using FORTRAN H on the IBM 360
> under VM/CMS at that time, and collaborating with others
> on similar hardware at DESY and CERN. Those programs also
> assumed initialized-to-zero behavior.

The system that I previously noted, that initialized to X'81',
was the SLAC OS/VS2 system, not so long before the switch
to VM/CMS. The Fortran G and H compilers don't initialize
variables not in a DATA statement. The generated code has holes
(assembler DS instruction) where the variables should be.
Each card of the object program has a start address and length,
such that a hole is generated if the start address is more
than the start plus length of the previous card. As I understand
it, the linkage editor fills in small holes, such that it can
write out larger records (which again have a start and length,
but aren't restricted to 80 byte records.)

VM/CMS uses pretty much the same compilers, but a different
linker and program loader. It is certainly possible that CMS
zeros such, it even seems to fit the CMS style.

> It was only much later, I believe when Sun and other unixes
> started making inroads, that people started finding that
> they needed explicit assignment or initialization, but by
> that time, much of HEP were moving to C/C++ anyway.

I did a lot of C on SunOS systems, but not much Fortran.

> I had very little experience with compilers running on PCs
> so I don't know what typical behavior was on those. I do
> know that it took quite some time to have "industrial
> strength" compilers available on those platforms (Lahey
> being one of the better ones IIRC).

I do remember using a Fortran compiler in the floppy-only
days of the IBM PC. The compiler passes were on different
disks, such that one had to keep swapping disks, back when
256K was large for a PC. But then Fortran H is supposed to
run in 256K, too.

-- glen

Louis Krupp

unread,
Aug 17, 2011, 3:07:32 PM8/17/11
to
On Wed, 17 Aug 2011 17:52:51 +0000 (UTC), glen herrmannsfeldt
<g...@ugcs.caltech.edu> wrote:

>Louis Krupp <lkr...@nospam.indra.com.invalid> wrote:
>
>(snip on zero initialization)
>
>> Historically, Burroughs Large Systems handled things as follows:
>
>> Scalars were allocated on the stack, and stack cells were allocated by
>> pushing zero words on top of the stack. This is in contrast to many
>> (most?) systems, in which stack cells are allocated by decrementing
>> the stack pointer (on systems where the stack grows downward), leaving
>> the contents of those cells unchanged.
>
>As I happen to have an ALGOL book nearby, I looked it up.
>
>It seems that ALGOL 60 requires that a simple variable be assigned
>a numerical value before it is used in an expression. That doesn't
>mean compilers can't zero them, but it does seem pretty explicit.
>It might even be that the hardware didn't have a stack pointer
>subtract operation. The Burroughs hardware seems to have been
>designed around ALGOL.

That would be a sensible thing for the ALGOL 60 spec to require, but
users of Burroughs Extended ALGOL (which, as I should have made clear,
is what I'm talking about) were, in my experience, accustomed to
variables automatically being initialized to zero.

You are correct in that Burroughs architecture had no way to
manipulate the top of stack register without explicitly or implicitly
pushing and popping things.

You are also correct in that the architecture was designed to
accommodate Burroughs Extended ALGOL (and its relatives, which were --
and still are -- used for OS programming). Keep in mind that this
particular version of ALGOL was also designed around the Burroughs
architecture. It was easy for the architecture to initialize scalars
to zero, and Burroughs Extended ALGOL did not go out of its way to
force users to do their own initialization.

>
>> Arrays were allocated one stack cell for a descriptor with a zero
>> presence bit. When the array was first indexed, the zero presence bit
>> triggered an interrupt, and the Master Control Program (the OS)
>> invoked a routine called GETSPACE which allocated physical memory,
>> filled the array with zeros and set the presence bit. When the system
>> ran low on memory, arrays could be written to disk and the descriptor
>> updated accordingly. Since that same memory could be subsequently
>> allocated by another program, filling it with zeros made sense from
>> the point of view of security.
>
>> I know this was the case with ALGOL, and I have no recollection (or
>> reason to believe) that FORTRAN was different.
>
>It is likely that, as with gcc, the back end of the ALGOL compiler
>was used. Possibly the compiler even generated ALGOL code.

I vaguely recall a FORTRAN to ALGOL converter, but the resulting
output was predictably ugly, and as far as I know, it didn't get much
use.

The FORTRAN compilers whose listings I read generated native code.

Keep in mind two things:

1. The OS initialized arrays to zero (except for ALGOL read-only
arrays, which were initialized in the code file); all the compiler
had to do was emit code to push a descriptor on the stack.

2. Initializing scalars was as easy as emitting a "LITC 0"
instruction. "LITC" -- "literal call" -- was the instruction used
when an arithmetic expression used a constant. Everything happened on
the stack: expressions were evaluated on the stack, and variables
lived on the stack until they went out of scope and a stack frame was
removed.

Louis

abrsvc

unread,
Aug 17, 2011, 3:23:13 PM8/17/11
to
On Aug 17, 3:07 pm, Louis Krupp <lkr...@nospam.indra.com.invalid>
wrote:

RE: Glen: "One feature that I first remember from VAX/VMS was that the
Fortran compiler stored constants in memory that was marked as read-only."

Two things to realize here:

1) Fixed constants WERE stored in a read-only PSECT (program section),
so it was possible to get access violations and program failures when
attempting to write to "local" variables.
2) Undefined variables were stored in a PSECT marked Dzero (Demand
Zero). This allowed the compiler to skip storage space within
the image for those variables. They were allocated at runtime and
actually brought into the "working set" of the program when
referenced. They were paged into the address space from Dzero pages
and thus (by definition) were always initialized to 0.

Dan

glen herrmannsfeldt

unread,
Aug 17, 2011, 4:17:40 PM8/17/11
to
Fritz Wuehler <fr...@spamexpire-201108.rodent.frell.theremailer.net> wrote:
(snip on CDC, program loaders, and overlays)

> Thanks, Dick. That's fascinating. There were alot of overlay tricks on small
> IBM System 360 machines but I don't remember them using this technique and
> I believe it *may* have been because the loader was an authorized piece of
> code and would not normally be placed in the same library or storage as user
> code, even when overlays were used. Glen goes back further than I do, maybe
> he can remember how the loader interacted with overlays, if at all, on IBM
> machines.

Much of the OS/360 I/O system is done in user space and with user
PSW key. All the work, through generation of the channel program
is done in user space. Then, at EXCP (EXecute Channel Program)
the OS takes over, verifies that the channel program doesn't go
outside where it is supposed to go (your data set), and then
does SIO (Start IO). After the I/O is done, the system
uses POST to indicate to the program that the I/O has been done.

For overlay, the linker puts a small block which indicates
which overlay needs to be loaded, and then does the SVC
(Supervisor Call) which does the actual I/O. The program
does a branch (BALR) to that block. Then the first
bytes of the overlay block are overwritten with a branch
instruction such that subsequent calls don't go to the
overlay handler. Overlay is only done on call, and not
on return, such that a routine can't overlay itself, or
anything in the call chain leading up to it.

> I never had the pleasure to spend much time on CDC equipment
> other than a few hours with PLATO.

In the 1980's, the main computer available at the University
of Illinois for student free accounts was a Cyber 174.
I did use it some, but mostly had other machines available that
I could use. As the IBM PC became more popular, labs with
PCs for student use became available. We also had PLATO,
which I did use.

-- glen

Fritz Wuehler

unread,
Aug 18, 2011, 9:59:23 PM8/18/11
to
Thank you.

Lynn McGuire

unread,
Aug 18, 2011, 11:17:54 PM8/18/11
to
On 8/14/2011 4:00 AM, Paul Anton Letnes wrote:
> Hi!
>
> I just encountered a bug which was easily fixed by initializing an array
> to zero in the beginning of the relevant subroutine. I was working on
> gfortran, and this was a bug for me. I know someone else uses the PGI
> compiler, and did not see this issue.
>
> - Do some compilers do this by default (zero variables by default)?
> - Do others have flags for this?
> - What does the standard have to say on this topic? (i.e. was the code
> standard conforming before I fixed the bug?)
> - What is good form, standards and compilers aside?
>
> Cheers
> Paul.

I have F77 code (700K lines) that must have all of the
scalar variables and all of the vector variables
initialized to zero. Both local and global (common).
Everything (hangs head in shame).

This is the default behavior on many of the old systems
(pre 32 bit / 8 byte - people are going to argue here
but I know what worked for us). We also got that behavior
on the IBM mainframes and big Unix boxes using the f77
compiler (I'm not sure if we had to turn it on). Even
if this was the default behavior, it was not the standard
behavior. Since fortran is the second oldest computer
language, it picked up many warts along the way.

I have tried 4 of the pc fortran compilers:
1. the old NDP compiler - worked just fine
2. the watcom compiler - worked just fine using the zero
init compiler flag
3. the compaq fortran compiler - did not work for me
because some of the vectors did not get initialized
to zero
4. the intel fortran compiler - did not work for me
because some of the vectors did not get initialized
to zero (reported and partially fixed, have not tried
since 11.0 ??? release)

We are in the process of getting rid of our zero
initialization dependent code for our forthcoming port
to C++. It is not easy and is very time consuming.

Relying on zero initialization is not good form. I wish that we
had addressed it a long time ago, as it really hampers
the compiler's automatic code optimization.
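
For what it's worth, the mechanical part of that kind of cleanup is
mostly just making the zero explicit. A tiny sketch with made-up names
(finding every routine that needs it is the hard part):

      SUBROUTINE ACCUM(X, N, TOTAL)
      INTEGER N, I
      REAL X(N), TOTAL
C     Clear the accumulator on every call instead of relying on the
C     loader or compiler having zero-filled memory.
      TOTAL = 0.0
      DO 10 I = 1, N
         TOTAL = TOTAL + X(I)
   10 CONTINUE
      RETURN
      END

Note that a DATA statement is not a substitute here: DATA initializes a
variable once at program start (and effectively saves it), not on every
call.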

Lynn

Lynn McGuire

unread,
Aug 18, 2011, 11:32:41 PM8/18/11
to

Univac 1108, CDC 7600, IBM 370 with MVS ???, Apollo Domain
fortran and f77, Sun f77, VAX VMS fortran, IBM RS/6000 f77,
HP Unix f77 and Prime fortran all initialized all variables
(local and global, scalar and vector) to zero. I do not
remember if this was the default behavior on all these
platforms or optional behavior (flag).

I'm not a consultant but have been dragging the same crappy
<g> code around the known universe of engineering computers
since 1975.

Lynn

Nomen Nescio

unread,
Aug 19, 2011, 3:50:17 AM8/19/11
to
Paul van Delst <paul.v...@noaa.gov> wrote:

some snipping

Sorry for the lengthy quote here, please keep reading


> > On Aug 16, 10:23 pm, nos...@see.signature (Richard Maine) wrote:
> >
> > As consultant, I have seen several thousand FORTRAN and Fortran
> > programs that erroneously relied on that behavior because their owners/
> > authors
> > assumed that the compiler initialized to zero. However, the compilers
> > that they
> > used never did. And having used many Fortran compilers,
> > never have I found one that initialized anything to anything.
> > WATFOR actually did a run-time test for uninitialized variables,
> > and issued a diagnostic.
> >
> >> It wasn't exactly rare. I'd
> >> have said that it was typical of most older compilers.
> >
> > No it wasn't. Name one that did.
>
> The IBM xlf compiler is one that is pretty well known for doing
> > this. Since as long as I can remember.

Note, IBM XL is brand new in IBM years, so this statement doesn't really
mean anything.

>
> $ cat test_init.f
> program test_init
>
> integer i
> real x
> write(*,*) 2*i, 4.0*x
>
> end
>
> $ xlf test_init.f
> ** test_init === End of Compilation 1 ===
> 1501-510 Compilation successful for file test_init.f.
>
> $ a.out
> 0 0.0000000000E+00
>
> I think that counts as "zero", not just "like a very small value." :o)
>
>
> $ xlf -qversion
> IBM XL Fortran for AIX, V12.1
> Version: 12.01.0000.0001
>

> It would appear the IBM setup recognises that a lot of old (read: f77) code depends on default zero init. The default
> configurations appear to (still) reflect that.

Of course IBM had FORTRAN way before F77 existed, as many list members have
pointed out, running on the 704, 7090, and S/360. When I read your statement
I thought: that is ridiculous, those old compilers never initialized
variables that weren't explicitly initialized, just like Richard's statement
above says.

But he and I are wrong.

I have IBM's F and G level FORTRAN compilers from the System 360 days. They
are publicly available and believe it or not will run and generate code just
fine on a modern mainframe like most everything else ever written for S/360
and up.

I did a little experiment using your samples as a basis. I couldn't use them
as is because FORTRAN IV does not seem to allow you to issue a WRITE
statement that contains calculations. I may have missed something, but with
syntax similar to yours it flagged my 2*I. My FORTRAN is pretty rusty, so I
decided to declare two INTEGERs, use the second as a target (J = 2 * I), and
put I and J in the WRITE, and that worked fine.
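
In outline the adapted test had just this shape (a sketch of the shape
described above, not the exact deck):

C     I IS DELIBERATELY LEFT UNINITIALIZED FOR THE TEST
      INTEGER I, J
      J = 2 * I
      WRITE (6, 10) I, J
   10 FORMAT (1X, 2I12)
      STOP
      END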

When I executed it the results were zero. Thinking it might be by chance, I
ran it again numerous times and got the same results.

Then I added an option to the FORTRAN compilation to generate an assembler
listing and I looked over the listing to see what was going on. Unfortunately,
this listing did not contain the fields used to initialize the variables. I
could see where the variables were being loaded from but the listing didn't
contain the values. So I added a dump step to the job and reran it and sure
enough, the fields are being loaded from an area that was initialized to
zeros. IBM FORTRAN F and G both initialize uninitialized INTEGER and REAL
variables to zero!

I decided to add a DATA statement and generate another assembler listing and
another dump. Sure enough, in those exact locations that used to contain
zeros, the values I specified to initialize the fields were set! There is no
longer any doubt that IBM FORTRAN F and G initialize INTEGER and REAL
variables to zero in the absence of user initialization.

I have the listings and dumps from this if anyone wants but I didn't post
them because this post is long enough and the listings are hard to read if
you're not an IBM assembler programmer. It took me 20 minutes to get what
was going on because the fields containing the values to initialize the
variables weren't shown in the assembler listing. You need to do a few
address calculations and see a dump to verify what the gaps in the listing
actually contain.

> > No confusion about the word "necessary".
> > It was necessary. As I said, none of the Fortran compilers I used
> > did initialization.

Richard, I guess you didn't use IBM FORTRAN F or G which were two of the
most popular successful and high performing FORTRAN compilers for many
decades. I did use them but I remembered wrong!

I was asking myself why IBM would do this. IBM software is pretty protective,
but when these compilers were written core was expensive and so was CPU
time. It doesn't make sense to add initialization logic when people may not
have wanted it. But then I realized it is much easier for the compiler to do
this when people otherwise had to use at least one extra card to punch a DATA
statement on, since there was no way to give a value on an INTEGER or REAL
statement until (I believe) VS FORTRAN. Anyone else have any thoughts on
this?

> Fair enough. But your statement doesn't scale to all compilers.

You got it right, Paul!

glen herrmannsfeldt

unread,
Aug 19, 2011, 5:35:39 AM8/19/11
to
Nomen Nescio <nob...@dizum.com> wrote:

(snip)


> Of course IBM had FORTRAN way before F77 existed as many list members have
> pointed out running on the 704, 7090, and S/360. When I read your statement
> I thought that is ridiculous, those old compilers never initialized
> variables that weren't explicity initialized just like Richard's statement
> above.

> But he and I are wrong.

> I have IBM's F and G level FORTRAN compilers from the System 360 days. They
> are publicly available and believe it or not will run and generate code just
> fine on a modern mainframe like most everything else ever written for S/360
> and up.

I don't know about Fortran F, but Fortran G does not initialize
data areas. They are generated as the equivalent of the assembler DS
instruction.

> I did a little experiment using your samples as a basis. I couldn't
> use them as is because FORTRAN IV does not seem to allow you to
> issue a WRITE statement that contains calculations. I may have
> missed something but using a similar syntax as yours it flagged
> my 2*I.

My favorite feature added in Fortran 77 is the ability to use
expressions, not just variables, in WRITE statements. It saves
many temporary variables only needed to get expressions printed.

> My FORTRAN is pretty rusty so I decided to declare two INTEGERS
> and use the second as a target J = 2 * I and put I and J in the
> WRITE and that worked fine.

> When I executed it the results were zero. Thinking it might be
> by chance, I ran it again numerous times and got the same results.

Did you load other values into memory before loading the object
program? Most likely not.

Try a program like:

REAL X(10000)
DATA X/10000*0.0/
WRITE(6,1) X(10000)
1 FORMAT(1X,G20.3)
STOP
END


Now compile it with PARM.FORT='DECK'
Be sure you have

SYSPUNCH DD SYSOUT=B

and that it is spooled to a card punch. Yes, I actually did this.

Well, some of my early programs initialized arrays with DATA,
as it seemed easier than a DO loop. But all those zeros get
punched into the object deck. I once had an actual deck
of cards punched: hundreds of cards just to initialize an array.
(Each card has X'02', C'TXT', an address, a length, and then
columns 73-80 are also not used, so about 60 bytes per card.)

> Then I added an option to the FORTRAN compilation to generate an
> assembler listing and I looked over the listing to see what was
> going on. Unfortunately, this listing did not contain the fields
> used to initialize the variables. I could see where the variables
> were being loaded from but the listing didn't contain the values.

They don't go into the object program. Generate a hex dump of
the object program. Each TXT card contains a 24 bit address,
and a length. (I have the format around somewhere.) There
will be no TXT cards generated for those variables.

> So I added a dump step to the job and reran it and sure enough,
> the fields are being loaded from an area that was initialized
> to zeros. IBM FORTRAN F and G both initialize uninitialized
> INTEGER and REAL variables to zero!

> I decided to add a DATA statement and generate another assembler
> listing and another dump. Sure enough, in those exact locations
> that used to contain zeros, the values I specified to initialize
> the fields were set! There is no longer any doubt that IBM
> FORTRAN F and G initialize INTEGER and REAL variables to zero in
> the absence of user initialization.

In this case, they will be generated in the object program, as
if assembler DC instructions were used. The bytes are punched,
even for large arrays of zero.

> I have the listings and dumps from this if anyone wants but I didn't post
> them because this post is long enough and the listings are hard to read if
> you're not an IBM assembler programmer. It took me 20 minutes to get what
> was going on because the fields containing the values to initialize the
> variables weren't shown in the assembler listing. You need to do a few
> address calculations and see a dump to verify what the gaps in the listing
> actually contain.

The gaps actually exist in the object program. Usually small gaps
are filled in by the linkage editor. The object deck contains 80-byte
records, possibly blocked. The load module contains records
of varying length (RECFM=U), up to the supplied BLKSIZE. To avoid
generating many small records (which again have an offset and length),
small DS blocks are filled in. In the early OS/360 days, they were
filled with whatever happened to be in the buffer at the time.
That may have changed later. Larger gaps, such as large arrays,
are left as gaps in the load module.

That is done to optimize the disk space used by interblock gaps
against the space used by filling in the DS. It is likely that
newer linkers fill with zeroes, but the older ones didn't.

The Program Logic Manuals for Fortran G, H, and Linkage Editor F
exist, along with the source code for them.

>> > No confusion about the word "necessary". It was necessary.
>> > As I said, none of the Fortran compilers I used did initialization.

> Richard, I guess you didn't use IBM FORTRAN F or G which were two of the
> most popular successful and high performing FORTRAN compilers for many
> decades. I did use them but I remembered wrong!

> I was asking myself why would IBM do this. IBM software is pretty
> protective but when these compilers were written core was expensive
> and so was CPU time. It doesn't make sense to add initialization logic
> when people may not have wanted it.

It would take many extra bytes of cards, or expensive disk space
if they were written out.

> But then I realized it is much
> easier for the compiler to do this when people had to use at least
> one extra card to punch a DATA statement on, since there was no way
> to give a value on an INTEGER or REAL statement until (I believe)
> VS FORTRAN. Anyone else have any thoughts on this?

Initializing on declaration is allowed in OS/360 Fortran.

REAL X/0.0/,Y/1.0/,Z/2.0/

>> Fair enough. But your statement doesn't scale to all compilers.

-- glen

Nomen Nescio

unread,
Aug 19, 2011, 6:22:30 AM8/19/11
to
Correction, I tested with IBM FORTRAN G and H, not F. But I believe I also
have IBM FORTRAN F somewhere.

Nomen Nescio

unread,
Aug 19, 2011, 7:04:56 AM8/19/11
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> Nomen Nescio <nob...@dizum.com> wrote:
>
> I don't know about Fortran F, but Fortran G does not initialize
> data areas. They are generated as the equivalent of the assembler DS
> instruction.

Sorry I should have written G and H. I probably have F around somewhere but
I did not use it for this test.

Whether they are punched as DS or DC is not relevant, because in the load
module the area is zeros. That's dependable and that's how it works.

> Did you load other values into memory before loading the object
> program? Most likely not.

I tried programs with and without initialized data a few times running them
interspersed.

> Try a program like:
>
> REAL X(10000)
> DATA X/10000*0.0/
> WRITE(6,1) X(10000)
> 1 FORMAT(1X,G20.3)
> STOP
> END
>
>
> Now compile it with PARM.FORT='DECK'
> Be sure you have
>
> SYSPUNCH DD SYSOUT=B
>
> and that it is spooled to a card punch. Yes, I actually did this.

I don't dispute they used DS in the object code to save *object deck* size,
but that relies on the load module containing zeros in fields that aren't
initialized. Remember, this is a non reentrant program, everything is in the
load module and whatever isn't set will be set to zero by the linker.

> They don't go into the object program. Generate a hex dump of
> the object program. Each TXT card contains a 24 bit address,
> and a length. (I have the format around somewhere.) There
> will be no TXT cards generated for those variables.

I don't dispute that at all and it is good to know. But the load module
contains zeros in those areas, I have the dumps. See my proposed explanation
above.

> That is done to optimize the disk space used by interblock gaps
> against the space used by filling in the DS. It is likely that
> newer linkers fill with zeroes, but the older ones didn't.

OK, now you may have something! I am running old FORTRAN compilers on a much
newer system. I will check on MVS 3.8J next week and get back to you unless
you have done this already. But I believe the linker must have set
everything in the load module that wasn't otherwise defined to zeros, even
in the old days. It would only be with dynamically allocated storage that
this wouldn't apply. Maybe I am wrong.

> Initializing on declaration is allowed in OS/360 Fortran.
>
> REAL X/0.0/,Y/1.0/,Z/2.0/

I did not remember that, thank you. Thanks for your post, Glen. It may really
be that the new linker is changing my results, because I do not remember things
working this way, but what I saw now seems to go against what I remember,
and that was long ago. I'll try with a vintage system and see if I can rule
out the linker, unless you already tried this on MVS 3.8J.

This turns out to be a pretty interesting discussion!

Richard Maine

unread,
Aug 19, 2011, 8:21:26 AM8/19/11
to
Nomen Nescio <nob...@dizum.com> wrote:

> I thought that is ridiculous, those old compilers never initialized
> variables that weren't explicity initialized just like Richard's statement
> above.

That was not actually my statement. I believe you will find that it was
actually from Louisa and misattributed to me by accident. I was on the
opposite side. Yes, by the way, I've used the IBM compilers referenced,
along with quite a large number of other compilers, more than I can
easily count.

--
Richard Maine
email: last name at domain . net
domain: summer-triangle

glen herrmannsfeldt

unread,
Aug 19, 2011, 2:33:16 PM8/19/11
to
Nomen Nescio <nob...@dizum.com> wrote:
> glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

>> Nomen Nescio <nob...@dizum.com> wrote:

>> I don't know about Fortran F, but Fortran G does not initialize
>> data areas. They are generated as the equivalent of the assembler DS
>> instruction.

> Sorry I should have written G and H. I probably have F around
> somewhere but I did not use it for this test.

> Whether they are punched as DS or DC is not relevant, because in the load
> module the area is zeros. That's dependable and that's how it works.

As I remember it, the OS/360 linkage editor didn't do that.
In the OS/VS2 days I used one that was custom modified to initialize
to X'81'. It might be that the IBM version was changed at that time,
and the one I used was a custom modification of that.

As well as I understand it, larger regions (medium and large arrays)
are not initialized in the load module, but by program fetch.
(The part of the OS that loads programs to be run.) The OS/VS2
system I used also had that modified to initialize to X'81'.

>> Did you load other values into memory before loading the object
>> program? Most likely not.

> I tried programs with and without initialized data a few times
> running them interspersed.

>> Try a program like:
>>
>> REAL X(10000)
>> DATA X/10000*0.0/
>> WRITE(6,1) X(10000)
>> 1 FORMAT(1X,G20.3)
>> STOP
>> END

>> Now compile it with PARM.FORT='DECK'
>> Be sure you have

>> SYSPUNCH DD SYSOUT=B

>> and that it is spooled to a card punch. Yes, I actually did this.

> I don't dispute they used DS in the object code to save *object deck*
> size, but that relies on the load module containing zeros in fields
> that aren't initialized. Remember, this is a non reentrant program,
> everything is in the load module and whatever isn't set will be
> set to zero by the linker.

Each load module record has a start address and length, so there
can still be holes.

>> They don't go into the object program. Generate a hex dump of
>> the object program. Each TXT card contains a 24 bit address,
>> and a length. (I have the format around somewhere.) There
>> will be no TXT cards generated for those variables.

> I don't dispute that at all and it is good to know. But the load
> module contains zeros in those areas, I have the dumps.
> See my proposed explanation above.

>> That is done to optimize the disk space used by interblock gaps
>> against the space used by filling in the DS. It is likely that
>> newer linkers fill with zeroes, but the older ones didn't.

> Ok now you may have something! I am running old FORTRAN compilers
> on a much newer system. I will check on MVS 3.8J next week and
> get back to you unless you have done this already. But I believe
> the linker must have set everything in the load module that wasn't
> defined to zeros even in the old days. It would only be with
> dynamically allocated storage this wouldn't apply. Maybe I am wrong.

The ones I remember are from the OS/360 days. The OS/360 linkage
editor is available, and, as with the compilers, should run on
newer systems.

>> Initializing on declaration is allowed in OS/360 Fortran.

>> REAL X/0.0/,Y/1.0/,Z/2.0/

> I did not remember that, thank you. Thanks for your post Glen.

A Fortran IV extension that didn't make it into Fortran 77.

With DATA statements you can:

DATA X,Y,Z/1.0,2.0,3.0/

but, as the initialization is optional, that doesn't work for
declaration statements.

> It may really be the new linker is changing my results because
> I do not remember things working this way but what I saw now
> seems to go against what I remember and that was long ago.
> I'll try with a vintage system and see if I can rule
> out the linker unless you already tried this on MVS 3.8J.

> This turns out to be a pretty interesting discussion!

-- glen

Fritz Wuehler

unread,
Aug 19, 2011, 2:55:51 PM8/19/11
to
Lynn McGuire <l...@winsim.com> wrote:

> > No confusion about the word "necessary".
> > It was necessary. As I said, none of the Fortran compilers I used
> > did initialization.
>

snipped part of list:

> IBM 370 with MVS

confirmed in another post. I have the original FORTRAN G and H compilers
(you can get them too, they're available online) and what Lynn said is
right. They definitely set uninitialized INTEGER and REAL to zero.

> fortran and f77, Sun f77, VAX VMS fortran, IBM RS/6000 f77,
> HP Unix f77 and Prime fortran all initialized all variables
> (local and global, scalar and vector) to zero. I do not
> remember if this was the default behavior on all these
> platforms or optional behavior (flag).

All new compilers ;-) I like stuff from the 60s and 70s.

Tim Prince

unread,
Aug 19, 2011, 5:04:32 PM8/19/11
to
On 8/18/2011 11:32 PM, Lynn McGuire wrote:

>
> Univac 1108, CDC 7600, IBM 370 with MVS ???, Apollo Domain
> fortran and f77, Sun f77, VAX VMS fortran, IBM RS/6000 f77,
> HP Unix f77 and Prime fortran all initialized all variables
> (local and global, scalar and vector) to zero. I do not
> remember if this was the default behavior on all these
> platforms or optional behavior (flag).

The GeCOS and successor Honeywell 36-bit platforms initialized data
(other than blank COMMON) to integer 0 by default. In that mode, single
and double precision variables would take on a default un-normalized
representation of 0.0 with the same exponent as 1.0, which didn't
necessarily behave the same as a true floating point 0. Not many
programmers understood the distinction. There was a switch to change
the loader initialization to any bit pattern you might choose.
The possibility that blank COMMON might contain the program loader code
persisted into the CP/M-80 days.
Many programs which got by without initialization before application of
overlays became totally broken with overlays. Overlays persisted into
the days of the TI microprocessor Fortran.

--
Tim Prince

glen herrmannsfeldt

unread,
Aug 19, 2011, 5:38:46 PM8/19/11
to
Tim Prince <tpr...@computer.org> wrote:

(snip)


> Many programs which got by without initialization before application of
> overlays became totally broken with overlays. Overlays persisted into
> the days of the TI microprocessor Fortran.

To mix threads, one of my favorite features of the Watcom compiler
in the MS-DOS days was its overlay linker. Much better than the MS
linker, and it could link the output of MS compilers.

-- glen

glen herrmannsfeldt

unread,
Aug 19, 2011, 7:50:20 PM8/19/11
to
Nomen Nescio <nob...@dizum.com> wrote:

(snip)


> Whether they are punched as DS or DC is not relevant, because in the load
> module the area is zeros. That's dependable and that's how it works.

I asked someone at IBM who worked on these at the time. He believes
that the change was in the later versions of OS/360, though that is
without actually looking back.

(snip)


> I don't dispute they used DS in the object code to save *object deck* size,
> but that relies on the load module containing zeros in fields that aren't
> initialized. Remember, this is a non reentrant program, everything is in the
> load module and whatever isn't set will be set to zero by the linker.

>> They don't go into the object program. Generate a hex dump of
>> the object program. Each TXT card contains a 24 bit address,
>> and a length. (I have the format around somewhere.) There
>> will be no TXT cards generated for those variables.

> I don't dispute that at all and it is good to know. But the load module
> contains zeros in those areas, I have the dumps. See my proposed
> explanation above.

>> That is done to optimize the disk space used by interblock gaps
>> against the space used by filling in the DS. It is likely that
>> newer linkers fill with zeroes, but the older ones didn't.

> Ok now you may have something! I am running old FORTRAN compilers
> on a much newer system. I will check on MVS 3.8J next week and
> get back to you unless you have done this already. But I believe
> the linker must have set everything in the load module that wasn't
> defined to zeros even in the old days. It would only be with
> dynamically allocated storage this wouldn't apply. Maybe I am wrong.

That might not be old enough. If the change was in the late OS/360,
maybe 21.0 or 21.6, then it would also be in any MVS.

There was someone who had TOS/360 running on Hercules. I believe
also DOS/360, which I believe is where Fortran F is. (Or maybe
it is E.)

> This turns out to be a pretty interesting discussion!

Also, I don't know about VS Fortran. Tradition would be to
generate DS, but it is possible that it generates DC.

-- glen

William Clodius

unread,
Aug 20, 2011, 10:51:55 AM8/20/11
to
Lynn McGuire <l...@winsim.com> wrote:

><snip>


> This is the default behavior on many of the old systems
> (pre 32 bit / 8 byte - people are going to argue here
> but I know what worked for us). We also got that behavior
> on the IBM mainframes and big Unix boxes using the f77
> compiler (I'm not sure if we had to turn it on). Even
> if this was the default behavior, it was not the standard
> behavior. Since fortran is the second oldest computer

> language, <snip>

Describing Fortran as the second oldest language is a bit odd. It is
almost certainly the oldest programming language still in significant
use. Many people make the mistake of describing it as the first
programming language; it wasn't. That was (by most standards)
Plankalkul, but it also wasn't the second (or third or fourth or ...)
programming language. It was probably the second programming language to
be standardized, but most people aren't even aware of the existence of
the first language to be standardized, the Automatically Programmed
Tool (APT) language, which started development and was released by IBM a
few months after Fortran. While a Turing-complete language, APT was
developed to program automated precision tools.

What language did you think was the oldest programming language?

>
> Lynn


--
Bill Clodius
los the lost and net the pet to email

David W Noon

unread,
Aug 20, 2011, 10:51:44 AM8/20/11
to

On Fri, 19 Aug 2011 23:50:20 +0000 (UTC), glen herrmannsfeldt wrote
about Re: zero-initialization of variables:

>Nomen Nescio <nob...@dizum.com> wrote:
>
>(snip)
>> Whether they are punched as DS or DC is not relevant, because in the
>> load module the area is zeros. That's dependable and that's how it
>> works.
>
>I asked someone at IBM who worked on these at the time. He believes
>that the change was in the later versions of OS/360, though that is
>without actually looking back.

I have been reading this thread with some amusement and nostalgia. I
think I should chime in here, though.

In late 1986 or early 1987 I was working as a system programmer using
MVS/XA. I received a phone call one evening (while still at work, as I
was working for EDS at the time) reporting that a COBOL program's
WORKING-STORAGE contained data from another COBOL program's initialized
data areas. Both of these COBOL programs were linked in a batch of
object decks, separated by NAME ...(R) directives. I reported the
problem to IBM, whose initial response was that that was the way the
linkage editor worked. I suggested that it was a security issue, as
the user of one program would not necessarily be entitled to see the
data areas of another program owned by another user (read: EDS
account). In late 1987 or early 1988 (MVS/XA SP 2.1.7, to be precise)
the linkage editor was modified to zero out its work areas before each
load module was linked, so that batch link edit decks could not cause
that problem again. Indeed, all uninitialized areas would be zeroes
thereafter.

So, now you all know when this occurred and who instigated it. The PMR
should still be in the archives of IBM's INFO/MAN database, probably
still in Atlanta, with the customer number for EDS (UK) and my name as
the reporting party. < Takes a bow ... :-) >
--
Regards,

Dave [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
dwn...@spamtrap.ntlworld.com (David W Noon)
Remove spam trap to reply by e-mail.
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

Lynn McGuire

unread,
Aug 20, 2011, 2:52:16 PM8/20/11
to
"William Clodius" <wclo...@lost-alamos.pet> wrote in message
news:1k6aica.1r829oo1y6rvasN%wclo...@lost-alamos.pet...

Cobol <shudder> predates Fortran by a year or so.

Lynn


dpb

unread,
Aug 20, 2011, 4:30:32 PM8/20/11
to
On 8/20/2011 1:52 PM, Lynn McGuire wrote:
...

> Cobol<shudder> predates Fortran by a year or so.

I don't think so; every language roadmap I've seen has FORTRAN in the
'56-'58 time frame depending on what benchmark they've chosen but COBOL
doesn't show up until '59 or so...

Here's one; I won't vouch for accuracy but it's pretty much consonant w/
other compendiums I've seen and the order is consistent w/ that in the
Wikipedia article chart...

<http://www.levenez.com/lang/>

--

glen herrmannsfeldt

unread,
Aug 20, 2011, 5:10:17 PM8/20/11
to
dpb <no...@non.net> wrote:
(snip, someone wrote)

>> Cobol<shudder> predates Fortran by a year or so.

> I don't think so; every language roadmap I've seen has FORTRAN in the
> '56-'58 time frame depending on what benchmark they've chosen but COBOL
> doesn't show up until '59 or so...

One that is supposed to be true is that Fortran was first with
multi-letter variable names. (Something that mathematicians
still haven't discovered.) It seems that many other Fortran
features appeared earlier in other programming systems
(not quite powerful enough to be called languages).

-- glen

glen herrmannsfeldt

unread,
Aug 20, 2011, 5:12:58 PM8/20/11
to
dpb <no...@non.net> wrote:
(snip)

> I don't think so; every language roadmap I've seen has FORTRAN in the
> '56-'58 time frame depending on what benchmark they've chosen but COBOL
> doesn't show up until '59 or so...

Oh, in addition, before the Fortran I that we all know so well,
D.E.Knuth considers that there was a Fortran 0. That is, what
the designers thought it would be like before they started.

That would be even earlier than the October 15th, 1956, on
the 704 Fortran manual. (So almost 55 years ago.)

-- glen

William Clodius

unread,
Aug 20, 2011, 9:29:54 PM8/20/11
to
Lynn McGuire <l...@winsim.com> wrote:

I don't think so. Backus wrote his proposal for Fortran in late 1953.
IBM authorized the work in early 1954 and they were able to compile the
first Fortran program in late 1954. By 1956 it was in external beta
testing and it was officially released in 1957. Work on the first
definition of Cobol began in April 1959 and was completed in December
1959, with the first compilers in 1960. However, Cobol was strongly
influenced by an earlier language, FLOW-MATIC, which was based on a
proposal by Grace Hopper from 1953, although implementation only began
in 1956 and release only occurred in 1958.

David Thompson

unread,
Aug 21, 2011, 3:22:35 AM8/21/11
to
On Mon, 15 Aug 2011 06:11:48 +0000 (UTC), glen herrmannsfeldt
<g...@ugcs.caltech.edu> wrote:

> Nomen Nescio <nob...@dizum.com> wrote:

> > Good form is to always, always, ALWAYS initialize variables before first
> > use. On some platforms certain types of storage are defined to be binary
> > zeros but depending on your data types this may not be a good value.
>
> That is, at least, a good rule for Fortran. C requires static data
> to be zeroed by the system. I suppose sometimes I add an initializer
> and sometimes not.
>
Note that C uses 'appropriate' zeros, not binary zeros, in the (rare) cases
where there is a difference, as 'Nomen' notes.

I'd say that's not a counterexample. Language-default initialization
is consistent and reliable, even though not explicit, so that makes it
'only' a style issue rather than a reliability/portability issue. Not
that people don't obsess and fight plenty over style issues.

> Java requires that you give variables a value before they are
> used, and also requires the compiler to attempt to detect cases
> where you don't. Now, the language definition could have just
> required compilers to zero all variables, but by not doing that,
> they give programmers one last reason to check for coding error.
>
_local_ variables, see next.

> I have had cases where a variable was definitely initialized,
> that the Java compiler couldn't figure out, and so just add
> an initializer (usually with a comment complaining about the
> compiler). Java arrays are always allocated zero filled.
>
Also, class instance variables are always initialized when allocated,
to the appropriate zero if you don't say otherwise explicitly, and class
'static' variables are similarly initialized when the class is loaded.

Thus local (necessarily simple) variables are the only ones subject to
the 'definite-initialization' flow analysis.

> > On Intel the .data segment is initialized to zeros but .bss IIRC is
> > not. Roughly, depending on the compiler on Intel, this would mean any
> > variables you define on the help would be expected to contain binary zero
> > and any variables on the stack or any variables declared but not defined (if
> > that is even possible in Fortran nowadays) will be unpredictable before
> > being set. And that means your program will eventually go bang! or do
> > something you don't want.
>
> I believe most operating systems now zero dynamically allocated
> memory for security reasons. It used to be you got whatever was
> in memory, possibly including data from a previous program.
> That is pretty much not allowed today.
>
Rather all process memory not initialized otherwise (canonically TEXT
and [R]DATA). That includes BSS as noted, heap, and stack. Only heap
is usually called 'dynamic' especially in C and Fortran.

Yep, in OS/360 it was sometimes fun to poke around in your storage and
see what you could find. Of course in those days you could also read
_other_ partitions' storage -- but not write, barring bugs in setting
the storage keys, and of course OS/360 had no bugs <G^N>.

> > It's important to initialize variables before use in every
> > language, not just Fortran.
>
> Except languages that require variable to already be zero.
>
As above, I'd say that _is_ initialized even though not explicit.

There are a few languages (but not many) which have a distinct
'uninitialized' state or value; in those languages it may actually be
useful to NOT initialize (to a 'real' value).

glen herrmannsfeldt

unread,
Aug 21, 2011, 4:10:41 AM8/21/11
to
David Thompson <dave.th...@verizon.net> wrote:

(snip)


>> I believe most operating systems now zero dynamically allocated
>> memory for security reasons. It used to be you got whatever was
>> in memory, possibly including data from a previous program.
>> That is pretty much not allowed today.

> Rather all process memory not initialized otherwise (canonically TEXT
> and [R]DATA). That includes BSS as noted, heap, and stack. Only heap
> is usually called 'dynamic' especially in C and Fortran.

> Yep, in OS/360 it was sometimes fun to poke around in your storage and
> see what you could find. Of course in those days you could also read
> _other_ partitions' storage -- but not write, barring bugs in setting
> the storage keys, and of course OS/360 had no bugs <G^N>.

For S/360, store protection was optional, at least on the smaller
processors. If you have store protection, fetch protection is
an additional option. The storage key has four bits for the key
value, plus one bit for fetch protection. I believe that for the
high-end systems, both were standard.

(Not quite related, but the storage keys for the 360/91 were the
first use of IC semiconductor memory for a memory subsystem,
with 16 bits per chip.)

In the usual case on multiuser systems, most (or all) of the OS
is not fetch protected, but other users' regions are. Many control
blocks need to be readable by all tasks. Also, the access method
routines for I/O are usually key 0, and entered by subroutine call
(BALR) in problem state, and with the user PSW key. Among other
reasons, that saves the overhead of SVC for I/O routines until the
actual I/O operation by EXCP.

-- glen

Louis Krupp

unread,
Aug 21, 2011, 6:54:03 AM8/21/11
to
On Sun, 21 Aug 2011 03:22:35 -0400, David Thompson
<dave.th...@verizon.net> wrote:

>On Mon, 15 Aug 2011 06:11:48 +0000 (UTC), glen herrmannsfeldt
><g...@ugcs.caltech.edu> wrote:
>
>> Nomen Nescio <nob...@dizum.com> wrote:
>
>> > Good form is to always, always, ALWAYS initialize variables before first
>> > use. On some platforms certain types of storage are defined to be binary
>> > zeros but depending on your data types this may not be a good value.
>>
>> That is, at least, a good rule for Fortran. C requires static data
>> to be zeroed by the system. I suppose sometimes I add an initializer
>> and sometimes not.
>>
>Note C uses 'appropriate' zeros, not binary zeros in the (rare) cases
>where there is a difference as 'Nomen' notes.

I wouldn't have given C credit for being that thorough. Is there
anyone out there with a system that has a data type for which binary
zero wouldn't be "appropriate", and a C program that demonstrates what
actually happens?

Louis
