ptr=malloc(size);
if(ptr==NULL)
But I came to know that malloc can return a valid pointer even when the
argument is zero.
one solution is
if(size==0)
ptr=NULL;
else
ptr=malloc(size);
If you have any other suggestions, they are welcome.
And that pointer can successfully be passed to free(). And you are
certain to be able to store at least zero bytes at the location pointed
to by the pointer, so what's the problem?
If the issue is that you are likely to call malloc() with an argument of
zero and then attempt to store data at the address returned, your
problem is not really with malloc() is it?
I think it would be useful for you to explain in more detail what
problem you think you need help with, as I'm not at all clear about that.
One issue might be that, if you call malloc(0) enough times, you can run out
of memory!
--
Bartc
employees = malloc(N * sizeof(struct Employee));
if(!employees)
goto error_handler;
memcpy(employees, temparray, N * sizeof(struct Employee));
The code is correct if malloc() returns non-zero for a zero allocation
and N == 0 is allowed. It's incorrect if malloc() returns zero in this
situation.
if (N) {
    employees = calloc(N, sizeof *employees);
    if (!employees) {
        goto error_handler;
    }
    memcpy(employees, temparray, N * sizeof *employees);
} else {
    employees = NULL;
}
There I both fixed the problem and cleaned up your code, that'll be
100 shillings please, due net 30.
Tom
It depends on what you think your problem is.
In my code examples,
"How to check whether malloc has allocated memory properly"
is done this way:
http://www.mindspring.com/~pfilandr/C/get_line/get_line_test.c
buff = malloc(size);
if (buff == NULL && size != 0) {
    /* malloc has not allocated memory properly */
} else {
    /* malloc has allocated memory properly */
}
--
pete
Yes: Just like malloc(N), malloc(0) can return NULL or non-NULL.
Whichever it returns, though, you must not try to read or store through
that pointer. If it's NULL, well, you mustn't try to read or store
through NULL. If it's non-NULL you can read or store anything whose
size does not exceed the argument value, but since *every* C data type
is at least one byte wide there is nothing that fits in 0 bytes, hence
nothing you can safely store in 0 bytes.
To test for "success" of malloc(N), you could write
ptr = malloc(N);
if (ptr == NULL && N > 0) {
    // failure
} else {
    // "success"
}
I put "success" in quotes because of an ambiguity that is meaningless
in practice: If malloc(0) returns NULL, we cannot tell whether it
"succeeded" or "failed," but it really doesn't make any difference.
> one solution is
>
> if(size==0)
> ptr=NULL;
> else
> ptr=malloc(size);
After this, a simple `if (ptr == NULL)' leaves you with the same
problem you had to begin with.
> If you have any other suggestions, they are welcome.
One thing to keep in mind is that this `size' value comes from
somewhere, and will (presumably) govern what you later try to store
in the allocated area. If your program can calculate `size' as 0,
it should also never try to store more than 0 bytes in the allocated
area -- in which case, all will be well no matter what malloc(0)
returned.
Since a 0-byte memory area is mostly useless, you probably should
not be allocating a large number of them -- if you are, you've most
likely got a bug somewhere. Usually, 0-byte allocations arise when
you've got a data structure whose size you adjust up and down as the
circumstances require. Maybe you're keeping track of employees'
birthdays using 366 dynamically-allocated blocks, and the only person
born on February 29 quits: Your program might do
count[date] -= 1; // making it zero
Person *tmp = realloc(persons[date], count[date] * sizeof *tmp);
if (tmp == NULL && count[date] > 0) {
    // Can't delete: Hire him back again!
} else {
    persons[date] = tmp;
}
In situations like this, a 0-byte allocation could be considered
"normal." But if you're routinely making thousands and thousands of
0-byte allocations, there's likely something wrong elsewhere.
--
Eric Sosman
eso...@ieee-dot-org.invalid
IMHO the case when malloc(0) returns NULL is more interesting, as there you
need to adjust the error checking, i.e. not exit the program due to lack of
memory if size was 0.
Bye, Jojo
Agreed, I can't think of a legit use for malloc'ing zero bytes.
It should return NULL to trip up [what should be in place] normal
error detection.
If possible I would design the program so that `size' is only allowed to
be a positive number. If it is a library I would simply assert that size
> 0 and put the responsibility on the client to make sure that the
condition is met (the fail fast strategy).
/August
--
The competent programmer is fully aware of the limited size of his own
skull. He therefore approaches his task with full humility, and avoids
clever tricks like the plague. --Edsger Dijkstra
I would be a bit careful with putting stuff at the start of the
allocated area - if you put e.g. 4 bytes at the start and then
use the original address + 4 then this may not be properly aligned
for all possible types. You can only be sure about that for the
original address returned by malloc().
Regards, Jens
--
\ Jens Thoms Toerring ___ j...@toerring.de
\__________________________ http://toerring.de
You can do this in your own program if you like, but note that
malloc(0) itself cannot work this way. If it returns non-NULL, it
must return a value that is distinct from all the other values it
has returned (that have not yet been released). That is, malloc(0)
must satisfy:
void *p = malloc(0);
void *q = malloc(0);
assert (p == NULL || p != q);
Of course, you can make your wrapper behave as you please -- but
you can't change the behavior of the underlying malloc().
--
Eric Sosman
eso...@ieee-dot-org.invalid
Careful! This has undefined behavior if malloc(n) returns NULL,
even if n is zero. See 7.21.1p2 and 7.1.4p1, and note the absence of
any "explicitly stated otherwise" language in 7.21.2.1.
In other words: You needn't special-case zero (much), but you
still need the NULL test.
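Concretely, a version that keeps the NULL test while treating N == 0
as success might read like this (a sketch, reusing the employees and
temparray names from upthread):

employees = malloc(N * sizeof *employees);
if (employees == NULL && N != 0)
    goto error_handler;             /* genuine allocation failure */
if (employees != NULL)              /* memcpy(NULL, ..., 0) is UB */
    memcpy(employees, temparray, N * sizeof *employees);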
> This is similar to the old zero trip loops of Fortran and Algol; both languages
> could implement identical semantics, but which had the more convenient syntax
> was a matter of taste.
In the long-ago days when I used FORTRAN (II and IV), it had
no construct I'd have described as a "zero-trip loop." Specifically,
a DO loop always executed its body at least once.
--
Eric Sosman
eso...@ieee-dot-org.invalid
That is not my point. Of course malloc(0) is stupid, but calculating the
size of a buffer and then malloc()'ing that is legal, even if the size
calculation may result in 0. But in that case checking malloc() for
returning NULL is not sufficient for aborting the program; you'd also need
to check size, either before the malloc (and skip it) or after, but before
jumping to the error handling.
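In sketch form, the check-before variant (buf and error_handler are
just placeholder names):

buf = NULL;
if (size != 0) {
    buf = malloc(size);
    if (buf == NULL)
        goto error_handler;   /* genuine out-of-memory, size > 0 */
}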
I once had to debug a program, actually 'nm', which aborted with a memory
error on an object file that, as it turned out later, had 0 symbols (which
is unusual, but legal), and so 'nm' successfully malloc()'ed 0 bytes, but
choked on the NULL check.
Bye, Jojo
That works. It's silly, IMHO, but it works. (Keep in mind that
`int' and `size_t' are not synonymous, so you may be inviting trouble
with your choice of parameter type.)
>> Of course, you can make your wrapper behave as you please -- but
>> you can't change the behavior of the underlying malloc().
>
> And I can hide that malloc if I want.
Not sure what you mean by "hide." You cannot "hide" it in the
sense of making it unavailable to other code in the program, nor can
you "hide" it in the sense of intercepting all malloc() calls and
routing them somewhere other than to the real malloc(). You can, of
course, threaten to flog or "hide" or "give a hiding to" anyone who
calls malloc() directly instead of using your wrapper -- but that's
social engineering, not software engineering.
--
Eric Sosman
eso...@ieee-dot-org.invalid
The phrase "duck and cover" springs to mind.
That's not a "zero-trip loop," it's an ordinary loop conditionally
skipped. (And it still iterates once for n=-42.) (And it's not "very
early on," because the "logical IF" wasn't in the language prior to
FORTRAN IV.)
--
Eric Sosman
eso...@ieee-dot-org.invalid
"Eric Sosman" <eso...@ieee-dot-org.invalid> wrote in message
news:iedo67$d4i$1...@news.eternal-september.org...
Fortran IV came out nearly 50 years ago. How much earlier do you want to go?
--
Bartc
I avoid this in my programs/wrappers, because it makes debugging harder,
as I like tools like valgrind to be able to catch writes outside of a
block. If a zero-length block is allocated as a one-byte block, then
writes to that first byte won't be caught. Thus, my wrappers do something
like this:
void* p = malloc( s );
if ( p == NULL && s == 0 )
    p = malloc( 1 );
if ( p == NULL )
    out_of_memory();
... use p
Only if the system returns null for a zero-length malloc does it try
malloc(1); in that case, the system isn't equipped for allowing the best
debugging, and the code accepts that.
"Nearly 50 years ago" FORTRAN IV was available on a handful of
computers. In late 1966 (44 < 50 years ago) I was writing my first
programs in FORTRAN II.
Fortran, with its newfangled mixed-case name (note that even C has
not yet advanced to a mixed-case name), didn't arrive until 1991. By
then, CALL EXIT was a vanished speck in my personal rear-view mirror.
FORTRAN (NO SUFFIX) was altogether before my time, though.
--
Eric Sosman
eso...@ieee-dot-org.invalid
> if (N) {
> employees = calloc(N, sizeof *employees);
> if(!employees) {
break;
//> goto error_handler;
> }
> memcpy(employees, temparray, N * sizeof *employees);
> } else {
> employees = NULL;
> }
if (N && (employees == NULL))
{
    do_something_here();
}
>
> There I both fixed the problem and cleaned up your code, that'll be
> 100 shillings please, due net 30.
>
> Tom
only 30 shillings, 70 go for the goto :-P
-rasp
>> "Mark Bluemel" <mark_b...@pobox.com> wrote in message
>> And that pointer can successfully be passed to free(). And you are
>> certain to be able to store at least zero bytes at the location pointed
>> to by the pointer, so what's the problem?
> One issue might be that, if you call malloc(0) enough times, you can run
> out of memory!
I tried this and it hung my system in 60 seconds. The last thing I could
see from the top command (Linux) after 40 or so seconds was that memory
usage was jumping in the 83-87% range and CPU usage in the 13-17% range;
then I switched from urxvt to Emacs to look at the source code, moved the
cursor 10 lines down, and everything just jammed. I had to do a hard
reboot. If malloc was called 1,000,000 times then the memory used should
still be 0 bytes (1,000,000 * 0 = 0), hence I still wonder: when malloc(0)
allocates 0 bytes, why did the system hang by eating all memory?
/* **** DO NOT TRY to run this code PLEASE ***. It will crash your
system. */
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    char *p;

    while (1)
    {
        p = malloc(0);
        if (NULL == p)
        {
            printf("********* Out of Memory ******** \n");
            break;
        }
    }
    printf("---------------------\n");
    return 0;
}
malloc(0) can return a value different from NULL.
It's a value which can and should be freed at some point.
Eventually malloc can run out of resources to keep
track of the values which it has returned
and which have not been freed.
--
pete
"pete" <pfi...@mindspring.com> wrote in message
news:4D0F5A...@mindspring.com...
Isn't it silly then to use up valuable resources to allocate 0 bytes?
If malloc has to return a non-NULL value for an empty allocation, why not a
pointer to the same, simple block of reserved memory? Then free() can
recognise the pointer and not do anything; and if free() isn't called, then
no extra memory is used up.
And, the application also has an extra way of telling whether an empty
allocation has been made (malloc(N) == malloc(0)).
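In wrapper form, the idea might look like this (wrap_malloc and
wrap_free are made-up names, and as pointed out below, malloc() itself
is not permitted to behave this way):

#include <stdlib.h>

static char zero_block;        /* shared sentinel for 0-byte requests */

void *wrap_malloc(size_t size)
{
    if (size == 0)
        return &zero_block;    /* same pointer for every empty request */
    return malloc(size);
}

void wrap_free(void *p)
{
    if (p != &zero_block)      /* recognise the sentinel; do nothing */
        free(p);
}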
--
Bartc
No more silly than doing a billion calls of malloc(1) without freeing them.
> If malloc has to return a non-NULL value for an empty allocation,
It doesn't "have to" return a non-NULL value. It's implementation defined
whether it returns NULL or something else.
> why not a
> pointer to the same, simple block of reserved memory? Then free() can
> recognise the pointer and not do anything; and if free() isn't called, then
> no extra memory is used up.
I was thinking the same thing. However, the Standard says it can't.
7.20.3p1:
> If the size of the space requested is zero, the behavior is implementation
> defined: either a null pointer is returned, or the behavior is as if the size
> were some nonzero value, except that the returned pointer shall not be
> used to access an object.
[...]
--
Kenneth Brody
Quite possibly, but it depends on what you're going to do with the
resulting pointer.
> If malloc has to return a non-NULL value for an empty allocation, why not a
> pointer to the same, simple block of reserved memory? Then free() can
> recognise the pointer and not do anything; and if free() isn't called, then
> no extra memory is used up.
Nearly the same goal could have been achieved by requiring malloc(0)
to return a null pointer. The only drawback would be that you
couldn't distinguish between a null pointer and a pointer returned
by malloc(0) -- but you can't do that anyway, since malloc(0)
can return a null pointer.
> And, the application also has an extra way of telling whether an empty
> allocation has been made (malloc(N) == malloc(0)).
I think the way the current definition came about is something
like this: Before C89, some implementations had malloc(0) return a
unique pointer value (that couldn't safely be dereferenced), and
some had it return a null pointer. The former is arguably more
consistent with the behavior of malloc() for non-zero sizes, and
lets you distinguish between results of different malloc(0) calls; it
makes malloc(0) a convenient way to generate a pointer value that's
non-null, guaranteed to be unique, and consumes minimal resources.
The latter avoids the conceptual problems of zero-sized objects.
The ANSI C committee chose to allow either behavior, probably
to avoid breaking existing implementations; they also defined the
behavior of realloc() so it could deal consistently with either a
null pointer or a pointer to a zero-sized object.
Personally, I think it would have been better to define the behavior
consistently and let implementations conform to what the standard
requires.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
A reasonable solution for this is to wrap malloc. That lets you
define a behavior rather than depending on the implementation's one.
e.g. if you get my_malloc(0) you could return NULL directly, or return
malloc(1). And if central handling of malloc failures is appropriate
to your program (e.g. abort on malloc() return of NULL), this wrapper
for malloc gives you a place to handle that as well.
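A sketch combining those choices (return NULL for size 0, abort
centrally on failure; both policies are examples, not requirements):

#include <stdio.h>
#include <stdlib.h>

void *my_malloc(size_t size)
{
    void *p;

    if (size == 0)
        return NULL;           /* zero-size behavior defined by us */
    p = malloc(size);
    if (p == NULL) {           /* central failure handling */
        fprintf(stderr, "my_malloc: out of memory\n");
        abort();
    }
    return p;
}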
-David
> On 12/16/2010 11:04 AM, Ered China Luin wrote:
<snip prior points about C malloc for size=zero>
> > This is similar to the old zero trip loops of Fortran and Algol; both languages
> > could implement identical semantics, but which had the more convenient syntax
> > was a matter of taste.
>
> In the long-ago days when I used FORTRAN (II and IV), it had
> no construct I'd have described as a "zero-trip loop." Specifically,
> a DO loop always executed its body at least once.
Changed in (Standard) F-77. Which also added logical-IF noted
downthread. I thought F-IV had most of the F-77 enhancements, but
don't remember this one specifically. I'm sure F-II didn't.
And as I understand it, in earlier Fortran the iteration count for a "zero-trip"
loop was undefined, but many implementations behaved as described above.
Ian
>> In the long-ago days when I used FORTRAN (II and IV), it had
>> no construct I'd have described as a "zero-trip loop." Specifically,
>> a DO loop always executed its body at least once.
> Changed in (Standard) F-77. Which also added logical-IF noted
> downthread. I thought F-IV had most of the F-77 enhancements, but
> don't remember this one specifically. I'm sure F-II didn't.
The zero trip loop was not allowed in Fortran 66. Even more,
the starting, ending, and increment values all had to be greater
than zero. (It is good arrays started at one, otherwise one
wouldn't be able to loop through an array!)
Compilers would catch it at compile time if constants were used,
but not variables. It was, then, common to put the test at
the end of the loop, allowing for the 'one trip' feature.
Some even say that the BXLE instruction was added to S/360
specifically for Fortran DO loops.
Logical IF was in Fortran IV and Fortran 66. The structured
form of IF was added in Fortran 77.
Many of the Fortran IV features not in Fortran 66 were added
into Fortran 77, but also many others.
-- glen
More precisely, code that had a "zero-trip loop count" was nonconforming
to the f66 ANSI standard. It was not just that the resulting loop count
was undefined, the entire behavior of the program was undefined, as with
any other nonstandard code. This is commonly misunderstood, with the
misunderstanding even making its way into compiler switch names, which
probably then reinforced user misunderstandings. As noted, f77
standardized such code, with the zero-trip interpretation.
I've seen f77 and later compilers where a switch named -f66 did nothing
other than change zero-trip DO loops to be implemented with one trip.
That always struck me as a misleading switch name because that was *NOT*
one of the areas where the f77 standard was incompatible with f66. The
f77 standard has an annex listing the incompatibilities, and that isn't
in it, insomuch as giving an interpretation to formerly nonstandard code
does not count as an incompatibility. The -f66 compiler switch in
question did nothing about any of the actual incompatibilities between
the standards.
This probably traces back to people confusing particular compiler
implementations with the standard. That was more common in the days of
f66 than it is now. Before f66, of course, there was no formal standard
- just particular implementations, some of which were dominant enough to
almost constitute de facto standards.
--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain
Hmm. I am afraid that I regard that as being a bit revisionist and
legalistic, because that wasn't how the Fortran 66 standard was
interpreted in the early 1970s. It's exactly like the situation
with impure functions, where the words are now given a meaning
that is different from the one they were given then.
Having said that, I don't know WHY the words were interpreted in
the way that they were then - unlike with impure functions, there
is no ambiguity, and the standard says clearly that the terminal
value must not be less than the initial value. It might be because
it says it in the section that describes how the DO statement works,
and not in the one that describes how it is used.
A possibility is that it was specified in the IBM Fortran language,
and everybody followed that in this respect. I don't think that I
have a copy of a manual of that era to check. There may be one online.
Regards,
Nick Maclaren.
This annoyed me, so I did a search. Yes, it's IBM. The 7094
explicitly specified the one-trip semantics and the 360/370
language merely said that the initial value SHOULD not be larger
than the final one. See:
http://www.fh-jena.de/~kleine/history/
But I take your point that the Fortran 66 standard did unambiguously
specify that a null count DO loop was an error. What I remember
VERY distinctly was that the near-universal belief was that it
specified the one-trip semantics. It is possible that most experts
regarded the restriction as a defect in the standard, but I don't
remember that aspect.
Regards,
Nick Maclaren.
(snip on one-trip DO)
> This annoyed me, so I did a search. Yes, it's IBM. The 7094
> explicitly specified the one-trip semantics and the 360/370
> language merely said that the initial value SHOULD not be larger
> than the final one.
The ones I remember would enforce this for constants. Also,
that all are greater than zero.
Yes, but that's not the point. One of the defects of the way that
almost all standards are written is that the semantic guarantees
have to be deduced from the semantic constraints and intent. They
almost all have the concept of defined behaviour that may be rejected
by a compiler (due to resource constraints if nothing else). This
was regarded as being something else of the same category.
Regards,
Nick Maclaren.
> But I take your point that the Fortran 66 standard did unambiguously
> specify that a null count DO loop was an error. What I remember
> VERY distinctly was that the near-universal belief was that it
> specified the one-trip semantics.
Yes, I recall that near-universal belief as well. I ascribe it to
confusion between the standard and particular implementations. After
all, what I most recall as near universal in f66 days was that most
programmers never even saw a copy of the actual standard, but relied on
vendor documentation or 3rd-party books. I personally never saw a copy
of the f66 standard until well after f77 was out. I'm not sure I ever
even met anyone during the f66 timeframe who had seen the standard.
Another thing I recall from that timeframe was working with some
compilers that did not have one-trip semantics. Memory has gotten vague
enough that I can't say for sure which ones they were, but I sure recall
working with some because I recall having to fix code that assumed the
behavior.
> This probably traces back to people confusing particular compiler
> implementations with the standard. That was more common in the days of
> f66 than it is now. Before f66, of course, there was no formal standard
> - just particular implementations, some of which were dominant enough to
> [almost] constitute de facto standards.
>
Since this is x-posted to both C and Fortran groups, I think we need the
syntax name to describe which standard. It was my understanding that
before 1966, IBM was "standard fortran." Were there other vendors that
could make the same claim? (I was busy being born in '66.)
I would say that C became "standard" in '89-'90 with ANSI. Would people
say that K&R1 was a de facto (C) standard?
I'm reading _Perl Cookbook_ today and find a quote from clc contributor
Chris Torek at the beginning of §9: "Unix has its weak points, but its
file system is not one of them."
(Hey Chris, if you happen to read this and are still in Salt Lake,
what's the weather and roads like? I'm coming to Zion for New Years.
I'm nervous about driving conditions with my truck this time. Last year
I drove a subaru outback and was all over the roads.)
Cheers,
--
Uno
Metacommand:$DO66
The following F66 semantics are used:
*Statements within a Do loop are always executed at least once.
*Extended range is permitted; control may transfer into the syntactic
body of a DO statement.
The range of the DO statement is thereby extended to include, logically,
any statement that may be executed between a DO statement and
its terminal statement.
However, the transfer of control into the range of a DO statement
prior to the execution of the DO statement or following the final
execution of its terminal statement is invalid.
Whatever all that means, verbatim from the MS Fortran reference,
typing errors are free of charge :) .
> It was my understanding that before 1966, IBM was "standard fortran."
No. It was just IBM's Fortran, generally with the particular compiler
version number. The term "standard" was neither applicable nor generally
used.
I can hardly conceive of what happened during that conception, but I
wouldn't consider any stroke of it contrary to the open source movement.
(I'm sure DoD and the Usual Suspects played their part.)
--
(=)
I had, but I rarely used it.
As far as I know, it was one of the first standards to use the modern
informal conventions, which are the source of most of these problems.
Most previous languages had no standard-like specification or were in
a semi-mathematical notation.
>Another thing I recall from that timeframe was working with some
>compilers that did not have one-trip semantics. Memory has gotten vague
>enough that I can't say for sure which ones they were, but I sure recall
>working with some because I recall having to fix code that assumed the
>behavior.
I can't remember if I used them or only wrote code that would be
expected to work on them.
Regards,
Nick Maclaren.
If I have parsed your sentence correctly, I find it incomprehensible.
>(I'm sure DoD and the Usual Suspects played their part.)
Nah. They did with the USMIL extensions, which had some later
effect, but had effectively no effect on Fortran before (really)
Fortran 90 and possibly a couple of things in Fortran 77.
Regards,
Nick Maclaren.
> Uno <merril...@q.com> wrote:
> >On 12/27/2010 8:24 PM, Richard Maine wrote:
> >> Uno<merril...@q.com> wrote:
> >>
> >>> It was my understanding that before 1966, IBM was "standard fortran."
> >>
> >> No. It was just IBM's Fortran, generally with the particular compiler
> >> version number. The term "standard" was neither applicable nor
> >> generally used.
> >
> >I can hardly conceive of what happened during that conception, but I
> >wouldn't consider any stroke of it contrary to the open source movement.
>
> If I have parsed your sentence correctly, I find it incomprehensible.
My sentiments as well. I honestly can't tell whether it is intended to
say something, or whether it is just some kind of nonsensical playing
with words intended to be humorous. In one case, the meaning escapes me.
In the other case, the humor does. I often have this problem with Uno's
posts. I can't tell his serious comments from his joking around. I
probably should resist the temptation to respond to them at all.
Allocating memory, even zero bytes of it, is not free. malloc() et al
must keep a record of each allocation, and that record consumes memory
of its own that is _not_ included in the (zero-byte, in this case)
object returned to the caller.
If you're making large allocations, or relatively few small ones, this
overhead is negligible. If you make lots of small allocations, though,
the overhead can consume significantly more memory than the actual
objects you're allocating--infinitely more in the case of zero-byte
objects. That's why programs that do so usually have special-purpose
allocators more efficient than a general-purpose one like malloc().
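For illustration, a minimal sketch of the kind of special-purpose
allocator meant here -- a fixed arena carved out sequentially, so the
only per-allocation overhead is alignment padding (the names and the
16-byte alignment are assumptions):

#include <stddef.h>

#define ARENA_SIZE (1u << 20)
#define ALIGN 16u              /* assumed sufficient for any type here */

static unsigned char arena[ARENA_SIZE];
static size_t used;

void *arena_alloc(size_t size)
{
    size_t start = (used + ALIGN - 1) & ~(size_t)(ALIGN - 1);

    if (start > ARENA_SIZE || size > ARENA_SIZE - start)
        return NULL;           /* arena exhausted */
    used = start + size;
    return arena + start;
}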
S
--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
I may have misunderstood Keith's post else-thread, but it (Message-ID:
<ln7hf4r...@nuthaus.mib.org>) contradicts what you say above
(i.e., the implementation could return a pointer that is non-null and
unique so that p == q holds).
- Anand
> >(I'm sure DoD and the Usual Suspects played their part.)
>
> Nah. They did with the USMIL extensions, which had some later
> effect, but had effectively no effect on Fortran before (really)
> Fortran 90 and possibly a couple of things in Fortran 77.
A couple of years after the mil-std-1754 specification, there were a
couple of vendors that mentioned a DOE (Department of Energy)
specification for asynchronous I/O in fortran. I tried to track
this down a few times in the early 80's and never found it anywhere.
Then in later versions of their compiler documentation the DOE
references were removed. I suspect that there might have been some
kind of draft document at one time, but perhaps it never matured.
I think the late 70's through the mid 80's was a time when customers
were just beginning to shift over to the idea of writing standard
code as a means to achieve portability. Before then, the vendors
were happy to implement extensions and features to their big
customers. This kept the customers happy and it helped lock in
those customers to the vendor's hardware and software. But in the
80's, especially the late 80's, there were all kinds of new hardware
being adopted by new companies, minicomputers were becoming popular,
PCs were becoming useful for scientific programming, networking was
becoming popular allowing scientists to sit in front of one piece of
hardware and run programs on some remote piece of hardware, the
first commercial parallel computers were appearing, and so on. In
this new environment, portability of source code was more important
than before, and that is why the standards process become more
important to the end programmers. That is also why the failures of
the fortran standards committee during the decade of the 80's to
move forward was such a devastating blow to the popularity of the
language.
$.02 -Ron Shepard
Some of us had been doing that for a decade before that.
Regards,
Nick Maclaren.
Unless you meant the difference between de facto and de jure
standards. I agree that the latter dated from Fortran 77, because
of the character-handling problem. But the widespread use of
PFORT shows the interest in that a long time earlier.
Regards,
Nick Maclaren.
I well believe you were. But my observation was that most programmers of
the day weren't until sometime around the time frame that Ron mentions.
I think you did misunderstand it, presumably because I didn't state it
clearly enough. Eric is correct.
Here's what I wrote:
| I think the way the current definition came about is something
| like this: Before C89, some implementations had malloc(0) return a
| unique pointer value (that couldn't safely be dereferenced), and
| some had it return a null pointer. The former is arguably more
| consistent with the behavior of malloc() for non-zero sizes, and
| lets you distinguish between results of different malloc(0) calls; it
| makes malloc(0) a convenient way to generate a pointer value that's
| non-null, guaranteed to be unique, and consumes minimal resources.
| The latter avoids the conceptual problems of zero-sized objects.
| The ANSI C committee chose to allow either behavior, probably
| to avoid breaking existing implementations; they also defined the
| behavior of realloc() so it could deal consistently with either a
| null pointer or a pointer to a zero-sized object.
|
| Personally, I think it would have been better to define the behavior
| consistently and let implementations conform to what the standard
| requires.
What I meant by "unique pointer value" is a pointer value that's
unique *for each call*.
And something I missed: even an implementation that returns a
unique (per-call) non-null pointer value for malloc(0) cannot do
so arbitrarily many times. Each call (assuming nothing is free()d)
consumes some resources, at least address space if not actual memory.
Eventually there won't be enough left to allocate the bookkeeping
information, and malloc(0) will fail and return a null pointer
anyway.
But if you want a series of unique address values, and you're not
planning to dereference any of them, malloc(1) will serve the same
purpose. Or, with less overhead, you could return addresses of
successive elements of a big char array, expanding it with realloc()
as needed.
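For instance, a sketch of the array variant with a fixed cap instead of
realloc() (realloc() may move the block, retiring the addresses already
handed out; the names here are made up):

#include <stddef.h>

static char tokens[4096];
static size_t next_token;

void *unique_token(void)
{
    if (next_token >= sizeof tokens)
        return NULL;           /* ran out of unique addresses */
    return &tokens[next_token++];
}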
Yes, I remember using PFORT on a decsystem-20 in the late 70's, and
that is one of the tools I had in mind when I wrote that sentence.
Are you saying that PFORT was commonly used a decade earlier?
Another useful tool from that era was FTNCHEK, which I think I
started using about 1986 or so.
$.02 -Ron Shepard
Not a full decade earlier, obviously, because it dates from only
1974. There were tools that preceded it, though I can no longer
remember anything much about them.
Back in the late 1960s, most American codes were system-specific,
but the UK was always a hotchpotch, and MOST people expected to
use a wide variety of systems in a short period. NAG was founded
in 1970, initially for the ICL 1900, but the objective of portability
to all UK academic mainframes was adopted almost immediately, and
quite a lot of people wrote with the same objectives.
I contributed from 1973, and wrote code that was expected to work,
unchanged, on IBM 360, ICL 1900, several PDPs, CDCs, Univac, and
others that I can't remember. We wrote standard-conforming code
to do that, though the actual standard was adapted from Fortran 66,
and not the document itself.
For example, no assumption of one-trip DO-loops (or not), saved
data outside COMMON, extended ranges, etc. etc. But, essentially,
it was the sane subset of Fortran 66 that has remained unchanged
to this day - the code has been redone, but not because it stopped
working!
Regards,
Nick Maclaren.
Same applies to any utility function that you use widely in your
program. You can also in a variety of platform specific ways override
the libc malloc to achieve this end while still calling "malloc" in
your program. But using your own code is completely portable and
achieves the desired end -- which is to say consistent behavior of
malloc(0) under your control. Can't really see your issue here.
Thank you for the clarification.
- Anand
> Back in the late 1960s, most American codes were system-specific,
> but the UK was always a hotchpotch, and MOST people expected to
> use a wide variety of systems in a short period.
I'd believe that as consistent with my previously mentioned observation
that most programmers paid little attention to portability. I failed to
mention the qualifier, but my observations at the time were solely
within the US. Not until a bit later did I get any exposure to work
across the pond. In the environments I mostly saw, there was only a
single machine at your place of employment, and that machine was likely
to be the only one you used for somewhere close to a decade, with
software compatibility being a big argument for replacing it with
another from the same vendor when the decade was up.
> <nm...@cam.ac.uk> wrote:
>
> > Back in the late 1960s, most American codes were system-specific,
> > but the UK was always a hotchpotch, and MOST people expected to
> > use a wide variety of systems in a short period.
>
> I'd believe that as consistent with my previously mentioned observation
> that most programmers paid little attention to portability. I failed to
> mention the qualifier, but my observations at the time were solely
> within the US. Not until a bit later did I get any exposure to work
> across the pond. In the environments I mostly saw, there was only a
> single machine at your place of employment, and that machine was likely
> to be the only one you used for somewhere close to a decade, with
> software compatibility being a big argument for replacing it with
> another from the same vendor when the decade was up.
Yes, this was really the point that I was trying to make before. I
think most programmers up through the 70's (in the US, that was my
only experience at the time) only worked on a single machine. It
might have been an IBM shop, or a Univac shop, or a CDC shop, or
whatever, but that was the hardware that was available to a single
programmer, and his job was to squeeze everything out of it that he
could for his set of applications. That often meant using
machine-specific extensions for things like bit operators, or access
to some hardware widget or i/o device, or operations on character
strings, and so on. Given the choice between slow portable code or
fast machine-specific code, the pressure always seemed to be toward
the latter.
Then in the early 80's when some of the new hardware became
available, such as Cray vector processors, some programmers kept
this same mindset. I've seen rewrites of electronic structure codes
for cray computers that would have almost no chance of compiling on
any other hardware. Every other line of code seemed to have some
kind of special vector operator or something in it. I remember
seeing someone multiply an integer by ten by doing a
shift2-add-shift2 sequence (or maybe it was shift4-shift2-add, I
forget) because he had counted clock cycles and determined that that
was the best way to do it with that version of the hardware and
compiler. But as more and more hardware became available, and it
became necessary to port your codes quickly from one machine to
another, or to be able to run your code simultaneously on multiple
combinations of hardware and software, this kind of coding died out
pretty quickly. Low-level computational kernels, such as the BLAS,
were still done that way, but the higher-level code was written to
be portable, even at the cost of an extra machine cycle here and
there if necessary. All of this was also driven by the need to work
with collaborators who were using different hardware than you, and
the need to access and contribute to software libraries such as
netlib which were used on a wide range of hardware, and network
access to remote machines at the various NSF/DOE/DoD supercomputer
centers around the country.
In the old environment a programmer might take several months or
years to optimize some code for a specific machine, and then that
code might be used for a decade. In the new environment, the code
needed to be ported in a matter of days or weeks, and used for a few
months, at which time the machine might be replaced with new
hardware, or your NSF allocation would expire, or whatever. It was
this newer environment (at least in the US) that I think drove
programmers toward writing portable code, and in many ways that
meant conforming to the fortran standard.
There were some exceptions to this, of course, which have been
discussed often here in clf. One of them was the practical
observation that the nonstandard REAL*8 declarations were more
portable in many situations than the standard REAL and DOUBLE
PRECISION declarations. This was an example of the standard
actually inhibiting portability rather than promoting it. The KINDs
introduced finally in f90 solved this dilemma, but that is probably
part of f90 that should have been included in a smaller revision to
the standard in the early 80's rather than a decade later. There
may be other examples of this, but this is the only one that comes
to mind where I purposely and intentionally avoided using standard
syntax and chose to use instead the common nonstandard extension.
Then when f90 was finally adopted (first by ISO, then force fed to
the foot-dragging ANSI committee), this was one of the first
features that I incorporated into my codes. I even wrote some sed
and perl scripts to help automate these conversions, violating my
"if it ain't broke, don't fix it" guiding principle to code
maintenance.
$.02 -Ron Shepard
The main drive wasn't that the individual programmer had access to
multiple machines at one time, but that he collaborated with people who
had other ones. The other was that the next machine might well be very
different, whether at another location or even at the same one,
especially in academia.
While we typically had a lot less computer power than people in the
USA, that helped to AVOID extreme coding of the above nature,
because there was no option but to use smarter algorithms.
>That often meant using
>machine-specific extensions for things like bit operators, or access
>to some hardware widget or i/o device, or operations on character
>strings, and so on. Given the choice between slow portable code or
>fast machine-specific code, the pressure always seemed to be toward
>the latter.
That is still true - even when speed isn't important :-(
Regards,
Nick Maclaren.
> ...this is the only one that comes
> to mind where I purposely and intentionally avoided using standard
> syntax and chose to use instead the common nonstandard extension.
I mostly avoided this particular one because much of my work in the late
70's and early 80's was on CDC machines that did not support that syntax
(and didn't have an 8-byte real, or for that matter bytes at all).
Instead, I developed habits of using a style that made it easy to do
automated translation of "double precision" to "real". F77's generic
intrinsics made this at least reasonably practical, though it was still
a pain to go through the preprocessing/translation stage. I also was a
quick convert to F90 kinds.
> Ron Shepard <ron-s...@NOSPAM.comcast.net> wrote:
> >...nonstandard REAL*8 declarations
>
> > ...this is the only one that comes
> > to mind where I purposely and intentionally avoided using standard
> > syntax and chose to use instead the common nonstandard extension.
>
> I mostly avoided this particular one because much of my work in the late
> 70's and early 80's was on CDC machines that did not support that syntax
> (and didn't have an 8-byte real, or for that matter bytes at all).
I generally found that REAL*8 worked alright on these kinds of
machines. I forget exactly which compilers I used on CDC machines,
but I remember some kind of compiler option or something that mapped
these declarations to the 60-bit floating point type, which is what
I wanted on that hardware (I remember using CDC 6600 and 7600
machines). This also worked fine on the univac and decsystem-20
compilers I used; these were both 36-bit word machines, and the
REAL*8 declaration mapped onto the 72-bit floating point type which
is what I wanted on those. I also used harris computers a little
(these had 3, 6, and 12-byte data types I think), and I remember
getting things to match up alright there too. And of course there
were the cray and cyber computers and the fps array processors which
had 64-bit words, not bytes, and REAL*8 worked fine there too.
When used this way, REAL*8 is sort of a poor man's
selected_real_kind() where the 8 didn't necessarily mean anything
specific, but it resulted in the right kind of floating point. As I
complained before, all of this should have been incorporated into a
minor fortran revision in 1980 or so, along with the mil-std-1754
stuff and maybe a few other similar things. It sure would have made
fortran easier to use in that time period 1980-1995 before f90
compilers eventually became available.
> Instead, I developed habits of using a style that made it easy to do
> automated translation of "double precision" to "real". F77's generic
> intrinsics made this at least reasonably practical, though it was still
> a pain to go through the preprocessing/translation stage.
Yes, I did some of this too with sed (and later perl) scripts. In
particular, I wrote and maintained some of my codes using the
nonstandard IMPLICIT NONE, but I included the scripts with my code
distributions to replace these with other declarations for those
compilers that did not support this declaration. And there were a
few compilers like that, I forget which ones, but I know it was
important enough for me to worry about keeping my code consistent
with my conversion scripts.
I know that some programmers went much farther with this approach
than I did. The LINPACK library, for example, was written using
something called TAMPR which took a source file and could output
REAL, DOUBLE PRECISION, or COMPLEX code, including both the correct
declarations and the correct form for floating point constants.
There were other tools like RATFOR and SFTRAN which I think also had
some of this capability. But I was not comfortable getting too far
away from fortran source, so I did not use these kinds of tools
routinely in my own codes.
$.02 -Ron Shepard
> In article <1ju9akd.1t0oevbkz7znyN%nos...@see.signature>,
> nos...@see.signature (Richard Maine) wrote:
>
> > Ron Shepard <ron-s...@NOSPAM.comcast.net> wrote:
> > >...nonstandard REAL*8 declarations
> >
> > > ...this is the only one that comes
> > > to mind where I purposely and intentionally avoided using standard
> > > syntax and chose to use instead the common nonstandard extension.
> >
> > I mostly avoided this particular one because much of my work in the late
> > 70's and early 80's was on CDC machines that did not support that syntax
> > (and didn't have an 8-byte real, or for that matter bytes at all).
>
> I generally found that REAL*8 worked alright on these kinds of
> machines. I forget exactly which compilers I used on CDC machines,
> but I remember some kind of compiler option or something that mapped
> these declarations to the 60-bit floating point type,
The compilers I mostly used didn't accept the syntax at all.
> Metacommand:$DO66
> The following F66 semantics are used:
> *Statements within a Do loop are always executed at least once.
> *Extended range is permitted; control may transfer into the syntactic
> body of a DO statement.
> The range of the DO statement is thereby extended to include,
> logically, any statement that may be executed between a DO
> statement and its terminal statement. However, the transfer of
> control into the range of a DO statement prior to the execution
> of the DO statement or following the final execution of its
> terminal statement is invalid.
Before SUBROUTINE, FUNCTION, and CALL, subroutines were done
using GOTO and ASSIGNed GOTO. To allow for such within a DO loop,
one was allowed to GOTO out of a DO loop, do something else, and
then GOTO back again.
I believe that has been removed in newer versions of the
standard.
-- glen
Well, I don't see that it requires a government agency to make
a standard, but yes the IBM versions weren't quite constant, which
is an important part of a standard.
It seems that with Fortran II and Fortran IV, IBM did try to name
their specific versions of Fortran. It doesn't seem too far off
to say that a program that conforms to all the implementations
of IBM Fortran II or IBM Fortran IV follows an IBM standard.
-- glen
> A couple of years after the mil-std-1754 specification, there were a
> couple of vendors that mentioned a DOE (Department of Energy)
> specification for asynchronous I/O in fortran. I tried to track
> this down a few times in the early 80's and never found it anywhere.
> Then in later versions of their compiler documentation the DOE
> references were removed. I suspect that there might have been some
> kind of draft document at one time, but perhaps it never matured.
In the 1970's, some DOE labs ran IBM machines, and some CDC machines.
DOE headquarters, as far as I know, had IBM machines.
The OS/360 Fortran H Extended compiler supported asynchronous I/O.
I don't know about CDC Fortran and asynchronous I/O, though.
I would expect anything DOE related to follow one or the other.
(And note that DOE didn't exist before 1974.)
> I think the late 70's through the mid 80's was a time when customers
> were just beginning to shift over to the idea of writing standard
> code as a means to achieve portability. Before then, the vendors
> were happy to implement extensions and features to their big
> customers. This kept the customers happy and it helped lock in
> those customers to the vendor's hardware and software.
-- glen
> Yes, this was really the point that I was trying to make before. I
> think most programmers up through the 70's (in the US, that was my
> only experience at the time) only worked on a single machine. It
> might have been an IBM shop, or a Univac shop, or a CDC shop, or
> whatever, but that was the hardware that was available to a single
> programmer, and his job was to squeeze everything out of it that he
> could for his set of applications. That often meant using
> machine-specific extensions for things like bit operators, or access
> to some hardware widget or i/o device, or operations on character
> strings, and so on. Given the choice between slow portable code or
> fast machine-specific code, the pressure always seemed to be toward
> the latter.
The first reference to standard Fortran 66 that I remember was
that the Mortran2 processor was written in, and expected to generate,
standard Fortran 66 (at least as close as it could.)
All character processing was done using A1 format for input and
output, with the expectation (maybe not required by the standard)
that one could read in, store in a variable, compare, and write
out an INTEGER variable using A1 format.
I do remember having a friend try to compile it using the NCR
Century 100 Fortran compiler, but don't remember if it ever compiled.
It might be that NCR implemented Standard Basic FORTRAN, a subset
defined in the standard.
-- glen
> I would expect anything DOE related to follow one or the other.
> (And note that DOE didn't exist before 1974.)
The date for the DOE was October 1, 1977. It was the beginning of
the first fiscal year after Carter became president. The labs all
existed before that, of course, they dated back to WWII and the
Manhattan project, but they were under control of various other
organizations that were not at the cabinet level, such as the Atomic
Energy Commission (AEC) and the Nuclear Regulatory Commission (NRC).
http://www.energy.gov/about/origins.htm
When the DOE was formed, partly as a response to the OPEC oil
embargo in 1973, energy policy was considered the "moral equivalent
of war". Now it is not just the "moral equivalent", we are actually
fighting wars over energy supply, but there were a few decades in
between.
One of the vendors that mentioned the DOE I/O thing was Floating
Point Systems. They had fortran (cross-) compilers for their
hardware that did support true asynchronous i/o (i.e. not just the
syntax). I forget who the other vendor was, but it could have been
CDC or ETA (which were the same company for a while). DOE labs at
that time had a pretty wide range of hardware, including machines
from IBM, Cray, DEC, FPS, Convex, SCS (a minicomputer based on Cray
architecture), Alliant, CDC, and ETA. There were probably others,
these were the ones that I worried about with my codes.
It is kind of interesting that none of these mainframe/supercomputer
class machines were based on the x86 intel architecture, isn't it?
$.02 -Ron Shepard
>> (And note that DOE didn't exist before 1974.)
> The date for the DOE was October 1, 1977. It was the beginning of
> the first fiscal year after Carter became president.
Yes, just after I posted I remembered, it was 1974 that was the
end of AEC, and the beginning of ERDA. AEC was 1947-1974, switch
the digits around, easy to remember.
> The labs all
> existed before that, of course, they dated back to WWII and the
> Manhattan project, but they were under control of various other
> organizations that were not at the cabinet level, such as the Atomic
> Energy Commission (AEC) and the Nuclear Regulatory Commission (NRC).
> http://www.energy.gov/about/origins.htm
> When the DOE was formed, partly as a response to the OPEC oil
> embargo in 1972, energy policy was considered the "moral equivalent
> of war". Now it is not just the "moral equivalent", we are actually
> fighting wars over energy supply, but there were a few decades in
> between.
> One of the vendors that mentioned the DOE I/O thing was Floating
> Point Systems. They had fortran (cross-) compilers for their
> hardware that did support true asynchronous i/o (i.e. not just the
> syntax). I forget who the other vendor was, but it could have been
> CDC or ETA (which were the same company for a while). DOE labs at
> that time had a pretty wide range of hardware, including machines
> from IBM, Cray, DEC, FPS, Convex, SCS (a minicomputer based on Cray
> architecture), Alliant, CDC, and ETA. There were probably others,
> these were the ones that I worried about with my codes.
Yes, there were a number of smaller machines, but it seems to
me that each lab had either large IBM machines, or large CDC machines
(and later Cray) at the top. I don't remember any with both,
though I wasn't especially trying to keep track. Smaller machines
like VAX were pretty popular all around.
> It is kind of interesting that none of these mainframe/supercomputer
> class machines were based on the x86 intel architecture, isn't it?
There was iPSC, though it was never very popular. The fifth choice
in the wikipedia disambiguation page for IPSC.
Not so much later they went to the 860 for such machines.
-- glen
I think that I once installed that and supported it, but cannot
remember anything about it!
>All character processing was done using A1 format for input and
>output, with the expectation (maybe not required by the standard)
>that one could read in, store in a variable, compare, and write
>out an INTEGER variable using A1 format.
It wasn't. And it didn't always work. The ICL 1900 was what would
later be called a RISC machine, and did comparisons by subtraction,
with overflow trapped for both integer and real!
Some (usually originally IBM) Fortran programs used D.P. to get more
characters, and then came BADLY unstuck on systems that normalised
upon loading or storing floating-point numbers :-)
And, of course, some compilers copied only one bit of LOGICAL.
Regards,
Nick Maclaren.
" One issue might be that, if you call malloc(0) enough times, you can
run out of memory! "
How would I run out of memory if I am allocating 0 bytes all the time?
[OT] Huh? The 8088 was an 8-bit bus version of the 8086. The 8086 came from
the 8080, the 8080 from the 8008 and the 8008 from the 4004. The 4004 was
designed to do 4 bit (BCD) arithmetic inside a Japanese calculator. I
remember seeing a 4004 based computer design inside a university EE lab back
in 1972.
> In comp.lang.fortran Richard Maine <nos...@see.signature> wrote:
> > Uno <merril...@q.com> wrote:
>
> >> It was my understanding that before 1966, IBM was "standard fortran."
>
> > No. It was just IBM's Fortran, generally with the particular compiler
> > version number. The term "standard" was neither applicable nor generally
> > used.
>
> Well, I don't see that it requires a government agency to make
> a standard,
It doesn't, and I didn't say that it does. NIST, for example, is not a
government agency. It does require some kind of recognized organization.
Sure, one can use the word more generically, but it then is sort of
meaningless and everyone can claim that "our version is the standard".
That's sort of like adspeak where everyone claims that their product is
the best. A good example is the way that Verizon's ads like to claim
that their cell network is the most reliable, complete with some
meaningless number for reliability. Verizon objected strongly to the
establishment of any actual standard for defining or measuring
reliability and they don't say what their internal one is. They are just
the best according to their own measurement of their own unspecified
criterion. Yeah, sure.
Or an even better example is the way that I've seen at least one diploma
mill claim to be "accredited". Are they accredited by any recognized
accreditation organization? No. I forget exactly who turned out to be
the source of the "accreditation". Something like the council of indian
tribes of North Carolina? That might not be right, but it was something
along that line; obviously someone willing to be paid for agreeing to
call the diploma mill accredited. Worth about as much as the diplomas,
which mean only that you paid whatever their cost was. Might as well
claim to be accredited by my brother-in-law.
In the case of pre-f66 Fortrans, I don't think you'll find that they
were even claimed to be standards or that people used that terminology.
That's the "nor generally used" part of my quoted statement. Having
someone 50 years later say that it had some of the characteristics of a
standard isn't the same thing.
So all the BCD instructions are still there for backward
compatibility?? Is there a Vista compatibility wizard for
all that legacy 4004 code?
--
;)
Er, no, sorry. If you mean the National Institute of Standards and
Technology, it states "NIST is an agency of the U.S. Department of
Commerce." You may have meant ANSI or IEEE. I agree with your point,
but not your example!
Also, the recognition can be purely de facto - look at MPI for an
example of that - but the MPI Forum isn't a recognised organisation
of any shape or form (except as the 'owner' of the MPI standard!)
Regards,
Nick Maclaren.
> glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>
> > In comp.lang.fortran Richard Maine <nos...@see.signature> wrote:
> > > Uno <merril...@q.com> wrote:
> >
> > >> It was my understanding that before 1966, IBM was "standard fortran."
> >
> > > No. It was just IBM's Fortran, generally with the particular compiler
> > > version number. The term "standard" was neither applicable nor generally
> > > used.
> >
> > Well, I don't see that it requires a government agency to make
> > a standard,
>
> It doesn't, and I didn't say that it does. NIST, for example, is not a
> government agency. It does require some kind of recognized organization.
> Sure, one can use the word more generically, but it then is sort of
> meaningless and everyone can claim that "our version is the standard".
[further ranting elided]
Back from breakfast now...
A big reason that abuse of the term "standard" in technical contexts
gets such a reaction from me is that I want to keep the term from
becoming useless through dilution.
Several years back, one of the regular posters here (a Dave Frank, if I
recall correctly, though I'm not sure if that was his actual name - I
seem to recall some use of pseudonyms) used to insist on using the term
"standard" to mean whatever his current favorite compiler was. He would
give answers about "standard Fortran" to unsuspecting posters, even
posters who had specifically said what other compiler they were using.
When people pointed out that his answers did not reflect the standard,
he would explain that when he said standard he meant <whatever his
current favorite compiler was> and that was a lot better than those
other standards. It might have been OK had he said that up front in his
answers, but instead he just said "the standard" in contexts where you
would never guess what he meant unless you had been a regular reader of
his posts. Occasionally he would capitalize it as "The Standard" for
emphasis.
I found it ironic that <whatever his current favorite compiler was>
changed several times as the vendors stopped making that particular
product. But each time he came up with a new definition of "standard
Fortran", which was the only thing anyone should ever use now and
forever (and if they had hardware or operating systems it didn't run on,
they should fix that also).
> In comp.lang.fortran Ron Shepard <ron-s...@nospam.comcast.net> wrote:
> (snip)
>
> > A couple of years after the mil-std-1754 specification, there were a
> > couple of vendors that mentioned a DOE (Department of Energy)
> > specification for asynchronous I/O in fortran.
> I don't know about CDC Fortran and asynchronous I/O, though.
CDC definitely supported asynchronous I/O, though I don't know whether
it had any relation to the aforementioned DOE spec, which I don't know
anything about.
CDC used bufferin/bufferout, which I believe was also used by some other
vendors, but CDC's is the one I most recall working with.
> In article <1juazq5.1hrqqa3138fzaeN%nos...@see.signature>,
> Richard Maine <nos...@see.signature> wrote:
> >NIST, for example, is not a government agency.
> Er, no, sorry. If you mean the National Institute of Standards and
> Technology, it states "NIST is an agency of the U.S. Department of
> Commerce." You may have meant ANSI or IEEE. I agree with your point,
> but not your example!
Oops, yes.
> NIST, for example, is not a
> government agency.
I agree with the rest of your post, but not this part. NIST is
funded by congress, just like NASA, EPA, NIH, NSF, and numerous
other government agencies. It is not a cabinet level department,
but neither are those other agencies. If I remember correctly, NIST
(and before that NBS) is under the Department of Commerce.
Some examples of standards organizations that are not government
agencies are IEEE, ANSI, and UL.
$.02 -Ron Shepard
This question has been answered multiple times in this thread.
S
--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
> In article <1juazq5.1hrqqa3138fzaeN%nos...@see.signature>,
> nos...@see.signature (Richard Maine) wrote:
>
> > NIST, for example, is not a
> > government agency.
>
> I agree with the rest of your post, but not this part.
See my sheepish reply to Nick. I got NIST and ANSI mixed up. You'd think
I would know, as I used to volunteer for an ANSI committee (specifically X3J3).
This side of the pond, we call them quangos.
Regards,
Nick Maclaren.
--
Joe Wright
"If you rob Peter to pay Paul you can depend on the support of Paul."
>>The first reference to standard Fortran 66 that I remember was
>>that the Mortran2 processor was written in, and expected to generate,
>>standard Fortran 66 (at least as close as it could.)
> I think that I once installed that and supported it, but cannot
> remember anything about it!
Before Fortran 77, MORTRAN2 offered free-form, semicolon-terminated
statements; alphanumeric statement labels (surrounded by colons);
block structure using angle brackets (< and >); WHILE, UNTIL, and
DO forms that work with such blocks; labelled and unlabelled EXIT
and NEXT statements for loops; and user-defined macros, similar to
some of the uses of the C preprocessor.
>>All character processing was done using A1 format for input and
>>output, with the expectation (maybe not required by the standard)
>>that one could read in, store in a variable, compare, and write
>>out an INTEGER variable using A1 format.
> It wasn't. And it didn't always work. The ICL 1900 was what would
> later be called a RISC machine, and did comparisons by subtraction,
> with overflow trapped for both integer and real!
Yes, but if that doesn't work then you pretty much can't do any
useful character processing at all. MORTRAN2 uses the first card
of the macro file as its character set, read in A1 format.
Even so, the comparisons are done in few enough places that you
could modify the processor to do them a different way, where the
machine allows it.
> Some (usually originally IBM) Fortran programs used D.P. to get more
> characters, and then came BADLY unstuck on systems that normalised
> upon loading or storing floating-point numbers :-)
Fortunately OS/360 doesn't do that, but, yes, there are systems
like that.
> And, of course, some compilers copied only one bit of LOGICAL.
-- glen
>> It is kind of interesting that none of these mainframe/supercomputer
>> class machines were based on the x86 intel architecture, isn't it?
> [OT] Huh? The 8086 was a 16 bit bus version of the 8088. The 8088 came from
> the 8080, the 8080 from the 8008 and the 8008 from the 4004. The 4004 was
> designed to do 4 bit (BCD) arithmetic inside a Japanese calculator. I
> remember seeing a 4004 based computer design inside a university EE lab back
> in 1972.
Well, the 8086 came before the 8088, but otherwise that is pretty
much the way it went. The 8086 instruction set is designed to be
assembly source compatible (with appropriate macros in a few cases)
with the 8080 instruction set.
The Intel iPSC is based on a hypercube array of 80286/80287
processor units.
-- glen
>> I don't know about CDC Fortran and asynchronous I/O, though.
> CDC definitely supported asynchronous I/O, though I don't know whether
> it had any relation to the aforementioned DOE spec, which I don't know
> anything about.
> CDC used bufferin/bufferout, which I believe was also used by some other
> vendors, but CDC's is the one I most recall working with.
The IBM form seems to be:
READ(a,ID=n) ...
WRITE(a,ID=n) ...
WAIT(a,ID=n)
Where the ID= value is used to connect the WAIT to the appropriate
READ or WRITE. Unformatted only, and the usual rules on accessing
variables between the READ or WRITE and matching WAIT.
(I don't remember ever doing it, though.)
-- glen
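A rough C analogue of that READ/WAIT pattern, using POSIX AIO rather
than the IBM extension itself; the aiocb plays the role of the ID=
value, error handling is kept minimal, and "data.bin" is just a
made-up file name (link with -lrt on some systems):

#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[4096];
    struct aiocb cb;
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0)
        return 1;

    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0)      /* start the transfer, like READ(a,ID=n) */
        return 1;

    /* ... other work here; buf must not be touched until the wait,
       just like the variables between a Fortran READ and its WAIT */

    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);  /* block until done, like WAIT(a,ID=n) */
    printf("read %ld bytes\n", (long)aio_return(&cb));
    close(fd);
    return 0;
}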
--
;)
The 80x86 took one approach to (unsigned) packed BCD: it uses decimal-adjust
instructions. The 65xx has a separate decimal mode which is invoked before
doing packed BCD. So one way to convert binary to BCD on the 65xx is to
shift bits left into carry, then add a two-byte set of zero-page locations
to itself in BCD (in decimal mode).
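In C, the same trick looks roughly like this; bcd_add here is a
made-up stand-in for the 65xx decimal-mode ADC:

#include <stdint.h>
#include <stdio.h>

/* Packed-BCD add of two bytes plus a carry-in, adjusting each nibble
   the way a decimal-mode ADC does; returns the sum, sets *carry_out. */
static unsigned bcd_add(unsigned a, unsigned b, unsigned carry_in,
                        unsigned *carry_out)
{
    unsigned lo = (a & 0x0F) + (b & 0x0F) + carry_in;
    unsigned hi = (a >> 4) + (b >> 4);
    if (lo > 9) { lo -= 10; hi++; }
    *carry_out = (hi > 9);
    if (hi > 9) hi -= 10;
    return (hi << 4) | lo;
}

/* Convert an 8-bit binary value to packed BCD: shift each bit into
   "carry", then double the two-byte BCD accumulator with that carry,
   the sequence described above. */
static void bin_to_bcd(uint8_t bin, uint8_t bcd[2])
{
    bcd[0] = bcd[1] = 0;
    for (int i = 7; i >= 0; i--) {
        unsigned carry = (bin >> i) & 1;   /* bit shifted left into carry */
        unsigned c0, c1;
        bcd[0] = (uint8_t)bcd_add(bcd[0], bcd[0], carry, &c0);
        bcd[1] = (uint8_t)bcd_add(bcd[1], bcd[1], c0, &c1);
        (void)c1;   /* carry out of the high byte can't happen for 8 bits */
    }
}

int main(void)
{
    uint8_t bcd[2];
    bin_to_bcd(255, bcd);
    printf("%x%02x\n", bcd[1], bcd[0]);    /* prints 255 */
    return 0;
}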
Relevance to later architectures? IIRC microcode on early 360 systems
actually worked on 8 bit bytes.
[drifting] CALL 5 still exists as an alternative to INT 21H, with a
different register mapping, as a relic of CP/M in MS-DOS. Old DOS software
also used FCBs instead of file handles. I've had some very early DOS
software burp on Win NT/XP as it wants more than the usual number of FCBs
(file control blocks). These need to be set in CONFIG.NT instead of
CONFIG.SYS. I don't remember if any DOS Fortran compilers fall into that
category.
So when I go to 64 bit Windows I can finally kiss MS-DOS goodbye.
> Anand Hariharan <mailto.anan...@gmail.com> writes:
>> On Dec 16, 10:30 am, Eric Sosman <esos...@ieee-dot-org.invalid> wrote:
>> (...)
>>> malloc(0) itself cannot work this way. If it returns non-NULL, it
>>> must return a value that is distinct from all the other values it
>>> has returned (that have not yet been released). That is, malloc(0)
>>> must satisfy:
>>>
>>> void *p = malloc(0);
>>> void *q = malloc(0);
>>> assert (p == NULL || p != q);
>>>
>>
>> I may have misunderstood Keith's post else-thread, but it (Message-ID:
>> <ln7hf4r...@nuthaus.mib.org>) contradicts what you say above
>> (i.e., the implementation could return a pointer that is non-null and
>> unique so that p == q holds).
>
> I think you did misunderstand it, presumably because I didn't state it
> clearly enough. Eric is correct.
>
> Here's what I wrote:
>
> | I think the way the current definition came about is something
> | like this: Before C89, some implementations had malloc(0) return a
> | unique pointer value (that couldn't safely be dereferenced), and
> | some had it return a null pointer. The former is arguably more
> | consistent with the behavior of malloc() for non-zero sizes, and
> | lets you distinguish between results of different malloc(0) calls; it
> | makes malloc(0) a convenient way to generate a pointer value that's
> | non-null, guaranteed to be unique, and consumes minimal resources.
> | The latter avoids the conceptual problems of zero-sized objects.
> | The ANSI C commmittee chose to allow either behavior, probably
> | to avoid breaking existing implementations; they also defined the
> | behavior of realloc() so it could deal consistently with either a
> | null pointer or a pointer to a zero-sized object.
> |
> | Personally, I think it would have been better to define the behavior
> | consistently and let implementations conform to what the standard
> | requires.
>
> What I meant by "unique pointer value" is a pointer value that's
> unique *for each call*.
>
> And something I missed: even an implementation that returns a
> unique (per-call) non-null pointer value for malloc(0) cannot do
> so arbitrarily many times. Each call (assuming nothing is free()d)
> consumes some resources, at least address space if not actual memory.
> Eventually there won't be enough left to allocate the bookkeeping
> information, and malloc(0) will fail and return a null pointer
> anyway.
>
> But if you want a series of unique address values, and you're not
> planning to dereference any of them, malloc(1) will serve the same
> purpose. [snip]
It might not serve it equally well. A malloc(0) call could
return pointers into non-writable (or even non-accessible)
memory, and malloc(1) can't do that. There is (in some cases)
value in having malloc(0) return a pointer other than NULL.
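The unique-token idea itself is easy to demonstrate; a minimal
sketch with malloc(1), where the pointers are only ever compared,
never dereferenced:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Each malloc(1) yields an address distinct from every other
       live allocation, so the pointers work as unique tokens. */
    void *tok_a = malloc(1);
    void *tok_b = malloc(1);
    if (tok_a == NULL || tok_b == NULL)
        return 1;

    printf("distinct: %s\n", tok_a != tok_b ? "yes" : "no");

    free(tok_a);
    free(tok_b);
    return 0;
}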
realloc() could invalidate the already-assigned values, and in theory
even fetching or comparing them would be UB; more practically I can
easily see examples where this results in duplicate addresses being
returned, thus not being unique as intended.
You need to allocate a new array 'chunk' without altering the old
one(s). You can probably just leak the old chunk(s), although
personally I would keep a linked-list just in case I should want to do
some debug checking or maybe statistics.
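A minimal sketch of that scheme (names invented for illustration);
chunks are chained and never realloc()ed, so handed-out addresses
stay valid:

#include <stdlib.h>

/* One chunk of token storage; chunks are chained so that earlier
   addresses are never moved or freed. */
struct chunk {
    struct chunk *next;
    size_t used, cap;
    char data[];            /* C99 flexible array member */
};

static struct chunk *head = NULL;

/* Hand out a unique, stable one-byte address.  Growing via realloc()
   could move the array and invalidate old addresses, so a fresh chunk
   is allocated instead and linked onto the list. */
void *unique_address(void)
{
    if (head == NULL || head->used == head->cap) {
        size_t cap = head ? head->cap * 2 : 64;
        struct chunk *c = malloc(sizeof *c + cap);
        if (c == NULL)
            return NULL;
        c->next = head;
        c->used = 0;
        c->cap = cap;
        head = c;
    }
    return &head->data[head->used++];
}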
Quite right, I should have realized that.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"