Also, I'd like it to be a full blown compiler, and not just an Ada
interpreter. I've heard Meridian mentioned, what's the current version and
pricing for it?
thanks for any help.
I think you are saying that you would like a full-blown DOS compiler
that generates 80x86 executable programs that run under DOS. This is a
VERY different statement from "a full-blown compiler, not just an Ada
interpreter."
First off, I should point out that there are compilers hosted on DOS
boxes that produce executable code for _other_ platforms. Aside from
the obvious bare-board-type targets (which are in general NOT running
DOS), Alsys produces (or used to, anyway) a DOS-hosted compiler which
cranks (cranked) out machine code for IBM-360-family mainframes. I
haven't scoured the compiler list for this, but undoubtedly there are
other examples.
If by "Ada interpreter" you mean Ada/Ed, either pure or in its GW-Ada/Ed
enhanced form, your meaning is half true. Ada/Ed is a full-blown Ada
compiler, supporting (almost) all of Ada 83, and validated some years ago.
It is hosted on many platforms, including DOS, Mac, and many flavors of
Unix boxes. It has all the earmarks, aspects, and phases of a "full-blown"
compiler; indeed, it _is_ one.
Ada/Ed's _target_ language is a hypothetical machine language in the
style of Pascal P-code. The hardware to execute this language directly
does not happen to exist, and so programs are executed by a simulator
for this hypothetical machine (which some might call an "interpreter").
Hypothetically, one could make silicon to execute this language. Not only
that, one could write an Ada/Ed code generator, working off the intermediate
form, that would produce 80x86 code.
I don't mean to criticize you unduly, or pretend that Ada/Ed is something
it is not, or be unduly pedantic. On the other hand, we serious computer
folks should try to get our terminology right. The fact that Ada/Ed's code
generator cranks out programs for a machine that is not implemented in
silicon does NOT make Ada/Ed any less a compiler.
To answer your question about Meridian, if you are a student, their
DOS-host, DOS-target compilers start at about $99 and are quite
mature, stable, respectable pieces of software. Meridian was taken
over by Verdix, which merged with Rational, so officially you are
dealing with Rational now. Give them a call.
Mike Feldman
------------------------------------------------------------------------
Michael B. Feldman - chair, SIGAda Education Working Group
Professor, Dept. of Electrical Engineering and Computer Science
The George Washington University - Washington, DC 20052 USA
202-994-5253 (voice) - 202-994-0227 (fax) - mfel...@seas.gwu.edu (Internet)
"Pork is all that stuff the government gives the other guys."
------------------------------------------------------------------------
Price depends on exactly what you're looking for and (for some
reason) who you are. Anyway, here is the contact information from
a current Alsys news post; if it is not the right place to call for
pricing info, they should be happy to give you the correct
number :-)
---
Alsys at (619) 457-2700
Seminar Location:
Alsys, Inc.
10251 Vista Sorrento Parkway
Suite 300
San Diego, California 92121
---
There are, of course, other vendors for the DOS environment. Does anybody
from R.&R. Software, AETECH, or Alsys (or someone who has used their
products) care to comment?
Oh, and of course, there's GNAT/DJGPP if one doesn't mind missing tasking and
a few other details for the moment.
----------------------------------------------------------------
Kevin J. Weise wei...@source.asset.com
COLSA Corporation Voice - (205) 922-1512 ext. 2115
6726 Odyssey Drive FAX - (205) 971-0002
Huntsville, AL 35806
{Standard Disclaimers about my opinions & my employer's opinions}
{... which are in conflict often enough}
----------------------------------------------------------------
"Admire those who seek the truth;
avoid those who find it." Marcel Proust
I completely disagree with all those who state that the Alsys compiler is
better than the Meridian compiler when developing PC-targeted programs on
DOS platforms (specific enough terminology, I hope).
I've been working in this specific area since 1985. I currently have four
different Alsys compilers and three Meridian compilers loaded on my PC.
Alsys has NEVER had a compiler that performed effective memory management
on the PC. Their runtime memory manager is DOS: memory is returned to the
system only when the program ends. Alsys keeps on allocating, and
allocating, and allocating...
Meridian, on the other hand, has an active runtime memory management
facility. As variables go out of scope, the memory is returned to the
free store for reuse later in the program. Meridian allocates, and frees,
and allocates, and frees, etc.
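To make the difference concrete, here is a minimal sketch of the kind of
allocation pattern that separates the two runtimes (the type names and
sizes below are my own invention, not from any real project):

   procedure Churn is
   begin
      --  Illustrative sketch only: allocate, use, and abandon a chunk of
      --  heap on every trip around the loop.
      for I in 1 .. 1_000 loop
         declare
            type Buffer     is array (1 .. 10_000) of Integer;
            type Buffer_Ptr is access Buffer;
            P : Buffer_Ptr := new Buffer;
         begin
            P (1) := I;    --  touch the storage briefly
         end;              --  the access type and its whole collection go
                           --  out of scope here
      end loop;
   end Churn;

Whether the storage actually comes back when the inner block is left is
implementation-dependent; the point above is that one runtime reclaims it
and the other, relying on DOS, does not, so under DOS a loop like this
eventually exhausts conventional memory.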
Alsys does not produce Microsoft-compatible object files. Meridian does.
Neither of these capabilities means much to a workstation developer, but
they are life-and-death to a programmer who attempts to write Ada software
that will function within the constraints of DOS.
Why use Alsys at all then? Alsys has much better error messages when
developing a system. Alsys has been around longer, and many clients
require the use of Alsys on projects. For anything I write where I have
a choice of compiler, I prefer to use Meridian.
OK. My flak jacket and helmet are on. Fire at will.
Wayne R. Lawton
Lawton Software, Inc.
>WADR, Mike, we ought to let the man have the benefit of some experience,
^^^^ new abbreviation for me. What's it mean?
>too. I've used Meridian 4.1.1 and 4.1.4 on DOS, and while it's fine for
>course assignments, I've been completely underwhelmed by its
>capabilities for "serious" programming tasks. If you want the 32-bit
>addressing, you need to get a DOS extender from them (don't remember
>which one).
As I recall, the extender is bundled.
> It seems to have serious problems with generics when used
>in other than trivial fashions (try compiling the Configuration
>Management Assistant, aka CMA, from STARS, for example). Now, before
>someone from Meridian/Verdix/Rational jumps on me, let me state that
>this was as of last year. We don't have a maintenance agreement, so I
>have no way of knowing if these things are still true.
Not much has changed.
>
>There are, of course, other vendors for the DOS environment. Does anybody
>from R.&R. Software, AETECH, or Alsys (or someone who has used their
>products) care to comment?
Compiler preferences tend to be kinda religious. Many factors play a
role, and robustness for large programs is only one factor. Most industry
folks seem to prefer Alsys; I've had enough experience with it to agree that
it is probably better for "serious" work. It is, in my experience,
rather slower at compile time, and Meridian comes with a reasonably
decent (better IMHO) set of libraries. As a "starter" compiler for a
guy wanting to plunk down a hundred bucks, Meridian wins in my book.
Alsys' student prices have dropped significantly; last I heard,
FirstAda (the 16-bitter, I think) was $144. It's been very difficult
to get onesies-type pricing and technical information consistently from
Alsys; their heart is in the right place but they are just not geared
up for retailing PC software. They're still project- or corporate-
oriented. Meridian's retailing of PC and Mac compilers has always been
quite good as Ada goes.
RR's Janus/Ada is packaged up in an IDE by AETech; the compiler is the
same. Janus has its following. I don't use it much, so I can't comment
on the robustness of recent versions for large programs. As a tasking
fan, I've always found Janus' single-priority, no-time-slicing tasking
model to be brain-damaged (even though it is legal).
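For anyone wondering what that means in practice, here's a tiny sketch
(task names and the loop bound are invented for illustration) of the sort
of program that shows the difference:

   with Text_IO;
   procedure Starve is

      task Spinner;                  --  both tasks run at the same (only) priority
      task Reporter;

      task body Spinner is
         N : Integer := 0;
      begin
         for I in 1 .. 10_000 loop   --  kept small to stay within 16-bit Integer
            N := N + 1;              --  pure computation, never blocks
         end loop;
      end Spinner;

      task body Reporter is
      begin
         Text_IO.Put_Line ("Reporter got the CPU");
      end Reporter;

   begin
      null;
   end Starve;

With time-slicing the two tasks interleave; with a single priority and no
time-slicing, whichever task is dispatched first simply runs until it
blocks or completes, so Reporter's output may not appear until Spinner is
done. Both behaviors are legal Ada, which is exactly the complaint.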
>
>Oh, and of course, there's GNAT/DJGPP if one doesn't mind missing tasking and
>a few other details for the moment.
Yes, I am using it fairly heavily at the moment. GNAT is a very nice piece
of work, as we all know. For those who don't know djgpp, this DOS port
of gcc is also nicely done, and lets you build a quite Unix-like environment
on your DOS box.
I hesitate to recommend the djgpp/GNAT pair to a newbie because getting it
all downloaded and installed is rather tedious and it isn't (yet) "packaged"
so as to make it as easy to work with as the others are. This is to be
expected at this stage; both djgpp and GNAT are works-in-progress, and both
have great promise and very devoted users (I am one).
My ten years' experience with Ada newbies (many of whom are also _computing_
newbies) tells me to wait a while before advising them to go with djgpp/GNAT.
Meantime, I stand by my recommendation to the original poster; for the money,
he can't go too far wrong with Meridian. Dozens - no, hundreds - of
my students have done OK with it, and stepped up to Alsys if they _really_
wanted to get serious.
(Blatant plug: there is, of course, GW-Ada/Ed, but my guess is that he has
already gotten past that. But maybe not. Wasn't clear from his post.)
BTW, I didn't mean to start a religious war, so please no flames. I'm
just ready for a different compiler at work. Can't get one for my home
computer; it's a Mac/SE (but I sure am coveting the PowerPC line).
----------------------------------------------------------------
Kevin J. Weise wei...@source.asset.com
COLSA Corporation Voice - (205) 922-1512 ext. 2115
6726 Odyssey Drive FAX - (205) 971-0002
Huntsville, AL 35806
{Standard Disclaimers about my opinions & my employer's opinions}
{... which are frequently in conflict}
Of course I had heartburn. It took 10 minutes to compile "HELLO WORLD"
Since then I have used Alsys and, frankly, it does hose up the memory
pretty good. Especially, we were trying to use files that were NFS
mounted to the PC. Extended memory is not a problem. It's the
first 1/2 a meg that gets used quickly.
Chad
>I haven't used a Meridian Compiler since college when it was on a 286.
>
>Of course I had heartburn. It took 10 minutes to compile "HELLO WORLD"
The perpetuation of these old statistics, and of emotional reactions to
rather primitive environments, unfortunately leaks out into industry
and government project offices, where they are cited as sufficient reason
to resist Ada. I think we need to correct the record and make sure current
statistics are being used.
_Naturally_ running an immature compiler on a 286 will cause "heartburn",
though I think you exaggerate. I ran Meridian on a 640K PC/AT (7 MHz) for
3 years (1987-90). Typical compilation time for any unit of up to several
hundred lines was a minute or two. Most of that time was taken up by HD
swapping, as I could see from the continuously blinking HD
activity light. The result was that, for typical class-size programs,
compilation time was approximately _constant_ (because it was I/O
bound).
By 1991 I stepped up to an 8 meg 386-33, which, BTW, I still use as
my home DOS machine. Typical compilation times are down to a few seconds,
faster still if I take the trouble to set up a RAM disk for the
temporary files.
With current Intel machines, suitably configured (RAM disk, etc.),
compilation times with _any_ Ada compiler should be well within
humanly-acceptable ranges, roughly indistinguishable from times
given by C or C++ or Pascal compilers.
Indeed, few users of GW-Ada/Ed have complained about the speed of
compilation (using current PC's), even though Ada/Ed was never designed
for speed and is noticeably slower than commercial compilers.
Given Ada's style of checking package interfaces at compilation time,
it's not surprising that _any_ compiler will spend a lot of its
time chasing library files on disk.
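Even the trivial case illustrates this (nothing vendor-specific here, just
plain Ada 83):

   with Text_IO;
   procedure Hello is
   begin
      Text_IO.Put_Line ("Hello");
   end Hello;

Before it can check the call to Put_Line, the compiler has to find and read
the compiled spec of Text_IO out of the program library on disk; on an old
PC with a slow HD, that library traffic - not code generation - is where
the seconds go.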
We've gotten so impatient that we sometimes confuse ten seconds with
ten minutes.
>
>Since then I have used Alsys and, frankly, it does hose up the memory
>pretty good. Especially, we were trying to use files that were NFS
>mounted to the PC. Extended memory is not a problem. It's the
>first 1/2 a meg that gets used quickly.
>
No disagreement here. But anyone using a current Meridian compiler
on a current PC would discover similar behavior. If your machine is
reasonably up to date, compilation time is not a significant factor
in choosing a DOS compiler. I'm satisfied with my 386-33, even.
The 486-50 in my office seems astoundingly fast to me, by comparison.
Let's be careful not to perpetuate outdated myths, folks. OK?
Managers may not be able to read between the lines...:-)
OK.
>
>BTW, I didn't mean to start a religious war, so please no flames. I'm
>just ready for a different compiler at work. Can't get one for my home
>computer; it's a Mac/SE (but I sure am coveting the PowerPC line).
No flames intended. Obviously different folks prefer different compilers.
IMHO the best-packaged DOS compiler is Meridian's, and the most
"industrial-strength" is Alsys'. You didn't say what the primary use
would be, so the choice is yours.
Unfortunately GW-Ada/Ed-Mac won't run on your SE; it needs System 7
and a heftier processor - we think the 020's are OK, but have not
tested with them.
Meridian makes a pretty well-packaged Mac compiler too; it's bundled
with MPW 3.2 and a Toolbox binding. It _will_ run on your SE - I used
to run it on a Mac+.
Of course, it'll be kinda slow because, in current terms, the SE is.
Runs fine on my IIci.
Similarly, you certainly *can* generate Microsoft-compatible object modules
(the standard Alsys compiler uses the Microsoft linker!).
>_Naturally_ running an immature compiler on a 286 will cause "heartburn",
I wonder why we think this so "natural". Suppose the first version
of a new motor car got 2 miles per gallon. Would we think this
"natural", and happily waste petrol until the arrival of V2, with
an order of magnitude improvement?
If a compiler takes 10 minutes (or 10/exaggeration) to compile "Hello
World", then the QA department should have vetoed its release. Nuff
said.
Ah, the fallacy of argument by analogy.
Mr. Firth, what do you think the mileage was for the _first_ motor car?
Ok, so Ada was not the first computer language. Bad analogy again.
Perhaps Ada is to programming languages as the _electric_ car is to the
automobile. Because of California law, there _will_ be commercial electric
cars in America within several years. The car manufacturers will have had
nearly a decade to develop these cars by the time they appear. I do NOT
expect them to necessarily be within an order of magnitude of the "goodness"
of a petrol car. Building an electric car is a paradigm shift, and it will
take years, possibly decades, to "perfect" these cars along with their
necessary infrastructure (how do you refuel, for example?).
I must agree with the _Naturally_ in Mr. Feldman's post.
Ron Sercely
Full language COBOL/74 is certainly on a par with Ada/83 from a complexity
point of view. Realia COBOL compiling on a 286 would comfortably exceed
10,000 lines/minute generating highly efficient code.
So far NO Ada compilers have really been fast by those kind of standards,
and there is nothing in the language that explains this. Rather it is
simply that super-fast compilation has clearly not been a priority item.
GNAT is for example pretty slow by these standards, not surprising given
the nature of the highly optimizing backend which is also highly
portable--a valuable feature, but probably not one that is compatible
with really high speed.
On the other hand super fast compilation is NOT so important these days with
much faster machines. GNAT running on a slow Pentium (60 MHz) compiles
hello world in about one second, and binds and links it in about three
seconds, which is fast enough for most purposes, and machines are
continuing to get faster and cheaper by the month.
[deletia]
>
>So far NO Ada compilers have really been fast by those kind of standards,
>and there is nothing in the language that explains this. Rather it is
>simply that super-fast compilation has clearly not been a priority item.
Clearly.
Robert, when guys like me have tended to pin blame for this on unimaginative
vendors, or rather vendors with no market mentality, you've tended to
defend the vendors' having made cold business decisions on stuff like
this. The vendors themselves have tended to blame it on the complexities
of validation.
Without putting you or anyone else on the defensive, I'd like to ask
whether you think that really high performance, on 286's and the like,
would have been compatible with the other constraints on Ada vendors?
Do you think that performance like that would have opened up bigger
markets for Alsys, Meridian, and RR?
>
>GNAT is for example pretty slow by these standards, not surprising given
>the nature of the highly optimizing backend which is also highly
>portable--a valuable feature, but probably not one that is compatible
>with really high speed.
>
>On the other hand super fast compilation is NOT so important these days with
>much faster machines. GNAT running on a slow Pentium (60 MHz) compiles
>hello world in about one second, and binds and links it in about three
>seconds, which is fast enough for most purposes, and machines are
>continuing to get faster and cheaper by the month.
>
GNAT is not too much slower than that on my old 386-33, and that's
without a RAM disk. Eliminating the disk activity would speed it up
even more. GNAT is a nice piece of work, abetted by (as you said)
quite-good-enough hardware.
Somehow a compiler is never fast enough to satisfy some people.
Or rather, compiler speed is too-often used by people to trash Ada,
when in fact it CANNOT make much of a difference to them whether
that program compiles in 1 sec or in 5.
: >So far NO Ada compilers have really been fast by those kind of standards,
: >and there is nothing in the language that explains this. Rather it is
: >simply that super-fast compilation has clearly not been a priority item.
: Robert, when guys like me have tended to pin blame for this on unimaginative
: vendors, or rather vendors with no market mentality, you've tended to
: defend the vendors' having made cold business decisions on stuff like
: this.
I might interject that some vendors have opted to place developmental
priority on smart/optimal recompilation technologies. It is
satisfying to add a new declaration to an oft-used package _spec_, use
that declaration in a unit, and watch what happens when you rebuild
your entire application -- only the unit using the new declaration is
recompiled. Even with a comparatively slower "raw"/batch compilation
speed, this type of recompilation wins hands-down in many situations,
since it simply doesn't have to recompile the world. (Sure, having
optimal recompilation, fast batch-mode compiles, and peace on earth
would be a great thing...)
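A minimal sketch of that situation (the package and procedure names are
mine, purely for illustration):

   package Counters is
      procedure Increment;
      procedure Reset;             --  the newly added declaration
   end Counters;

   package body Counters is
      Count : Natural := 0;
      procedure Increment is
      begin
         Count := Count + 1;
      end Increment;
      procedure Reset is
      begin
         Count := 0;
      end Reset;
   end Counters;

   with Counters;
   procedure Uses_Reset is         --  names Reset, so it must be recompiled
   begin
      Counters.Reset;
   end Uses_Reset;

   with Counters;
   procedure Uses_Increment is     --  untouched by the change, ideally
   begin
      Counters.Increment;
   end Uses_Increment;

With a conventional compiler, adding Reset to the spec obsoletes every
client of Counters; with optimal recompilation, only the spec, its body,
and Uses_Reset need to be redone (at least in this simple case where
everything is referenced with qualified names).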
Clearly, optimal recompilation is biased towards large systems development,
where recompiling an entire system would be a big deal... probably
not a reasonable approach to take for 286 based systems ;-)
I intend no slight towards GNAT, of course. I merely wanted to
remind folks that there are more ways of looking at compile performance
than batch-mode compilation speed.
: Somehow a compiler is never fast enough to satisfy some people.
: Or rather, compiler speed is too-often used by people to trash Ada,
: when in fact it CANNOT make much of a difference to them whether
: that program compiles in 1 sec or in 5.
Hear, hear.
.Bob.
--
Bob Kitzberger
Rational Software Corp. 10565 Brunswick Rd. #11, Grass Valley, CA 95945 USA
--
--The preceding opinions do not necessarily reflect the opinions of
--The MITRE Corporation or its sponsors.
-- "It is the fashion these days to make war, and presumably it will last
-- a while yet." Frederick the Great of Prussia, writing to Voltaire, 1742.
-------------------------------------------------------------------------
Franco Gasperoni and his group at ENST in Paris have been designing a
smart recompilation system for GNAT, called SGNAT, which will be fully
described in a paper at Tri-Ada (as part of the GNAT technical track
which will have five other papers as well on technical aspects of GNAT).
Whether smart compilation is worthwhile in the abstract is an interesting
question. The trouble is that smart compilation involves some fairly
complex processing which may be inappropriate in a fast compiler, so
there is an interesting trade-off.
I must say that using the Realia COBOL compiler on a fast PC, which compiles
over 100,000 lines/minute, I never felt the need for fast recompilation.
Even a 2 million line program, taking 20 minutes to compile, is not that
uncomfortable (on many systems just linking a program this big can take
a long time, and that is often not helped by smart recompilation).
Another interesting factor in compile time is to think about multiprocessors,
which are fast becoming a low-end desktop reality. NT already comfortably
supports multiple processors, and OS/2 will in a month or two (Solaris may
also support MP on a PC, not sure). Workstations supporting MP are also
common, though still expensive.
I would guess that by this time next year, you will be able to plunk down
$3000 or so, and get a PC box with four 100 MHz Pentiums, and that
represents a *lot* of compilation power.
Consequently, another thing to evaluate in Ada compilers is how easily they
can be used in MP environments for the compilation of large programs. GNAT
is particularly comfortable in this regard, because the complete lack of
compilation order requirements means that any processor can compile any
piece of the system at any time -- you don't need some elaborate parallel
make program to determine a network of allowed compilation order
dependencies.
Incidentally it is not true that if you add a declaration to a package,
you only need to recompile clients that use this new declaration, or at
least if that *is* true in the Rational system, it has a serious bug,
which I doubt. Smart recompilation is more complex than that, because
you also have to check for the introduction of illegalities.
Suppose you have a client X which accesses entity Y in package Z (entity
Y is use visible to X). X also with's and use's package W which currently
does not contain a Y. Now the introduction of a Y into package W means
that X not only needs recompiling, but fixing first, since the two
Y's now hide one another. Any smart recompilation system that is correct
must take this into account, which makes things a little trickier than
you might hope.
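In code, the scenario looks something like this (the Integer declarations
are just to make the sketch compilable; the unit names follow the post):

   package Z is
      Y : Integer := 0;
   end Z;

   package W is
      --  originally W declares no Y; adding the next line is the change
      Y : Integer := 0;
   end W;

   with Z, W;  use Z, W;
   procedure X is
   begin
      Y := 1;   --  fine while only Z declares Y; once W also declares one,
                --  the two use-visible Y's cancel each other out, and this
                --  line must be rewritten (e.g. as Z.Y := 1) before X will
                --  even compile
   end X;

So X needs fixing, not just recompiling, which is the point.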
Robert Dewar
>A contractor who shall remain nameless (to protect the guilty :-) once
>wrote code using a strict functional decomposition methodology,
>supported by Subunits. So their compilation units looked something
>like:
> separate
> (Main.level1.level2.level3.level4.level5.level6)
> procedure Do_Something (params...)
> is
> procedure First (params...) is separate;
> procedure Second (params...) is separate;
> procedure Third (params...) is separate;
> begin
> First (params);
> Second (params);
> Third (params);
> end Do_Something;
>The compiler basically thrashed trying to keep track of all of the
>subunit context, and performance on this code was terrible...
Well, I don't think we're the guilty party, but we (me and some of my
buddies, NOT the Boeing Company) have used and advocated the use of
hierarchical decomposition and SEPARATEs (not necessarily at the same
time or for the same reason). Would you please explain a couple of
things?
(a) Why does a "strict functional decomposition methodology" lead to
bad code versus other kinds of decomposition? Are you asserting that
hierarchical decomposition in general (versus a flat design with a
million elements at the same level) leads to this, or is there
something special about functional decomposition?
I'm assuming that you believe that the example above is bad code,
and that the procedure body is precisely as shown without any additional
glue logic. Otherwise, it's a straw man, and what's your point?
(b) Why on earth should the "performance on this code [be] terrible"?
Are you referring to the performance of the compiler on the code or
the performance of the code in execution? If it's the former, then
it's certainly a shame that the compiler vendor didn't allow for
capturing Ada's language features a little better.
If it's the latter, though, I'm mystified. Nothing in my experience
indicates that executing subprograms whose bodies aren't in the same
compilation unit causes the kind of drop in performance that would be
called "terrible". Would you explain this, please?
I guess I'm just a little tired of seeing the reflexive curl of the
lip when the phrase "functional decomposition" is uttered, and I
would really like somebody to explain to me why functional decomposition
is bad in EVERY part of the analysis process for EVERY domain.
Any volunteers?
+-------------------------------+--------------------------------------+
| Bob Crispen | Who will babysit the babysitters? |
| cri...@foxy.hv.boeing.com +--------------------------------------+
| (205) 461-3296 |Opinions expressed here are mine alone|
+-------------------------------+--------------------------------------+
In this case, the guilty party's methodology, as they then applied it
to Ada, produced this ugly code.
I'm not convinced that functional decomposition (FD) is the best
methodology, but I certainly don't assert that all FD methodologies
produce terrible Ada.
>(b) Why on earth should the "performance on this code [be] terrible"?
>Are you referring to the performance of the compiler on the code or
>the performance of the code in execution? If it's the former, then
>it's certainly a shame that the compiler vendor didn't allow for
>capturing Ada's language features a little better.
The compile-time performance was terrible. Consider what the compiler
must do. The Ada semantics require that the compiler construct the
complete context of the enclosing scope for the subunit, i.e.
everything in the parent of the subunit must be visible just as if the
subroutine were defined in-line. Besides being hard on the compiler,
it's also bad design, as there's little or no effective encapsulation,
since everything in the parent is directly visible in the subunit.
And it's hard for the compiler to do a really good job of code
generation and optimization across compilation units with subunits,
when compared to 'in-line' code.
>I would really like somebody to explain to me why functional
>decomposition is bad in EVERY part of the analysis process for EVERY
>domain.
I would not claim that FD is bad in every circumstance. I'm not real
fond of it personally. It doesn't match my way of thinking, and in
particular it tends to lead one away from reusable solutions/objects.
But my observation is that 'real programmers' modify the theoretical
approach to include ways to identify reusable stuff, etc.
>I'm not convinced that functional decomposition (FD) is the best
>methodology, but I certainly don't assert that all FD methodologies
>produce terrible Ada.
But the code clearly violated the rules of hierarchical functional
decomposition. One of those rules (at least as I was taught them)
is that lower levels of the hierarchy do not have visibility of the
higher levels. So the Ada "is separate" simply does not map onto
the hierarchy correctly.
The correct mapping is to declare the lower levels as packages or
subprograms in their own right, and pass information down to them
via the parameter lists.
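Something like this, in other words (the names are placeholders carried
over from the earlier example, and the Integer parameter is invented just
to show information flowing through the parameter list):

   procedure First (Count : in out Integer);        --  library unit spec

   procedure First (Count : in out Integer) is      --  library unit body
   begin
      Count := Count + 1;                           --  placeholder work
   end First;

   with First;
   procedure Do_Something (Count : in out Integer) is
   begin
      First (Count);     --  the lower level sees only what it is passed
   end Do_Something;

The lower-level unit has no view of its caller's internals, which is what
the hierarchy rules intend.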
>The compile-time performance was terrible. Consider what the compiler
>must do. The Ada semantics require that the compiler construct the
>complete context of the enclosing scope for the subunit, i.e.
>everything in the parent of the subunit must be visible just as if the
>subroutine were defined in-line.
That's indeed what the compiler has to do. But your restructuring
of the code defines the subprograms in line, which by your own
argument requires the same amount of work by the compiler.
The reason for the poor performance is not that the compiler
has to do more work - it doesn't - but that most compilers
do not do the work efficiently: they rebuild the context all
over again for each "is separate" subunit, rather than having
the sense to build it once and then compile the subunits
sequentially.
Given a little intelligence in bulk recompilation, the "is separate"
style becomes generally more efficient, since if only one subunit
is changed the others need not be recompiled, whereas with all
subroutines inline in the same package, evidently all must be
recompiled if any changes. But I agree that making each lower
level function a first-class unit is most efficient of all.
>it's also bad design, as there's little or no effective encapsulation,
>since everything in the parent is directly visible in the subunit.
Agreed. And that's also why I don't think it's proper functional
decomposition. Maybe the error was in choosing the wrong language
construct to map down to.
Let me even argue the following proposition - it is possible that a compiler
can be too fast. It has been my experience that students will use a "fast
compiler" to replace "thinking" when writing code. Debugging by mutation is
encouraged when it is too easy to modify/test code quickly. A famous study in
the late 60s showed that students who used a batch system uniformly beat
students who used a timeshare system, in net time to complete a project.
Maybe all Ada compilers should ensure compilations take some minimal amount
of time, to ensure programmers make time to think about their changes (if not
before the compilation, at least during it :)
Rich
--
------------------------------------------------------------------------------
Richard E. Pattis "Programming languages are like
Department of Computer Science pizzas - they come in only "too"
and Engineering sizes: too big and too small."
>Maybe all Ada compilers should ensure compilations take some minimal amount
>of time, to ensure programmers make time to think about their changes (if not
>before the compilation, at least during it :)
Good idea, Rich, though, I'm sure that compiler authors are probably
meeting your minimal standard without our having to take any action.
Many of them are sufficiently pessimized already. :-)
I guess I haven't mentioned before that this is already an undocumented
feature of GW-Ada/Ed...:-)
This is more true than you know. When we first got a bunch of hp700
machines we noticed that "make" no longer worked - it thought it still
needed to update a target even if the source had the same timestamp as
the target (the source should have to be newer than the target to trigger
an update). It no longer compared "<", it was now "<=". When I asked HP,
the response I got blew me away. The new hp700s, you see, were SO fast
that when they compiled their kernel, some units took less than 1 sec to
compile (which happened to be the smallest unit of time they kept track
of), so in order to fix the "problem" they modified make. When I asked
why they didn't just enforce that all compiles take at least one second,
I just got a blank look. So now we just use GNU make...
Sigh.
[This is KPR Number: 4701027375, I have no idea if it's fixed yet or not]
--
Richard G. Hash email: r...@shell.com
Geophysics Research phone: (713) 245-7311
Shell Development Company, Bellaire Research Center