So keep the faith. Modula-2 is a fantastic language for large commercial
applications, where the product will go through endless revision. The
separation of the DEFINITION and IMPLEMENTATION parts is a very helpful
discipline, and just from having to type the import lists one unconsciously
reduces module cross-linkages, which in the end are the source of most
complexity in products. Frankly, the inverted module arrangement that much
object-oriented programming imposes obscures the sequence of processing, and has proven
at Netscape, Apple and many other companies to be a disaster in terms of
reliability and productivity. The claims that some of the newer language
technologies offer improved productivity are demonstrably false; projects
have never been later, or more riddled with bugs.
I am sure I have baited the Java lovers sufficiently, so I will sign off
now.
What kind of bit twiddling?
I ask since our compiler has a full complement of bitwise operators
(BAND, BOR, BXOR, BNOT, SHR, SAR, SHL, ROL, ROR) which operate on all integer
types.
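Since the thread never spells out what these operators do, here is a sketch in Python of the usual fixed-width semantics behind names like SHR, SAR and ROL (the 32-bit width and the exact behaviour of Stony Brook's operators are assumptions here; the compiler manual is authoritative):

```python
# Models of fixed-width bitwise operations on a 32-bit word.
# BAND/BOR/BXOR map directly onto Python's &, |, ^; the shifts and
# rotates need explicit masking because Python ints are unbounded.
WIDTH = 32
MASK = (1 << WIDTH) - 1

def shr(x, n):
    """SHR: logical shift right -- vacated high bits are filled with 0."""
    return (x & MASK) >> n

def sar(x, n):
    """SAR: arithmetic shift right -- the sign bit is replicated."""
    x &= MASK
    if x & (1 << (WIDTH - 1)):          # negative in two's complement
        return ((x >> n) | (MASK << (WIDTH - n))) & MASK
    return x >> n

def rol(x, n):
    """ROL: rotate left -- bits shifted out on the left re-enter on the right."""
    x &= MASK
    n %= WIDTH
    return ((x << n) | (x >> (WIDTH - n))) & MASK

assert shr(0x80000000, 4) == 0x08000000   # zero-filled
assert sar(0x80000000, 4) == 0xF8000000   # sign-extended
assert rol(0x80000001, 1) == 0x00000003   # top bit wraps around
```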
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Edward de Jong" <edw...@magicmouse.com> wrote in message
news:B91C662E.14D6%edw...@magicmouse.com...
I think anyone with significant experience in commercial
software would know that there are long periods in which
it is critical to subject interface info to relatively
strong change control while simultaneously making
implementations easy for individual developers to make
massive changes to. I don't think there
is any change control mechanism that can apply different
rules to different parts of the same file, and I think it
would be a nightmare if somebody actually tried to build such
a beast.
I think C and C++ would have fallen out of favor long ago
solely for this reason, were it not for the fact that
you can use the header file idiom, even if the language
provides no support or error checking for it. But the
C++ classes apparently insist on interspersing the two
kinds of info, so maybe this will force the issue as
people use the classes more.
--
Rodney M. Bates
Effectively we have created a virtual machine that we program to, and it is
unfortunate that other companies have kept their similar systems so secret
that most programmers are unaware of this powerful strategy; certainly many
of the cross-platform commercial products we see use an identical strategy.
As to Norman's question about bit twiddling: although Stony Brook Modula-2
has OR, SHIFT, and ROTATE instructions, it is no match for handcrafted macro
assembler code when you want to squeeze the last ounce of performance from
the processor in graphics tasks. By carefully controlling register loads,
and using the upper and lower halves of registers, I can more than double
the throughput of BitBlt operations. Because Windows doesn't support a
masked blt across all its versions, you must write your own, and
there are many other instances of CPU-intensive tasks that demand
the fastest implementation to give our application that extra snap, crackle,
pop. Also, the Microsoft Macro Assembler is an incredibly powerful macro
processing tool; we have a few thousand lines of Assembler, and although the
effort to write Assembler is at least 10 times that of Modula-2 (because the
slightest register overwrite can cause disastrous results), we find it
easier to code the bitmap manipulation operations in Assembler.
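The masked blt mentioned above combines source and destination pixels under a control mask. As an illustration only (the names and the byte-at-a-time loop are mine; the hand-tuned assembler version would process a full register width per iteration), the combining rule can be sketched in Python:

```python
def masked_blt(dst, src, mask):
    """Where a mask byte is 0xFF take the source byte, where it is 0x00
    keep the destination byte; partial masks blend bit by bit."""
    for i in range(len(dst)):
        dst[i] = (dst[i] & ~mask[i] & 0xFF) | (src[i] & mask[i])

dst  = bytearray([0x00, 0xFF, 0xAA])
src  = bytearray([0xFF, 0x00, 0x55])
mask = bytearray([0xFF, 0x00, 0x0F])
masked_blt(dst, src, mask)
# dst is now [0xFF, 0xFF, 0xA5]: byte 0 taken from src, byte 1 kept,
# byte 2 blended (high nibble from dst, low nibble from src)
```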
> Except for Modula-2, Modula-3, and Ada, language designers
> seem to be abandoning the separation of interfaces, complaining
> about redundancy and touting filters that excerpt interface
> information out of a combined module.
I think that working with combined files is much easier:
Only half the number of open text windows, no cut and paste
to duplicate procedure headers in the implementation file
etc. etc.
> I think anyone with significant experience in commercial
> software would know that there are long periods in which
> it is critical to subject interface info to relatively
> strong change control while simultaneously making
> implementations easy for individual developers to make
> massive changes to.
No doubt about that.
> I don't think there
> is any change control mechanism that can apply different
> rules to different parts of the same file, and I think it
> would be nightmare if somebody actually tried to build such
> a beast.
So do I.
But there are more elegant solutions to this problem:
The tool set that extracts definitions from implementations can
also verify that the result conforms to the definition file
stored in the version control system.
The most practical solution would be a knowledgeable editor
that prevents modifications to the implementation that do
not conform to the existing definition file. Checks made
by the compiler are post festum, too late to prevent the
mistake.
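The verification step described above can be sketched quite simply. This is a minimal illustration only (the regex-level notion of a procedure header is mine; real Modula-2 needs a proper parser, and a real tool would also diff types and constants):

```python
import re

# Matches lines like "PROCEDURE Add(a, b: INTEGER): INTEGER;"
HEADER = re.compile(r'^\s*PROCEDURE\s+(\w+)\s*(\([^)]*\)[^;]*);', re.M)

def headers(text):
    """Map procedure name -> signature text for every header in a module."""
    return {name: sig.strip() for name, sig in HEADER.findall(text)}

def conforms(def_text, mod_text):
    """Every header in the DEFINITION module must appear, unchanged, in the
    IMPLEMENTATION module; extra private procedures in the .MOD are allowed."""
    def_h, mod_h = headers(def_text), headers(mod_text)
    return all(mod_h.get(name) == sig for name, sig in def_h.items())

DEF = "DEFINITION MODULE M;\nPROCEDURE Add(a, b: INTEGER): INTEGER;\nEND M."
MOD = ("IMPLEMENTATION MODULE M;\n"
       "PROCEDURE Add(a, b: INTEGER): INTEGER;\n"
       "BEGIN RETURN a + b END Add;\n"
       "END M.")
assert conforms(DEF, MOD)
assert not conforms(DEF, MOD.replace("a, b: INTEGER", "a: INTEGER"))
```

A version-control hook could run such a check at check-in time, rejecting implementations that silently drift from the locked definition.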
Hans Ellenberger
> Dear Fellow Modula-2 programmers, just to keep your spirits up, we have
> passed the 400 thousand copy mark for our Discus CD labeling product. It is
> a cross-platform title for both Macintosh and Windows operating systems,
> runs with instant language switching in any of 15 languages, and is written
> 98% in Modula-2 (the Windows version has a small amount of Assembly language
> for critical bit-twiddling). About 100,000 lines of code, it is well on its
> way to becoming a million selling title.
I think that there are other successful products coded in Modula-2
where the company prefers not to tell the public...
> [...] and just from having to type the import lists one unconsciously
> reduces module cross-linkages, which in the end are the source of most
> complexity in products.
The competent programmer will always spend some thought before
importing anything.
However, I have seen quite a few Modula-2 programs where the import
section obviously contained the whole language library and all
modules of the project: FROM xxx IMPORT aaa,bbb,ccc ... ...
Some smart guy obviously first made a fully universal module body,
usable for the rest of his life :-(
So once more I think that the language design does not really
educate programmers. The competent craftsman will make a clean
design in the ugliest of all languages...
> The claims that some of the newer language
> technologies offer improved productivity are demonstrably false; projects
> have never been later, or more riddled with bugs.
> I am sure I have baited the Java lovers sufficiently, [...]
I'm not a Java lover, but compared to C/C++ the average programmer
probably is more productive with Java because it is more restrictive
and prevents a good part of the ugly C/C++ tricks.
Hans Ellenberger
Hmm, I use Free Pascal, which as a BP/Delphi derivative uses
interface/implementation in one file, but I actually like the separation more.
I like separation as well, and there's no reason why a VCS couldn't
check implementation module consistency against a locked definition
module whenever a file is checked in.
However, IMO an extra level of abstraction is valuable so that multiple
modules can be defined against a common template (e.g. for device
drivers); OO programming effectively fills this niche.
Out of curiosity, where do Free Pascal, GNU Pascal, Megido and Sybil
stand relative to each other these days?
-- Mark Morgan Lloyd
-- mark...@cix.co.uk
--
-- [Opinions above are the author's, not those of his employers or
colleagues]
Agree.
> Out of curiosity, where do Free Pascal, GNU Pascal, Megido and Sybil
> stand relative to each other these days?
note: I'm slightly FPC biased for some minor reasons (like being an FPC
core member).
Megido: dead (9 months after start or so). However a (quite early) spin off
called Lazarus is still alive and kicking: lazarus.freepascal.org
(see below)
Sybil: Never heard of it. Isn't that from the maker of Speed Pascal? If so,
afaik dead for years.
VP (Virtual Pascal) is formally also alive, but apart from "I'm still alive, but
have no time due to moving" messages I don't see much output. OTOH Alan could
silently be doing major internal restructuring that will sooner or later surface.
Delphi: Borland will come with .NET versions of Delphi in the fall, but whether
this is just a hype thing, a strategic change, or simply the creation of
another backend is still in the dark.
Kylix seems to be a modest success, but doesn't generate the large scale
revolution in Linux desktop usage that Borland (might have) hoped.
GPC and FPC: Alive, but totally separated. Not much contact except some
friendly sharing of tests.
GPC is wrestling with the migration to gcc 3.x, and I don't see much new
functionality. They do a lot of minor bugfixing though. OTOH they still don't
support elementary things like qualified identifiers.
---
FPC is moving along slowly but surely. Compiler work has slowed down somewhat
over the last two years (the "new major feature every other week" period is
history).
Still, the development branch is nearly entirely Delphi language compatible,
and the code generator has been fundamentally rewritten; probably this
summer, implementing support for non-x86/m68k processors can finally start,
and I expect the first ones to become reasonably stable even before 2003.
Some other things scheduled for the next version are a threadsafe RTL
and PIC code generation, integrated or strapped on.
Also, Peter surprised us last week: the internal linker is starting to work
for Go32V2.
Over the last year the number of OS targets exploded, and now covers most x86
platforms: FreeBSD, NetBSD, Solaris, QNX, BeOS, WDosX (a DOS extender with
DLL support).
Carl ported the stable 1.0.x series to support m68k, and that
seems to work.
Rudimentary Linux/m68k and PalmOS support is also in place.
The textmode IDE is also gradually improving, and we are on the brink of
switching from the old Turbo Vision code (based on copyrighted Borland code)
to our own free version (based on sources from Leon de Boer).
GDB is linked in for debugging. Pascal (and Wirthian languages in general)
support in recent GDBs has improved significantly (an FPC core member works
on it :-)
--------
FPC vs GPC:
Some people seem to consider GPC a failure. I used to too, but changed my
mind. They are IMHO simply understaffed, and it seems that making a GCC
extension is an awfully large amount of work. GPC is basically two people:
one (Peter) doing strictly fundamental compiler work, the other (Frank)
doing the runtime library, user support (mailing list), small compiler work,
and making the build process easier and more robust, plus bug triaging.
For an M2 gcc port, I'd recommend having at least 4 core compiler people, or
forget about it, and then a bunch of people that handle the platforms and do
runtime library work. Otherwise, if one person starts a traineeship, gets
married, or buys a house, the compiler work comes to a halt.
Changing FPC to support M2 should also be possible. The scanner/parser is
highly modularised, and afaik besides that there shouldn't be a real
difference except the RTL?
Anyway, enough ranting, on with the work. I've a very obnoxious bug to find
in the FreeBSD port.
This structure also makes it VERY CLEAR to the coder, which changes
could impact the whole project.
Besides the important version control and admin advantages, we also
have (rare, but important, as in Edward's case) situations where the
.DEF starts life associated with a .MOD file, and then this changes
to a .ASM file.
This can also apply, where the .DEF is for a DLL, or C coded file.
Our compiler does not need to create ASM output, but we keep this feature
as an important user aid. ASM is created as clear as possible, with
compiler source as comments. I have seen others obfuscate the ASM files,
due to some misplaced paranoia.
-jg
--
======= 80x51 Tools & IP Specialists =========
= http://www.DesignTools.co.nz
This should come as no surprise; it's Not Free Software, and has to
compete in mindshare with things that are.
People are going to be reluctant to leap into trusting Borland for
this when there's the history of Corel's abortive attempt to get into
the Linux biz. People barely got the opportunity to get excited about the
WP Office stuff, using the Nifty Corel Tools, when it all went down
the tubes.
What do you do for systems that you need to plan to have running for
ten years? GCC has been around longer than that.
Will Kylix still run on Red Hat 10.2 in 2005? Consider: I can't
install Corel WPO2K on the distributions available in 2002, and it's a
mere 2 years old.
What concrete reasoning would cause people to be more willing to trust
Borland than Corel?
--
(reverse (concatenate 'string "ac.notelrac.teneerf@" "454aa"))
http://www.ntlug.org/~cbbrowne/pascal.html
"Computers double in speed every 18 months or so, so any "exponential
time" problem can be solved in linear time by waiting the requisite
number of months for the problem to become solvable in one month and
then starting the computation." -- pr...@Sunburn.Stanford.EDU
Edward de Jong wrote:
> Dear Fellow Modula-2 programmers, just to keep your spirits up, we have
> passed the 400 thousand copy mark for our Discus CD labeling product. It is
> a cross-platform title for both Macintosh and Windows operating systems,
> runs with instant language switching in any of 15 languages, and is written
> 98% in Modula-2 (the Windows version has a small amount of Assembly language
> for critical bit-twiddling). About 100,000 lines of code, it is well on its
> way to becoming a million selling title.
Well, congratulations on this ..
>
> So keep the faith. Modula-2 is a fantastic language for large commercial
> applications, where the product will go through endless revision.
And which Modula should I use on Linux? Dear Sirs, this is such a
bunch of hollow sugar words. Let us see the reality!
1. Logitech M2 had an excellent IDE on DOS but passed away with Win32.
2. Stony Brook runs only on Win32. It is good but lacks a good IDE (v3 has
no help and not even highlighting in the source code!!!) and its
framework for mixed-language programming is at too low a level (not in the
IDE but with non-standard pre-compiler options).
3. XDS was better but became greedy when it moved out of Russia, moved
to the Java camp and left its product where it was: half made and
unsupported.
4. Mocka is local to its university.
5. GPM (Gardens Point Modula) stayed at the Oct 10, 1996 or 1997 stage
(after Mr. Ledermann left), with no support for shared objects, with
no IDE, and with no answers to direct questions.
Is there anything else that I missed ?
Dear sirs, after 15 years of intense use of M2 in industry and power
production/transmission (more than 300 high-budget projects with
products of more than 100,000 M2 lines each on embedded and PC
platforms) I (we) moved to the C++ camp. Not easily, and especially
unwillingly, I must say! But, dear sirs, this is just the reality:
the crappy M$ C++ and C# implementations rule this world, and despite the
DEF/MOD feature, nowadays Modula-2 is completely useless!
Good morning, and have a good day !
p.s.: Oberon is from Mr. Wirth too. It is better than M2 but useless. M3
is good but it is only a C++ translation with Pascal keywords, so C++ is
better. Not much to stick with, guys, hah!
Iztok Kobal wrote:
<snip>
I agree with most of what you have written, except the switch to
C/C++. There are other alternatives. When I moved away from Modula, I
switched to Ada, which has many advantages equal to Modula's, and it's far
better than C and its descendants.
Still, that applies to most developers of the OS systems, but not necessarily
to their users.
I didn't expect people to recode Gnome in Kylix, but I did expect a waterfall
of small tools and apps made with the OE.
> People are going to be reluctant to leap into trusting Borland for
> this when there's the history of Corel's abortive attempt to get into
> the Linux biz. People barely got opportunity to get excited about the
> WP Office stuff, using the Nifty Corel Tools, when it all went down
> the tubes.
OTOH e.g. Netscape _did_ have the wide spread use.
> What do you do for systems that you need to plan to have running for
> ten years? GCC has been around longer than that.
Like nobody used a compiler that isn't 10 years old before. GCC wasn't very
usable until the very late 90s anyway.
I can remember well that I had to patch the GCC RH 5.0 compiler to get it
working on certain processors. It would SIGSEGV otherwise.
> Will Kylix still run on Red Hat 10.2 in 2005? Consider: I can't
> install Corel WPO2K on the distributions available in 2002, and it's a
> mere 2 years old.
Will you be able to run your current GCC binaries on that distro? No, you
probably won't even be able to compile your current source. IOW you
will need an update.
Borland's Pascal range of compilers has been around longer than GNU, so I don't
see why there wouldn't be an update from Borland by then either.
> What concrete reasoning would cause people to be more willing to trust
> Borland than Corel?
The same thing that makes them trust people like RMS and RedHat? Maybe I'll
have to pay per seat licensing for my RH server too in 10.2.
(Of course I don't run a RH server, but imagine that)
The one you help build?
> Dear Sirs, this is such a bunch of hollow sugar words. Let us see the
> reality !
> Dear sirs, after 15 years of intense use of M2 in industry and power
> production/transmission (more than 300 high-budget projects with
> products of more than 100,000 M2 lines each on embedded and PC
> platforms) I (we) moved to the C++ camp. Not easily, and especially
> unwillingly, I must say! But, dear sirs, this is just the reality:
> the crappy M$ C++ and C# implementations rule this world, and despite the
> DEF/MOD feature, nowadays Modula-2 is completely useless!
That's because people like you want to use software, but not invest in its
creation. With that mentality, there wouldn't be a GCC either.
The reason there is no sound Open Source M2 now is that heavy M2 users
(like you, apparently) didn't collectively start building one in 199?.
> Good morning, and have a good day !
>
> p.s.: Oberon is from Mr. Wirth too. It is better than M2 but useless. M3
> is good but it is only a C++ translation with Pascal keywords, so C++ is
> better. Not much to stick with, guys, hah!
Then use e.g. Object Pascal. Still less ugly than C++. It was my choice when
I stopped with M2.
Working through my messagebase: apparently Speed Pascal begat Sybil,
which begat Megido, using Free Pascal as the underlying compiler with
SpeedSoft's IDE and classes as open source; which in turn begat Lazarus.
> VP (Virtual Pascal) is formally also alive,
I'd forgotten about that one.
> Delphi: Borland will come with .NET versions of Delphi in the fall,
> but if this is just a hype thing, a strategic change, or simply the
> creation of another backend is still in the dark.
There was also a variant of Delphi that generated Java bytecodes; this
appears to have departed for the great write-only archive, joining Mystic
Pascal and many others.
> GPC and FPC: Alive, but totally separated. Not much contact except
> some friendly sharing of tests.
To summarise what I think I get from the remainder:
* FPC uses its own code generator.
* GPC uses the gcc backend.
* FPC supports x86/68K with the possibility of others.
* GPC potentially supports a wide range of targets but the number
that have actually been tested is unknown.
* FPC has a text-mode development environment although may benefit
from Lazarus.
* GPC is command-line but probably integrates with Emacs etc.
* Both have the potential of supporting (x)gdb.
GNU Modula-2 is in a very similar position to GNU Pascal, Gaius tells me
that he is looking to test with additional targets this Summer.
Lazarus looks impressive- and I'd add that the web page design is a good
example of what can be done with simple techniques. Good luck to them.
I do note that Lazarus is X-specific (as is Kylix), Kylix appears to
have the edge with their database support although I'd imagine that if
"live" db-aware components aren't needed it wouldn't be difficult to
emulate.
I'd still like to see a good text-mode RAD environment like the late
lamented VB for DOS, preferably being able to generate code to work over
a serial line to a standard VT terminal which would make it usable for
embedded systems in the field. Obviously having a CRT implementation
that worked via termcap/curses would be a start, something like the
TopSpeed windowing module would be even better.
>> being FPC core member)
>>
>> Megido: dead (9 months after start or so). However a (quite early)
>> spin off called Lazarus is still alive and kicking:
>> lazarus.freepascal.org (see below)
>> Sybil: Never heard of it. Isn't that from the maker of Speed Pascal? If so,
>> afaik dead for years.
>
> Working through my messagebase: apparently Speed Pascal begat Sybil,
> which begat Megido, using Free Pascal as the underlying compiler with
> SpeedSoft's IDE and classes as open source; which in turn begat Lazarus.
Afaik Megido was purely standalone, and had no relation to Sybil.
>> Delphi: Borland will come with .NET versions of Delphi in the fall,
>> but if this is just a hype thing, a strategic change, or simply the
>> creation of another backend is still in the dark.
>
> There was also a variant of Delphi that generated Java bytecodes, this
> appears to have departed for the great write-only archive joining Mystic
> Pascal and many others.
Never heard of it. But that would explain why a press comment about a
bytecode .NET compiler suddenly dropped from the sky.
>> GPC and FPC: Alive, but totally separated. Not much contact except
>> some friendly sharing of tests.
>
> To summarise what I think I get from the remainder:
>
> * FPC uses its own code generator.
True.
> * GPC uses the gcc backend.
True, of gcc 2.x.x.
> * FPC supports x86/68K with the possibility of others.
Has been restructured to do more.
> * GPC potentially supports a wide range of targets but the number
> that have actually been tested is unknown.
They actually run on quite a range of systems. But IMHO the usability of the
compiler itself, and the support outside the core functionality (basic
runtime system), is more limited.
> * FPC has a text-mode development environment although may benefit
> from Lazarus.
Indeed. Note that the IDE is a spitting image of TP's IDE, including
integrated GDB. This includes (a separately available) Turbo Vision port.
> * GPC is command-line but probably integrates with Emacs etc.
Both FPC and GPC integrate with Emacs and RHIDE. But deeper integration is
limited to e.g. FPC's own IDE, or GPC's PENG environment.
> * Both have the potential of supporting (x)gdb.
Yes, FPC can even
> GNU Modula-2 is in a very similar position to GNU Pascal, Gaius tells me
> that he is looking to test with additional targets this Summer.
IMHO the problem with GCC ports is that fewer users contribute to the
compiler because it is written in another language.
> Lazarus looks impressive- and I'd add that the web page design is a good
> example of what can be done with simple techniques. Good luck to them.
I'll pass it on to their webadmin :-)
> I do note that Lazarus is X-specific (as is Kylix), Kylix appears to
> have the edge with their database support although I'd imagine that if
> "live" db-aware components aren't needed it wouldn't be difficult to
> emulate.
Wrong: it is an abstraction over different widget sets. GTK is the most
advanced, but there is somebody working on a Win32 set.
> I'd still like to see a good text-mode RAD environment like the late
> lamented VB for DOS, preferably being able to generate code to work over
> a serial line to a standard VT terminal which would make it usable for
> embedded systems in the field.
Then try the FPC Turbo Vision system. Not textmode RAD, but as close as it
gets.
The closed source one is quite nice. The fully open source one is ready, but
has some bugs (especially in the editor component). However, a few serious
users could help advance it rapidly. (We are mainly getting feedback like
"I tried to compile/edit a program, but it doesn't work".)
> Obviously having a CRT implementation
> that worked via termcap/curses would be a start, something like the
> TopSpeed windowing module would be even better.
FV (Free Vision) works via the terminfo part of ncurses. There are some
problems (auto-margins off doesn't work properly on some systems),
but generally it is fairly decently done, and if you only need it for a few
terminals, it will only need a few adaptations.
E.g. an xterm-compatible terminal (like PuTTY) works decently.
Marco van de Voort wrote:
> In article <3CFB11D5...@sysen.si>, Iztok Kobal wrote:
>
>>>So keep the faith. Modula-2 is a fantastic language for large commercial
>>>applications, where the product will go through endless revision.
>>
>>And which Modula should I use on the Linux ?
>
>
> The one you help build?
I have helped to build none, since I am an end user. A commercial one. I
would buy a product which makes me a profit. Any suggestion about M2 in
comparison to M$ Visual Studio?
>
>
>>Dear Sirs, this is such a bunch of hollow sugar words. Let us see the
>>reality !
>
>
>
>>Dear sirs, after 15 years of intense use of M2 in industry and power
>>production/transmission (more than 300 high-budget projects with
>>products of more than 100,000 M2 lines each on embedded and PC
>>platforms) I (we) moved to the C++ camp. Not easily, and especially
>>unwillingly, I must say! But, dear sirs, this is just the reality:
>>the crappy M$ C++ and C# implementations rule this world, and despite the
>>DEF/MOD feature, nowadays Modula-2 is completely useless!
>
>
> That's because people like you want to use software, but not invest in its
> creation. With that mentality, there wouldn't be a GCC either.
>
What could I invest in it besides buying a decent product? Build it,
I'll buy it. If it is decent, it will earn you a mountain of money, as
Bill Gates did (please, do not start to open an old wound concerning M$
at this point).
And besides, people like me need tools to make other tools. I create
software which somebody uses to make water pour out of your faucet, and
electricity to make your computer run for you to write down such notes.
And I do not care whether it is free, cheap, licensed, or expensive; I
need it to be good and productive, which the so-called M2 IDEs are not.
Right now. Hoping for better.
> The reason there is no sound Open Source M2 now is that heavy M2 users
> (like you, apparently) didn't collectively start building one in 199?.
>
>
I have been using commercial M2 compilers since the mid '80s. The tools
were OK while supported. Nowadays they are not. And it has nothing to do with
the OpenSource community and me. Dear Marco, I am spoiled - I admit that
I am using OpenSource SW to gain me profit. And I will continue to do
so. And I would like to use Modula-2, but there is none. At least no good
one. So I will use C++, C# (when available), Java, KDevelop, KDE, Linux,
GDB, etc. etc. But I will also use M$VS, Kylix, JBuilder
etc. etc. And I will probably never again use Modula-2.
Grow up, dear Marco, as soon as possible !
I agree - but I need to have a standardized tool, as similar as possible
on Win32 and Linux. And there is not much choice, is there? C++ is the
de-facto standard which Java tries to catch up with. XML has potential. I was
hoping that C# would part from the C/C++ weaknesses (especially for safe
programming regarding references and types), which did not happen; it seems
that M$ does not (want to) know much about productivity, where C/C++ is at
the bottom of the ranks.
M2 had its moment with XDS. But as already said, they sold their
romantics for money (swapped to native Java as far as I know, which isn't so
bad a move for programmers/productivity either).
Cheers, Iztok
>>>>So keep the faith. Modula-2 is a fantastic language for large commercial
>>>>applications, where the product will go through endless revision.
>>>
>>>And which Modula should I use on the Linux ?
>>
>>
>> The one you help build?
>
> I have helped to build none since I am an end user.
Maybe that's the position you have to review.
> Commercial one. I would buy a product which makes me profit.
Controlling your own build tools can be quite profitable. Maybe you are a
dime-a-dozen RAD builder, but there are other jobs in programming.
> Any suggestion about M2 in comparison to M$ Visual Studio?
You seem to equate M2 programming in general with creating GUI (probably
database) frontends in a RAD. There is more to programming than that.
>>>DEF/MOD feature, nowadays Modula-2 is completely useless!
>>
>>
>> That's because people like you want to use software, but not invest in its
>> creation. With that mentality, there wouldn't be a GCC either.
>
> What could I invest in it besides to buy a decent product ?
Time, support, etc. You could even have invested the time to make the
commercial RAD for it that you want, and sold it for a profit.
> Build it, I'll buy it.
I'm not interested in RADs. At least not for my current job.
> If it is decent, it will earn you a mountain of money - as Bill Gates did
Bill Gates first operated in the language area (in the Altair days) when
there weren't many bulk development tools available. I can't match that,
even in theory, because times are different.
Afterwards (starting from the early nineties), Microsoft could stand losing
money on the development tools, since it could subsidize them with revenues
from the OS and Office applications. That killed nearly everybody in the
bulk tool market. Watcom is dead, Speed Pascal is dead, TopSpeed doesn't
exist anymore; nearly all the old vendors are dead.
The largest of the old ones, Borland, was kept standing by a Microsoft that
feared the US gov. trial.
> (please, do not start to open an old wound concerning M$ at this point).
Why, you brought it up! You keep dreaming about piles of money, but there
is hardly any money left in bulk development tools (comparable to VS). The
money and life is in specialised tools, either because of their
functionality or their target platform.
> And I do not bother about being free, cheap, licenced, expensive
The problem comes when you realize that something isn't commercially
feasible anymore, and you still need it.
> - I need it to be good and productive - which M2 so-called IDEs are not.
Me too, but stability and customizable tools are more important for my
productivity than some RAD with bright, shiny colours.
It depends on the job.
> I have been using commercial M2 compilers from mid 80'. The tools have
> been OK while supported.
That depends on your work. The right tool for the right job. You seem to
incorrectly assume that VC is good for everything.
> Nowadays are not. And it has nothing to do with
> the OpenSource community and me.
Open source software is an extra. I use commercial software too. But I'm not
afraid to make my own tools, or to create a working environment from
different components if necessary.
And if I like a certain language and have a large installed source base, I
don't think switching to VS, however productive the IDE may be, will solve
anything.
> Dear Marco, I am spoiled - I admit that
> I am using OpenSource SW to gain me profit. And I will continue to do
> it. And I would like to use Modula-2 but there is no any.
I haven't used M2 since TopSpeed times. I switched to Delphi, but work
on an open source Pascal compiler to satisfy the other needs.
But I'm still monitoring here, and not writing off M2.
> Grow up, dear Marco, as soon as possible !
?
Only on Win32. WRONG!
We exist on Win32 (IA-32 processor), Linux (IA-32 processor) and Sun
Solaris (SPARC processor). All with native GUI development environments.
The development environment looks and operates the same across all
platforms.
If syntax highlighting is so important to you we have had that since
version 4, which has been out for quite some time. Never mind the fact
that our development environment has always been able to integrate any
editor. When it comes to programmers editors, religious wars get
started, and we have no intention of trying to write the "be all, end
all, satisfy everyone" programmers editor.
As for mixed-language development: what specifically is deficient? I am
interested, since I always want to improve things. To call a C (or
other) procedure, all one needs to change is the calling convention,
like this:
PROCEDURE Something1(...) [StdCall];
PROCEDURE Something2(...) [Cdecl];
If you have a host of procedures, all of which want the same calling
convention:
<*/CALLS:StdCall*>
PROCEDURE Something1(...);
PROCEDURE Something2(...);
This makes it trivial to write interface files, as we must do for the
Win32, Linux and Solaris APIs.
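For illustration, a minimal sketch of what such an interface file might
look like. The module name and the two procedures are chosen to resemble
Win32 calls; this is not an excerpt from the actual StonyBrook interface
files.

```
(* Hypothetical sketch of an API interface file, using the
   /CALLS directive described above to make StdCall the
   calling convention for every procedure that follows. *)
DEFINITION MODULE SomeWin32Api;

<*/CALLS:StdCall*>

PROCEDURE GetTickCount(): CARDINAL;

PROCEDURE Sleep(milliseconds: CARDINAL);

END SomeWin32Api.
```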
What specifically did you find lacking in the development environment? I
would be interested in knowing. Obviously features we want get added,
but unless users speak up about specific gripes, nothing happens.
A good example of this is the set of features we added to the compiler
to make it easy to have code compile on both our compiler and the p1
Macintosh compiler. Ed (the guy who started this thread) and another
company use our compiler on Win32 and the p1 on the Mac. Since Modula-2
did not standardize some things like conditional compilation, people
like Ed have had to migrate their sources with processing tools they
wrote for themselves. We did some work, a VBA interpreter, for this
other customer, and the code had to compile on Win32 and the Mac. For
the most part differences could be encapsulated, but I wanted an easier
way, and being the compiler developer I did so: I added various p1
compiler extensions to our compiler. Many overlapped existing features
in our compiler, but so what. For example our compiler supports the
conditional compilation syntax we have always had, and also the p1
syntax. You can even mix the two.
After I did these things I told Ed and the other user that they should
have spoken up a long time ago. These were trivial additions and had
very real value.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Iztok Kobal" <iztok...@sysen.si> wrote in message
news:3CFB11D5...@sysen.si...
Norman Black wrote:
> Only on Win32. WRONG!
Sorry, I did not mean to be offensive against StonyBrook. I really did
not know (what a shame) about v4 and later. There was just an undertone
in that conversation that puts the language (which is really good, but
not so widely supported and used) on such high levels, so I responded
with my frustration. And I had some problems with the v3 linker some
time ago (again with the integration project with C++) and the responses
from SB were not as I had expected, so I gave up. At that time there was
only v3 and only on WIN32, and we also quickly needed Linux right
afterwards, so we chose GPM for the other side and I was not following
SB very much afterwards.
> We exist on Win32 (IA-32 processor), Linux (IA-32 processor) and Sun
> Solaris (SPARC processor). All with native GUI development environments.
> The development environment looks and operates the same across all
> platforms.
>
>
>
So I am sorry for being away all that time - I will give it a try as
soon as possible!
> ..... the features we added to the compiler to make
> it easy to have code compile on our compiler and the p1 Macintosh
> compiler. Ed (the guy who started this thread) and another company use
> our compiler on Win32 and the p1 on the Mac. Since Modula-2 did not
> standardize some things like conditional compilation, people like Ed has
> to migrate their sources with processing tools they wrote for
> themselves. We did some work, a VBA interpreter, for this other customer
> and the code had to compile Win32 and Mac. For the most part differences
> could be encapsulated, but I wanted an easier way, and being the
> compiler developer I did so. I added various p1 compiler extensions to
> our compiler. Many overlapped existing features in our compiler, but so
> what. For example our compiler supports the conditional compilation
> syntax we always have, and also the p1 syntax. You can even mix the two.
> After I did these things I told Ed and the other user that they should
> have spoke up a long time ago. These were trivial additions and had very
> real value.
>
Yes - it is good, unless you work with other/different ISO M2 compilers.
StonyBrook was the strongest for some time, at least for WIN32, and as
far as I know you also had influence, or at least strong contacts, with
the ISO M2 committee. You all could have thought up a preprocessor
standard.
Now I basically have to use two levels of preprocessor - the first to
help me with cross-platform issues (it has nothing to do with any native
compiler preprocessor) and the second to help me with cross-language
issues (native to the compiler used). The code is unreadable before
executing the first level. And even afterwards the code is not
understandable to, let's say, a C programmer who has just quickly read
Wirth's M2 definition and is surprised to see a preprocessor in M2. And
that is not mentioning the double work when migrating to another
compiler because the first one became unsupported!
There was a time when I had a strong urge to migrate all my M2 code to
XDS. But they were Russians and they were changing something all the
time. And for a long time they had no support for long bitsets. We did
not migrate and we were right - they dropped out of the M2 camp.
Now I am considering GNU M2, which also does not yet have long bitset
support (maybe I am wrong, but that is what stands in their TODO list),
but it at least has something in common with the development system for
C++ (it is placed into the gcc compiler tree), which could help with
mixed language programming and maybe even let me use e.g. KDevelop for
both languages on Linux. Maybe SB should consider adopting their
preprocessor approach, just to be prepared for the case that GNU M2
becomes stronger, and more than that - for the sake of compatibility.
And maybe SB could help the KDevelop project with inventing a system for
automating M2 makefiles within the GNU project system - then it would be
easy to mix M2 and C++ projects, at least on Linux.
When will this story come to an end? All in all, I will some day have
all my code in C++, which I do not like at all. But M$VC and gcc at
least do not seem likely to die any time soon.
Regards !
Iztok
Iztok Kobal wrote:
>
> Alfred Hilscher wrote:
> >
> > Iztok Kobal wrote:
> >
>
> I agree - but I need to have a standardized tool, as similar as
> possible on WIN32 and Linux. And there is not much choice, is there?
> C++ is the de-facto standard which Java tries to catch up with.
But Ada is best standardized, and standard conformity is checked by
the validation process every compiler must pass before it may be called
an "Ada" compiler.
GNAT for example (the GNU Ada compiler) is available on Windows, Linux,
OS/2, MAC and some other systems. So _one_ compiler for _all_ platforms
(and it's free).
Marco van de Voort wrote:
> In article <3CFB5BB8...@sysen.si>, Iztok Kobal wrote:
>
>
>>>>>So keep the faith. Modula-2 is a fantastic language for large commercial
>>>>>applications, where the product will go through endless revision.
>>>>
>>>>And which Modula should I use on the Linux ?
>>>
>>>
>>>The one you help build?
>>
>>I have helped to build none since I am an end user.
>
>
> Maybe that's the position you have to review.
>
You have answered for me later on - I have plenty of other work
to do.
>
>
>>Any suggestion about M2 in comparission to M$ Visual Studio ?
>
>
> You seem to equate M2 programming in general to creating GUI (probably database)
> frontend in a RAD. There is more to programming than that.
I meant the project system that helps - tree oriented for complex
projects, search, help, doc mechanisms, code autocompletion and more and
more, which really put the M$VS IDE on such high and unreachable
standards. What level of productivity would M2 reach if it, instead of
C/++, were in M$VS - calculate yourself. I really hope that they will
keep their promise and prepare the .NET framework for all those promised
languages, and maybe even for the one they did not promise to build in -
unfortunately that one is M2!
>
>
>
>
>>Build it, I'll buy it.
>
>
> I'm not interested in RADs. At least not for my current job.
>
see, you have the same thinking at this point !
>
>
>>- I need it to be good and productive - which M2 so-called IDEs are not.
>
>
> Me too, but stability and tools being customizable is more important for my
> productivity than some RAD with bright, shiny colours.
The shiny colours mostly mean that the product has reached maturity and
stability and that its developers already have time to do the least
important things to it. And more - they most probably have a long-term
perspective with their product, which means that I will be safe with
them for some time.
>
>
>
>>I have been using commercial M2 compilers from mid 80'. The tools have
>>been OK while supported.
>
>
> That depends on your work. Right tool for right job. You seem to incorrectly
> assume that VC is good for everything.
It is not. But it is better for C/++ than any M2 IDE is for M2. And VS
(which I consider a programmers' IDE) is quite good for pretty many
other things - isn't it? Unfortunately M2 is not on that list.
And if you think I meant that C/++ and M2 (the sheer languages) are
equally good for the same things - they really are - they are both
high level sequential languages.
The whole discussion is really about the language and not about IDEs. I
know that. But when it comes to productivity, even C/++, which is much
lower on the table (I have read somewhere that a C programmer averages
10-15 lines per day against M2's 25-30, working in equal conditions),
becomes the winner with the best IDE. And M2 will be used only by
enthusiasts like Ed, you and me who believe in this effective language,
but we are and will stay in a deep minority, which means that somebody
is going to replace all our M2 code with something else as soon as we
leave.
Regards
iztok
.NET is looking quite good, considering where it is in typical
development cycles.
http://msdn.microsoft.com/vstudio/partners/language/default.asp
Here you see two Pascals, and Oberon.
Add to this Borland's commitment to do Delphi -> .NET, and it is not
looking bleak. (Apparently, Borland already have a Delphi -> Java
in-house.)
I would have thought the move from Modula-2 -> the newest Pascal
variants was easier than M2 -> C/++?
Further away from M2, I also like the look of this language
http://research.microsoft.com/foundations/AsmL/default.html
Looks to have potential as a HW description language
(like VHDL/Verilog, but with the ability to also CPU process).
-jg
1. Excelsior (former XDS) is in Russia so far (except the site)
2. XDS Modula-2/Oberon-2 has personal edition now that is free (about greedy :)
3. XDS Modula-2/Oberon-2 is supported as it was
4. Excelsior JET -- Java native compiler is written in XDS Modula-2/Oberon-2
entirely and shares back-end of XDS.
regards,
Nikita
> Now I basically have to use two levels of the preprocessor - first one
> to help me with the cross-platform (has nothing with native compiler
> preprocessor) and the second one to help me with the cross-language
> (native to the compiler used). The code afterwards is unreadable before
> executing first level.
What is all this you are talking about, a "preprocessor"? I really do
not follow what you are saying. If you are talking about a preprocessor
like C language compilers have, then Modula-2 has no such requirement.
Our compiler has no preprocessor in this sense. The structure of our
compiler did not change from Wirth Modula-2 to ISO Modula-2.
> Yes - it is good unless you work with other/different ISO M2 compilers.
> StonyBrook was the strongest some time, at least for WIN32, and as I
> know you had also influence or at least strong contacts with the ISO M2
> comitee.
The features we added were things like CARD8, INT8, etc. from the system
module, and the conditional compilation syntax.
IMO the ISO committee dropped the ball big time in setting a standard
for conditional compilation. Also, why they only defined CARDINAL and
INTEGER is beyond me. They could have defined the sized types like they
did some of the RTL modules: optional to support, but if they exist they
must follow the standard definition. The lack of types forces compiler
developers to add extended types. In our case we named them SHORTCARD,
LONGCARD, CARDINAL8, CARDINAL16, etc. We made them pervasive types. The
p1 compiler added CARD8, CARD16, etc. to the SYSTEM module.
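A sketch of how a user might paper over those naming differences today
with conditional compilation. The module name and the StonyBrook version
tag are invented for illustration; the type names are the ones mentioned
above.

```
(* Hypothetical portability module: clients import CARD8/CARD16
   from here and never see which compiler's extension is in use. *)
DEFINITION MODULE PrimTypes;

IMPORT SYSTEM;

%IF StonyBrook %THEN
TYPE
  CARD8  = CARDINAL8;     (* StonyBrook pervasive sized types *)
  CARD16 = CARDINAL16;
%ELSE
TYPE
  CARD8  = SYSTEM.CARD8;  (* p1 keeps the sized types in SYSTEM *)
  CARD16 = SYSTEM.CARD16;
%END

END PrimTypes.
```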
> And I had some problems with the v3's
> linker some time ago (again with the integration project with C++) and
> responses from SB were not as I have expected so I have given up.
We do not expect our linker to link native MS C++ objects. MS has always
extended the OMF/COFF standards as they see fit. If our linker does not
handle some other code which you want to link, then use another linker.
Our development environment has always allowed integration of any
external linker, and it supports multiple linker response file formats
and link option switches for other common linkers out there.
Actually when linking to C/C++, especially C++, it is best to link via a
DLL. In this sense all Modula-2 programs on Win32 and Linux are mixed
language since the Modula-2 is calling the OS APIs which are C. Calling
a C++ object directly is basically a no-go since C++ has multiple
inheritance and other things incompatible with the Modula-2 object
model. Years ago we tweaked our object model implementation so that we
could map a COM object onto a Modula-2 CLASS type. This made COM/DCOM
programming very clean and easy.
Most C runtime systems are ***very*** particular about their runtime
system setup. We remember this greatly from the good old DOS days when
your only choice was to link directly. Linking via a DLL eliminates all
inter-language initialization issues. In the DOS days the best solution
was for our system to "fake" itself as the main program of the C system
so that the C system would be properly initialized and then we would
initialize ourselves.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Iztok Kobal" <iztok...@sysen.si> wrote in message
news:3CFC614E...@sysen.si...
.NET IL seems to be missing direct support for "nested/uplevel" variable
access. Am I missing this? This is strange since Pascal, Modula-2 and
Ada all support nested procedures.
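To make the point concrete, here is the kind of thing a Modula-2
compiler has to express in IL (a plain-Modula-2 sketch, not tied to any
particular compiler):

```
MODULE Nested;

PROCEDURE Outer(): CARDINAL;
VAR
  count: CARDINAL;       (* lives in Outer's stack frame *)

  PROCEDURE Inner;
  BEGIN
    count := count + 1;  (* uplevel access: a write into the
                            enclosing procedure's frame, which a
                            plain "load/store local" cannot reach *)
  END Inner;

BEGIN
  count := 0;
  Inner;
  Inner;
  RETURN count           (* yields 2 *)
END Outer;

END Nested.
```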
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Jim Granville" <jim.gr...@designtools.co.nz> wrote in message
news:3CFC7B...@designtools.co.nz...
> The features we added were thing like CARD8, INT8, etc from the system
> module and the conditional compilation syntax.
> IMO The ISO committee dropped the ball big time in the setting a
> standard for conditional compilation. Also why they only defined
> CARDINAL and INTEGER is beyond me.
Purity of design. The intention was that different storage sizes would
be handled using subranges, not by introducing new types. This has both
advantages and disadvantages. On the plus side, the language is simpler
(fewer types and type conversion rules). On the minus side, CARDINAL and
INTEGER have to be the largest size you intend to support, making for
inefficient programs.
Martin
Alas, the real world wins every time. 'Purity of design' sounds good,
and appeals to the academics, but it is actually counter-productive
if the resulting inefficiency makes the language impractical.
Oberon did the same thing, in removing unsigned types.
I second this notion. Programming languages are not abstract things, but
real tools used by real users. The language makes no requirement that
"small" subranges be minimally stored. Real users, in the real world,
need types of various storage sizes. Interfacing with operating systems
is an obvious example.
Exactly how a compiler provides those types of differing storage sizes
is the moving target the standards committee intentionally created. So
we the implementers end up extending the language, all going our own way
to some extent, and the users end up with the shaft. Or you end up with
what I did by duplicating other compiler extensions in our compiler to
provide portability.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Jim Granville" <jim.gr...@designtools.co.nz> wrote in message
news:3CFD33...@designtools.co.nz...
Norman Black wrote:
> Iztok wrote:
>>And you all could have thought up the preprocessor standard.
>
>
>>Now I basically have to use two levels of the preprocessor - first one
>>to help me with the cross-platform (has nothing with native compiler
>>preprocessor) and the second one to help me with the cross-language
>>(native to the compiler used). The code afterwards is unreadable before
>>executing first level.
>
>
> What is all this you talk about a "preprocessor".
Conditional compilation (CC) is the right term. I write these messages
in a hurry and sometimes (since I am not a native English speaker) terms
slip my mind or get swapped with similar ones. But anyway, I mean that a
standard should be made for CC. The paragraph above (about two levels of
conditional compilation expressions) still stands for me - I am using SB
for WIN32 and GPM for Linux - both have CC and inline compiler options,
though with different definitions. Both are ISO; still, the syntax for
interfacing to/from other languages/libs (this is the real life you are
talking about in another branch of the discussion) differs. And since I
am mixing C/++ code/libs with M2, I have to write interfaces both
to/from my C/++ code and to the OS. Again, I am lucky to use IA32 since
it supports 8-bit alignment. On different processors (we were on
Motorola previously) I would have to hack the source code to an even
greater extent - and C-style bitfields are missing (which again do not
solve fields spanning a 32-bit boundary). So much for Mr. Whitaker's
"purity of design" - still, I understand - Mr. Wirth defined the
concept. The ISO M2 committee should take care that M2 stays functional,
and should ensure that code is easily portable.
Another one for M2 support versus purity of design: the GPM development
kit does not support shared objects on Linux (and probably will not for
some time - I asked them for the compiler and builder sources to do it
myself, with no response). I must have shared objects, so I arranged a
system to build them (not difficult, anyway). But the debugging
information suffers, preventing me from easily debugging the product
afterwards (at least at the source level). There is no obvious solution
except migrating to another M2 SDK/IDE, where I would have to revise all
the CC, inline compiler options and some code generation mismatches -
such as long (bit)set support, subrange and set type storage sizes,
structure field alignment, etc., etc. - for all that code that is
written and mostly stabilized to the point of re-usability. I do not
like it at all. And this would be migrating within the same hardware and
OS! If I look at the C++ camp - running the development cycle first with
gcc (e.g. on Linux) and porting to M$VC for WIN32 afterwards is trivial
- everything works except OS calls, of course, and some sporadic tweaks
coming from slight misinterpretations of the C++ standard on the M$
side. And I even have a class browser and code autocompletion on both
platforms to help me with the business.
And as for unknown M2 implementations - my company cooperated with ABB,
Sweden, some time ago. They had written the communication stuff in the
previous family of their embedded systems in Pascal/M2. For whatever
reason they do not anymore.
Norman Black wrote:
> Only on Win32. WRONG!
> We exist on Win32 (IA-32 processor), Linux (IA-32 processor) and Sun
> Solaris (SPARC processor). All with native GUI development environments.
> The development environment looks and operates the same across all
> platforms.
Iztok
Does IL use a stack like a compiled language? Maybe they have a different
way to access different frames?
Yes.
A load local instruction always loads from the current frame. If they
had a load local from a previous frame then that would be direct support
by my definition.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Marco van de Voort" <mar...@toad.stack.nl> wrote in message
news:slrnafs8vn...@toad.stack.nl...
No problem. I just needed to understand your terminology.
I could not agree more that CC not being standardized by ISO is an
unbelievably huge missing piece. The Ada83 and Ada95 languages are in
the same boat. No standard CC.
>I am
> using SB for WIN32 and GPM for Linux - both have CC and inline compiler
> options - different definiton, though. Both are ISO, still, the syntax
> of interfacing to/from another languages/libs (this is the real life
> you are talking about in another discussion branch) differs.
Compiler options are something that can never, and should never, be
standardized. CC should be standard.
ISO standardized the syntax for directives, but wisely left out the
contents. They could have gone one step further, by mandating a field at
the beginning of the directive that identifies an implementation. In
this way a compiler can ignore directives not meant for it, but for some
other compiler. The identifying field at the beginning could be optional
for simplicity's sake.
For example:
<* *> is an ISO directive.
<*@...@ *> The @ immediately after the directive opener could signify
the presence of an implementation identifier. If the identifier is not
"yours" then ignore the directive.
>Again, I am lucky to use IA32 since it
> supports 8bit aligning. Using different processors (we have been to
> Motorola, previously) I would have to hack with the source code in
> even bigger extent - and missing C bitfields (which again do not solve
> field spanning over 32bit boundary).
These are the reasons our compiler supports record field alignment like
C compilers do, and why we added bitfields to our compiler, and C-style
enumeration extensions. All to make interfacing with C easy.
Well,.......easier.
The GTK interface files used in our "Unix" versions make extensive use
of the bitfield and enumeration extensions. In fact, GTK is the reason I
added those features to the compiler. Those additions were not necessary
to use GTK, but were necessary if one wanted to write one's own widgets
and possibly let others, even C folk, use them. I wanted our interface
to be as "complete" as possible. I even added a compiler feature to deal
with the extensive casting macros GTK uses. The normal translation to M2
is to make a procedure, but I did not want the extra code generation
this would cause. C has a pre-processor and has macros. I allowed M2
procedure bodies in a DEF file, and the procedure is inlined at each
instance. This is close enough to allow translating most C macros
practically unchanged to M2.
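A sketch of what such an inlined DEF-file procedure might look like.
GTK_WIDGET is the real C macro being imitated, but the module, the type
names and the exact placement of the body are illustrative guesses,
since the actual StonyBrook syntax is not shown in this thread.

```
(* Hypothetical: a C casting macro such as GTK_WIDGET(obj)
   written as a procedure whose body sits in the DEFINITION
   module, so the compiler can inline it at each call site
   with no extra generated code. *)
DEFINITION MODULE GtkCasts;

IMPORT SYSTEM;

TYPE
  GtkObjectPtr = SYSTEM.ADDRESS;  (* stand-ins for the real types *)
  GtkWidgetPtr = SYSTEM.ADDRESS;

PROCEDURE GTK_WIDGET(obj: GtkObjectPtr): GtkWidgetPtr;
BEGIN
  (* the real GTK macro also performs a checked cast; this is
     a plain reinterpretation *)
  RETURN SYSTEM.CAST(GtkWidgetPtr, obj)
END GTK_WIDGET;

END GtkCasts.
```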
I am a Modula-2 zealot, and in no way was I going to let an M2
implementation using GTK take second seat to C, in capability or code
generation size.
Of course I am in a unique position to do something about it. I don't
add features just for the sake of adding features. M2 is a tiny world
and not self contained like C. I want my M2 world to be able to access
that other world as easily, and cleanly as reasonably possible.
> And this would be migrating within same hardware and OS ! If I look on
> the C++ camp - executing development cycle first with the gcc (e.g. on
> Linux) and porting to the M$VC for WIN32 aftyerwards is trivial -
> everything works except OS calls, of course, and some sporadical tweaks
> coming from slight misinterpretation of the C++ standard from the M$
> side. And I have even class browser and code autocompletion on both
> platforms to help me with the business.
I have talked to many who would disagree with that statement. A simple
look at the "config" headers of various open source C projects shows
significant adjustments are made for various compilers and/or platforms.
And this stuff is not even calling the OS. Anyway... portable code never
depends on compiler types and such. You have your own "primitive" types
and use those. The primitives can be conditionally compiled. Oops,
there's that non-standard feature again (CC).
This is another reason I like the idea of CARD8, CARD16, etc... being
standard. Those types, if supported by an implementation, are not
ambiguous like CARDINAL and LONGINT are. For example on our compiler
CARDINAL is 32-bit and LONGINT is 64-bit, and INTEGER64 is always
64-bit, regardless of what LONGINT is. When we go 64-bit we could let
LONGINT stay 64-bit or keep the concept of a double sized integer.
> No problem. I just needed to understand your terminology.
I got it from the context and agree wholeheartedly with his sentiment,
having struggled with this sort of thing before. To save me finding the
manuals, perhaps you could point me at the page on Stony Brook's Website
that summarises the way you do it?
A web site is a sore issue and something I have no direct control over.
We do not have one. Not that we would do much with the site other than
post the help files online, and post updated maintenance releases.
Regarding conditional compilation syntax. Briefly...
Method 1. The one we have always had since version 1, 1987.
%IF VersionTag %THEN
%ELSIF (Tag1 %AND Tag2) %OR %NOT Tag3 %THEN
%ELSE
%END
So you see it is just like a normal Modula-2 IF statement.
Method 2. The one used by the p1 compiler, and which we also support.
<*IF VersionTag THEN*>
<*ELSIF (Tag1 AND Tag2) OR NOT Tag3 THEN*>
<*ELSE*>
<*END*>
Again like a Modula-2 IF statement but put into directives.
How to define conditional compilation tags.
Doing this from the environment is trivial since we have a dialog for
this.
As for embedded version tag definitions in the source code.
<*/VERSION:Tag*> (* "defines" a version tag *)
<*/NOVERSION:Tag*> (* undefines a version tag *)
/VALIDVERSION defines version tags that are valid to use in a
conditional compilation structure. The existence of /VALIDVERSION is a
bit lame, but it did not exist until version 3, and it added new
capability without breaking anything that had existed since version 1.
The new capability wanted was to get compilation errors when you typo
the name of a version tag.
We also support p1 compiler directives for doing embedded directives.
To define a new version tag use the following.
<*DEFINE(TagName, value)*>
where value can be FALSE or TRUE
To change the value of a previously defined version tag use
<*ASSIGN(TagName, value)*>
where value can be FALSE or TRUE
Relationship between DEFINE, ASSIGN and /VERSION, /NOVERSION,
/VALIDVERSION
<*DEFINE(TagName, TRUE)*>
is identical to
<*/VERSION:TagName/VALIDVERSION:TagName*>
<*DEFINE(TagName, FALSE)*>
is identical to
<*/VALIDVERSION:TagName*>
<*ASSIGN(TagName, TRUE)*>
is identical to
<*/VERSION:TagName*>
<*ASSIGN(TagName, FALSE)*>
is identical to
<*/NOVERSION:TagName*>
The counter-argument from the academics is that you should not be using
the base CARDINAL and INTEGER types anyway, but always declare your
variables as subranges, to document the values they are expected to
take. This means that the compiler can work out the best storage size
for each variable without the programmer having to worry about what
sizes are actually available. It also increases the portability of
programs, so I can see the appeal of this approach. The point I think
they missed (although having not been present at the discussions, I am
just guessing) is that it is not just storage size that is important but
also precision of arithmetic calculations. The compiler could try to be
smart in this area, but there is bound to be some overhead.
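As a sketch of the style being described (plain Modula-2 subranges; the
storage-size comments describe what a compiler *may* do, not what any
particular one does):

```
MODULE Subranges;

TYPE
  DayOfMonth = [1..31];     (* a compiler may store this in one byte *)
  Port       = [0..65535];  (* ...and this in two *)

VAR
  day : DayOfMonth;
  p   : Port;

BEGIN
  day := 15;
  (* day := 40; would be a compile-time error or a runtime trap *)
  p := VAL(CARDINAL, day) * 2048
  (* the intermediate product is computed in full CARDINAL
     precision - exactly the "precision of arithmetic" question
     raised above *)
END Subranges.
```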
Not being an academic, I would have preferred the proliferation of types!
> Oberon did the same thing, in removing unsigned types.
Along with many other nice features :-(
> I could not agree more that CC not being standardized by ISO is an
> unbelievably huge missing piece. The Ada83 and Ada95 languages are in
> the same boat. No standard CC.
Personally, I have always preferred to isolate my non-portable code in
small modules and provide different implementation modules for each
target - I hate wading through reams of conditional compilation directives.
I don't know why CC wasn't standardised, though. Maybe nobody present at
the inaugural committee meeting asked for it. It could always have been
added later as an extension, like OO Modula-2 and Generic Modula-2.
Still could be, if anyone wants to volunteer to do the work!
I don't know that detail, but I did see that .NET was changed/extended
as a direct result of the various language efforts.
There is probably a clue in the QUT Component Pascal, as they have both
.NET and JAVA working ?
TMT also plan a .NET version of their Pascal.
-jg
The language deliberately does not specify how things are implemented,
to allow it to be targeted at any hardware. For example, INTEGERs are
not specified as being represented as 2's complement binary values,
because there is hardware out there which represents them in other ways.
> Exactly how a compiler provides those types of differing storage sizes
> is the moving target the standards committee intentionally created. So
> we the implementers end up extending the language, all going our own way
> to some extent, and the users end up with the shaft. Or you end up with
> what I did by duplicating other compiler extensions in our compiler to
> provide portability.
No, the standards committee intended that you provide the different
storage sizes in the way I described earlier. Sadly, such guidelines
(and the rationale behind them) didn't get published.
- and list your available products, and prices - certainly sounds like a
good idea to me. Who does have 'direct control over' this?
-jg
Are these C style extensions in all your compilers, or just the UNIX
one?
-jg
If reality allows this, it is a good idea. More often, CC is used to
solve more granular problems.
> I don't know why CC wasn't standardised, though. Maybe nobody present at
> the inaugural comittee meeting asked for it. It could always have been
> added later as an extension, like OO Modula-2 and Generic Modula-2.
> Still could be, if anyone wants to volunteer to do the work!
Sounds like Norm has already done this :)
Also sounds like a good thing to put on the elusive StonyBrook web site!
Years ago, when we did the CC in our compiler, we followed the Borland
and TSM2 V3 syntaxes, as they were the standards of the time.
Conditional compilation is overlooked in many other languages too.
We do a lot of CPLD designs, and the HDL language we use (CUPL) is the
(only?) one with a (sort of) preprocessor and with conditional defines.
Given the nature of the downstream tools, this is an essential feature.
-jg
>The counter-argument from the academics is that you should not be using
>the base CARDINAL and INTEGER types anyway, but always declare your
>variables as subranges, to document the values they are expected to
>take. This means that the compiler can work out the best storage size
>for each variable without the programmer having to worry about what
>sizes are actually available.
It's an excellent idea in principle. Indeed, it's so tempting that I've been
trying for years to figure out how to make it work. Unfortunately there
are still both theoretical and practical barriers to this ideal.
1. The big unsolved theoretical problem: what are the correct types
for intermediate values in calculations? For addition it's simple enough.
For example if A has type [-3..500] and B has type [0..255] then it's
fairly obvious that (A+B) must have type [-3..755], but it gets less
clear once you bring in things like multiplication and division.
Here's an example that keeps hitting me over and over again in my
own programming. Let A, B, and C be 16-bit numbers, and suppose
you know a priori that the quantity A*B/C will fit into 16 bits. How
do you write the code such that it won't overflow, assuming a
16-bit processor? This is elementary in assembly language, but
in Modula-2 (and most other high-level languages) you need a
kludge to code around it, and that's a pain if (as often happens) this
code is in a low-level module where speed of execution is paramount.
The problem here is that M2 is missing the very basic concept of a
multiplication operator that yields a double-precision result.
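To make the complaint concrete, here is a minimal sketch in C of the idiom the poster says Modula-2 cannot express portably: widen the product to double precision so A*B cannot overflow, then divide and narrow. The function name `muldiv16` and the variable names are illustrative, not from any compiler or library mentioned in the thread.

```c
#include <stdint.h>

/* Compute A*B/C for 16-bit operands without overflowing, by holding
 * the intermediate product in 32 bits. The caller guarantees a priori
 * that the final result fits back into 16 bits. */
static uint16_t muldiv16(uint16_t a, uint16_t b, uint16_t c)
{
    uint32_t product = (uint32_t)a * b;   /* 16 x 16 -> 32 bit multiply */
    return (uint16_t)(product / c);       /* 32 / 16 divide, then narrow */
}
```

In C this falls out of an explicit cast plus integer promotion; the poster's point is that classic Modula-2 gives no portable way to request such a double-precision intermediate result.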
Now, it's possible that that problem could be solved neatly in a
language that took ranges as the basic integer type, but to make it
work, and work portably, _you have to write into the standard the
rules about intermediate results_, so that every compiler adopts the
same rules. In other words, you must NOT trust the compiler to find
the best storage size, because if you do that you don't get portability
across compilers. You have to tell the compiler writers exactly
what rules to follow.
2. The big unsolved practical problems: if you need to write
embedded software or device drivers you have to interact with
the hardware. If you're writing a program that does input and output
then you have to interact with the user, and in most cases that
means you have to interact with the API of your operating system.
In any of those cases, you need a hidebound guarantee that the things
you're passing have a specified number of bits. Not a specified
numeric range, but precisely the number of bits that the other end
expects.
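The "specified number of bits" requirement is exactly what C's `<stdint.h>` exact-width types provide; a minimal sketch follows. The `DeviceControl` layout is invented for illustration and stands in for whatever layout "the other end" actually expects.

```c
#include <stdint.h>

/* A register/API layout pinned to exact bit widths: each field is
 * guaranteed by the language to be precisely 8 or 16 bits, not merely
 * "big enough for this numeric range". */
struct DeviceControl {
    uint8_t  command;   /* exactly 8 bits  */
    uint8_t  flags;     /* exactly 8 bits  */
    uint16_t address;   /* exactly 16 bits */
};
```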
Now, it just so happens that practically every significant piece of
software I've ever written had that requirement. I'll bet that
that's true for almost everyone else. If you limit attention to pure
numeric calculations, with no I/O to device registers and no
interaction with the OS, then practically the only thing you're
addressing is undergraduate programming assignments.
> It also increases the portability of
>programs, so I can see the appeal of this approach.
It's just occurred to me that "portability" can have several different
meanings. Yes, scrapping bit-width notions does increase the
portability between binary machines and machines that, for example,
use base-5 arithmetic. But when was the last time you saw a
non-binary machine? This was one of those ideas of the 1960s
that never panned out.
The portability that matters to me is this: if my compiler supplier
goes belly-up (as seems likely in the case of XDS for OS/2, and
will probably keep happening for many other compilers, given the
predatory state of the compiler market), how easily can I port my
code to a different compiler? Same binary computer, same
operating system; all I want is for my code to keep working. The
present version of the standard won't give me that.
--
Peter Moylan pe...@ee.newcastle.edu.au
See http://eepjm.newcastle.edu.au for OS/2 information and software
There is a project under GNU to add M2 into the GNU compilers tree.
http://floppsie.comp.glam.ac.uk/Glamorgan/gaius/web/GNUModula2.html
They made some things which I would not like to see become "standard"
(given how widespread gcc is, that could easily happen with GNU M2
(gm2) - the M$ power-play effect):
They compile M2 as C: everytime importing all needed def-files (this is
slllooowww)
They introduced the TopSpeed-M2-like base module library (which is not
ISO, but one can compile the ISO library oneself)
But there is an idea which I like: the compiling process would be
controlled by a GNU-project-like Makefile (it is way off right now,
though), which means that M2 code could easily be mixed with C/C++
code within the same project. And not only M2 - there is also Ada and
Fortran.
The CC is C-style and works equally: you can e.g. "assign" a #define'd
value to an M2 variable, and (I assume this) also write C-style M2 macros
(which I do not like again - it is not M2, but it may even be useful...)
I haven't checked it yet for the alignment, but there is no compiler
option concerning this anyway, so I think that it does not support
less-than-processor-word alignment (like gcc by default)
Maybe it is worth looking at it and participating/influencing so that
they finally make a useful standard M2 compiler instead of another dead
branch.
Iztok
Martin Whitaker <martin_whitaker@remove_this.ntlworld.com> wrote:
> I don't know why CC wasn't standardised, though. Maybe nobody present at
> the inaugural committee meeting asked for it. It could always have been
> added later as an extension, like OO Modula-2 and Generic Modula-2.
> Still could be, if anyone wants to volunteer to do the work!
>
The approach taken with GNU Modula-2 is to use cpp (if the user
specifies 'gm2 -Wcpp -c module.mod'). Thus gm2 will kick off the C
preprocessor with the options: cpp -traditional -lang-asm -C before
parsing any definition module or implementation module.
Iztok Kobal <iztok...@sysen.si> writes:
There is a project under GNU to add M2 into the GNU compilers tree.
http://floppsie.comp.glam.ac.uk/Glamorgan/gaius/web/GNUModula2.html
> The CC is C-style and works equaly: you can e.g. "assign" #define'd
> value to M2 variable, and (I assume this) also write C-style M2 macros
> (which I do not like again - it is not M2 but may be even useful...)
yes I guess many tricks are possible - given that cpp is used. Its
real intention was to allow conditional compilation, given a mature,
well documented preprocessor, and it also fits in with other GNU
compilers (i.e. GNU Fortran, C and C++ use cpp).
It also provides an easy way to access host definitions, i.e.
#if defined(__SOLARIS__)
#elif defined(__WIN32__)
#endif
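To flesh out that fragment, here is a minimal sketch (plain C, since cpp itself is language-agnostic) of per-platform conditional compilation; `__SOLARIS__` and `__WIN32__` are taken from the fragment above, while `host_platform` is an invented name for illustration.

```c
/* cpp selects exactly one branch at compile time; the same
 * #if/#elif/#else/#endif lines work unchanged when gm2 pipes a
 * definition or implementation module through cpp. */
static const char *host_platform(void)
{
#if defined(__SOLARIS__)
    return "Solaris";
#elif defined(__WIN32__)
    return "Win32";
#else
    return "other";
#endif
}
```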
> They compile M2 as C: everytime importing all needed def-files (this is
> slllooowww)
the compiler is not fast, I grant. But I'm not convinced this is due
to parsing imported definition modules. There was a paper written
12-15 years ago which did some analysis of this technique - the
performance penalty was IIRC minimal. Also consider that parsing a
few hundred (max) lines of definition modules shouldn't really cause
much of a performance problem nowadays.
> They introduced the TopSpeedM2-like base module library (which is not
> ISO, but one can compile ISO himself)
> But there is an idea which I like: The compiling process would be
> controlled by the GNUproject-like Makefile (it is way off, though, right
> now) which means that the M2 code would easily be mixed with the C/++
> code within the same project. And not only M2 - there is also Ada and
> Fortran.
currently the front end `gm2' can be invoked via, say:
`gm2 -Wmakeall -g module.mod'
which will build a makefile behind the scenes and compile all dependent
modules and link the program `module'. Alternatively you can build
a makefile from the gm2m utility or compile by hand via:
`gm2 -c -g module.mod'
`gm2 -g module.mod'
> Maybe it is worth to look at it and to participate/influence that they
> finally make useful standard M2 compiler instead of another dead branch.
yes please - participation is very welcome!
Gaius
The programmer must know the exact size of the data if they are to
transmit the data over a network, or write it to disk such that some
other software compiled under some other compiler, possibly in some
other language can use the data. If the compiler is working out the best
storage size then the size of the data is not under the strict control
of the programmer, except on a given implementation where the subrange
allocation algorithm is defined.
Portability is decreased if two implementations implement differing
subrange allocation algorithms.
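One common way around the problem Norman describes is to serialise field by field at an explicit width and byte order, rather than writing out whatever layout the compiler chose. A minimal sketch in C; the function names `put_u16le`/`get_u16le` are illustrative.

```c
#include <stdint.h>

/* Write and read a 16-bit value in a fixed (little-endian) wire order,
 * one byte at a time, so the on-disk/on-wire layout no longer depends
 * on any compiler's storage-allocation algorithm. */
static void put_u16le(uint8_t *buf, uint16_t v)
{
    buf[0] = (uint8_t)(v & 0xFF);   /* low byte first */
    buf[1] = (uint8_t)(v >> 8);     /* then high byte */
}

static uint16_t get_u16le(const uint8_t *buf)
{
    return (uint16_t)(buf[0] | ((uint16_t)buf[1] << 8));
}
```

Data packed this way round-trips identically between two programs even if they were built by different compilers, or in different languages.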
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Martin Whitaker" <martin_whitaker@remove_this.ntlworld.com> wrote in
message news:3CFE7402.2030302@remove_this.ntlworld.com...
A standards committee proposal virtually guarantees something will not
happen, at least not in any time frame users really care about. That is
why I took matters into my own hands. A de facto standard can be created
if compiler developers agree on a syntax.
A part of the ISO standards committee did have an idea about CC. If I
remember correctly, Albert Wiedemann, author of the p1 compiler and a
member of the German ISO sub-committee, told me they proposed the CC
structure that he implemented in his compiler. I am not sure to whom
they proposed this - the main committee, just among themselves, or
after the fact of the original standard.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Martin Whitaker" <martin_whitaker@remove_this.ntlworld.com> wrote in
message news:3CFE7A8C.90801@remove_this.ntlworld.com...
Rich. And I bug him and bug him.
His son is working on a web site using FrontPage. I know this because he
wanted to put the help files on the site and was asking if I had the
help files in HTML format. I do have them in HTML format, because that
is how the "Unix" versions of our product do online help and documentation.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Jim Granville" <jim.gr...@designtools.co.nz> wrote in message
news:3CFE83...@designtools.co.nz...
A Web site doesn't have to have a large amount of dynamic info, just
outline the product, say what its strong point is, where it may be
obtained, and who is responsible for the site in case there are
complaints.
Looking at CC notation, this is something I've sweated blood over in the
past, and I hope that some comments will be tolerated. Noting Jim's
comment about being compatible with TS, I think that they managed the
worst of all worlds, since while symbol definition was typically done
within M2 syntax, the actual CC directives obviously have to be outside
it. I've also got reservations about anything that doesn't work within
the context of language-defined comments.
What I've done in the past was to use a preprocessor that applied a
reversible transform to an application-defined comment format. This was
originally written to handle content management for documents being
piped into Ventura Publisher, but has proven generally useful. It
assumes that symbol definition is handled on the command line (or,
potentially, in an IDE), but also has provision for setting/resetting
and checking (set/reset/don't know/don't care) symbols as part of the
conditional format.
The example below is from a microkernel support file which can be
compiled for Z80 or x86 (real or protected) on various targets. Note
how opening and closing comments are converted reversibly to pairs of
underscores.
(*<NotPC *)
CONST HundredthSecond= TimerResolution DIV 100;
TenthSecond= TimerResolution DIV 10;
Second= TimerResolution;
(*<JPI *)
Minute= 60 * TimerResolution;
(*>JPI *)
(*>NotPC *)
(*<PC __
CONST HundredthSecond= 1;
TenthSecond= 2;
Second= TimerResolution;
Approx18= Second = 18;
(*%T Approx18 *)
Minute= 1090;
(*%E *)
(*%F Approx18 *)
Minute= 60 * Second;
(*%E *)
__>PC *)
1983. In general I agree with you; very often I code defensively by
doing size checks in the unit initialisation, but that still doesn't
guard against (e.g.) sign-magnitude rather than 2's complement.
I'd add that another of my bêtes noires is the lack of implicit subranges
or enumerations which wrap (e.g. an enumeration of the seven days of the
week).
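The wrap-around the poster wants has to be coded by hand in Modula-2 and most of its relatives. A sketch of the usual modulo idiom in C; `Day` and `next_day` are invented names for illustration.

```c
/* Seven-day enumeration; next_day wraps the last member back to the
 * first via modulo arithmetic, which the language will not do
 * implicitly. DAY_COUNT doubles as the number of members. */
enum Day { MON, TUE, WED, THU, FRI, SAT, SUN, DAY_COUNT };

static enum Day next_day(enum Day d)
{
    return (enum Day)((d + 1) % DAY_COUNT);
}
```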
I have been wondering about/keeping an eye for a Z80 Modula-2.
What do you use ?
I did manage to locate Turbo Modula-2, and apparently it can run under
a CP/M emulator on a PC, but I have not tested it yet.
Support and fixes would be a problem with that...
I also asked Borland to add it to their Museum downloads :)
-jg
Sounds actually quite close then, good :)
-jg
there's some info about the Stony Brook compiler available at
http://www.ehm.kun.nl/~eh/SBM.htm
"Pricing is $495.00 US" - are there any special offers for students or
educational use?
> I have been wondering about/keeping an eye for a Z80 Modula-2.
> What do you use ?
> I did manage to locate Turbo Modula-2, and apparently it can run under
> a CP/M emulator on a PC, but I have not tested it yet.
> Support and fixes would be a problem with that...
I managed to run it with the emulator "22nice". I just extracted Turbo
Modula-2 and 22nice to one directory, renamed m2.com to m2.cpm (to
'hide' it from command.com) and ran the "gencom.com" utility with
m2.cpm ("gencom m2.cpm"). This produces a small .com file which can be
executed from DOS after running 22nice.com (TSR).
I did not compile anything yet - just wanted to know if it works.
1. by the way: what do you want to do with it?
2. by the way: there's another Modula-2 Compiler available for CP/M called
FTL-Modula-2.
Christoph
Did, project is on ice at present. Dave Moore's FTL, I'm afraid, which
is defunct - I can't recall who handled it in the USA. Note that I'm not
recommending it unreservedly, but if all you want is a standalone
compiler and have your own support modules... well, I've seen worse.
I've used it for a fairly large codebase shared with TopSpeed v3,
including microkernel and networking, and in general found the
combination usable; however in both cases the generated code was going
into RAM rather than ROM so I can't comment 100% on the purity.
Don't, however, be tempted to use any of the linker's tricks to e.g.
handle paged overlays - ten years ago we gave the author a lot of grief
over the bugs and only got a partially-fixed version out of him, which
required a heck of a lot of postprocessing to generate correct code.
We asked Dave if he would release the sources to us and he declined
because they were a mess. Allowing that Z80 implementations are still
cost-effective, I'm unsure what compiler I'd use today.
> I did manage to locate Turbo Modula-2, and apparently it can run
> under a CP/M emulator on a PC, but I have not tested it yet.
> Support and fixes would be a problem with that...
>
> I also asked Borland to add it to their Museum downloads :)
That will be the one that they made available for a while via Echelon
and was a direct ancestor of JPI. I'd be (very) surprised if they could
or would make it available, there were allegations at the time that
Borland gave Echelon a hard time by not getting manuals printed for it.
(I stress that this is hearsay that I cannot attribute, I am quoting it
without opining whether it is true. What's more I'm probably not worth
suing.)
We make an 80C51 Modula-2, so focus on the deeply embedded space.
With the somewhat re-discovery of Z80 devices ( see Zilog eZ80,
and the not-quite-binary clones of Rabbit R2000 and R3000 ), there
would seem to be an opening there.
Ideally, such a compiler should have access to the code generator, but
not essential.
Also, bottom end C86 class devices are getting EOL'd, and the
'drag effect' of the top end results in short design lives, and
feature creep.
CP/M or ZCPR and the newer Z80s could give a stable, embedded platform
for medium tasks : too complex for single chip 80C51, too simple for
a 32 bit BGA processor C++/Linux...
> 2. by the way: there's another Modula-2 Compiler available for CP/M called
> FTL-Modula-2.
Have you used this ?
- jg
See my other post. Newer devices like Rabbit 3000 and eZ80 can avoid
those fudges.
> We asked Dave if he would release the sources to us and he declined
> because they were a mess. Allowing that Z80 implementations are still
> cost-effective, I'm unsure what compiler I'd use today.
What is it written in ?
He should not be too worried; 'mess' can be applied to all source
codes, especially those with long historical roots :)
> > I did manage to locate Turbo Modula-2, and apparently it can run
> > under a CP/M emulator on a PC, but I have not tested it yet.
> > Support and fixes would be a problem with that...
> >
> > I also asked Borland to add it to their Museum downloads :)
>
> That will be the one that they made available for a while via Echelon
> and was a direct ancestor of JPI. I'd be (very) surprised if they could
> or would make it available, there were allegations at the time that
> Borland gave Echelon a hard time by not getting manuals printed for it.
> (I stress that this is hearsay that I cannot attribute, I am quoting it
> without opining whether it is true. What's more I'm probably not worth
> suing.)
Borland did not dismiss it out of hand (but nothing has happened...)
I even gave them a URL where I found some Turbo M2 stuff.
The TM2 fork may have been 'signed over' to JPI, but given the
present Topspeed status, I can't see that being an issue.
-jg
... don't have any idea what this is all about (programming newbie)...
> > 2. by the way: there's another Modula-2 Compiler available for CP/M
called
> > FTL-Modula-2.
> Have you used this ?
nope. I just compiled the (simple) examples from Turbo Modula-2 and
was really amazed (Turbo Pascal 3 feeling). Compiling and running
simple things like hello.mod is not a problem. Starting the editor
resulted in unusable output, but the WordStar commands work - 22nice
wants to be configured, but I do not have the time to play with it
(exams...)
FTL and Turbo are available from
http://www.retroarchive.org/cpm/lang/lang.htm
(different FTL-versions), also lots of other compilers there
Christoph
Mod your IL generator to push a procedure's frame before it calls the
next one, then.
That's how you do it on x86 too :-)
>> They compile M2 as C: everytime importing all needed def-files (this is
>> slllooowww)
>
> the compiler is not fast, I grant. But I'm not convinced this is due
> to parsing imported definition modules. There was a paper written
> 12-15 years ago which did some analysis of this technique - the
> performance penalty was IIRC minimal. Also consider that parsing a
> few hundred (max) lines of definition modules shouldn't really cause
> much of a performance problem nowadays..
Is gm2 capable of compiling a main program with its modules in one run?
If so, then don't worry about the above, since it will only be parsed once
anyway.
One of the main reasons for producing the ISO standard was to reduce the
number of more granular problems :-) You can isolate them entirely,
but of course this imposes an efficiency penalty. I don't deny the need
for CC, I just try to restrict its use as far as possible. Much depends
on the type of software you are writing - embedded software is going to
throw up a lot of hardware dependencies that need CC for efficient
operation.
>>I don't know why CC wasn't standardised, though. Maybe nobody present at
>>the inaugural committee meeting asked for it. It could always have been
>>added later as an extension, like OO Modula-2 and Generic Modula-2.
>>Still could be, if anyone wants to volunteer to do the work!
>>
>
> Sounds like Norm has already done this :)
No, what he has done (which I applaud) is to make his compiler accept
another vendor's syntax. What could still be done is to define a
standard, either through the ISO process (which would take lots of
work), or, more sensibly, by the remaining few M2 compiler vendors
getting together and agreeing on a de facto standard that all will support.
Martin
Not in this case. Linker output was WRONG.
> > We asked Dave if he would release the sources to us and he declined
> > because they were a mess, allowing that Z80 implementations are
> > still
> > cost-effective I'm unsure what compiler I'd use today.
>
> What is it written in ?
Night Parrot Pascal, I believe.
Practical, yes, hence my comment that I would have voted for a
multiplicity of types. Theoretical - I'm not so sure. Taking your example:
> Here's an example that keeps hitting me over and over again in my
> own programming. Let A, B, and C be 16-bit numbers, and suppose
> you know a priori that the quantity A*B/C will fit into 16 bits. How
> do you write the code such that it won't overflow, assuming a
> 16-bit processor?
Assume the compiler decides to order the calculation as (A*B)/C. This
needs a 16 x 16 -> 32 bit multiply followed by a 32 / 16 -> 32 bit
division, followed by a 16-bit range check before assigning the result
to a 16-bit storage location. Not the most efficient solution, I agree, but
doable.
> Now, it's possible that that problem could be solved neatly in a
> language that took ranges as the basic integer type, but to make it
> work, and work portably, _you have to write into the standard the
> rules about intermediate results_, so that every compiler adopts the
> same rules. In other words, you must NOT trust the compiler to find
> the best storage size, because if you do that you don't get portability
> across compilers. You have to tell the compiler writers exactly
> what rules to follow.
I used storage size to refer to the precision of values stored in
variables, not intermediate results. As written, the ISO standard
requires all intermediate results to be held to the full precision of
the base type (INTEGER or CARDINAL), so all compilers should produce the
same result providing the base type has sufficient precision (ignoring
order of evaluation, which opens up a whole new can of worms!). Again,
this is not going to produce the most efficient code...
> 2. The big unsolved practical problems: if you need to write
> embedded software or device drivers you have to interact with
> the hardware. If you're writing a program that does input and output
> then you have to interact with the user, and in most cases that
> means you have to interact with the API of your operating system.
> In any of those cases, you need a hidebound guarantee that the things
> you're passing have a specified number of bits. Not a specified
> numeric range, but precisely the number of bits that the other end
> expects.
Agreed, but this comes down to the quality of the compiler. If I wrote
VAR x : [0..255];
and compiled it for a machine that supported byte addressed storage, I
would expect x to be stored in exactly 8 bits.
>>It also increases the portability of
>>programs, so I can see the appeal of this approach.
>>
>
> It's just occurred to me that "portability" can have several different
> meanings. Yes, scrapping bit-width notions does increase the
> portability between binary machines and machines that, for example,
> use base-5 arithmetic. But when was the last time you saw a
> non-binary machine? This was one of those ideas of the 1960s
> that never panned out.
Perhaps I should have expressed this differently. What I mean is that a
program written using subranges will compile and run correctly on any
machine, regardless of the size and number of storage sizes supported,
providing the subranges do not exceed the size of the base types. Of
course, if you wish to interface to external hardware or software,
actual storage sizes become important, but these interfaces are
inherently non-portable parts of a program.
All compilers support all extensions. I am not sure why I would limit
such a thing, especially since I would have to alter code to stop it
from being allowed.
As a quick exposition of the subject:
The enumeration extension looks like
TYPE
enum =
(one,
two = 53,
three
);
ORD(one) = 0
ORD(two) = 53
ORD(three) = 54
(* priorities for path lookups *)
GtkPathPriorityType =
(
GTK_PATH_PRIO_LOWEST = 0,
GTK_PATH_PRIO_GTK = 4,
GTK_PATH_PRIO_APPLICATION = 8,
GTK_PATH_PRIO_RC = 12,
GTK_PATH_PRIO_HIGHEST = 15,
GTK_PATH_PRIO_MASK = 00fh
);
Bitfields
TYPE
GtkContainer =
RECORD
widget : GtkWidget;
focus_child : pGtkWidget;
BITFIELDS
border_width : guint BY 16;
need_resize : BOOLEAN BY 1;
resize_mode : guint BY 2;
reallocate_redraws : BOOLEAN BY 1;
END;
(* The list of children that requested a resize *)
resize_widgets : pGSList;
END;
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Jim Granville" <jim.gr...@designtools.co.nz> wrote in message
news:3CFE84...@designtools.co.nz...
Should be true for any compiler.
Parsing Win32 APIs will slow things down, even with a compiler like
ours, since the API is so large:
WIN32.DEF = 8271 lines (core kernel APIs)
WINUSER = 7915 lines (windowing APIs)
WINGDI = 5638 (graphics drawing APIs)
This only scratches the surface.
For "Unix" GUI
GTK.DEF = 13815 (all GTK. GDK adds about 4000 more)
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Gaius Mulley" <ga...@glam.ac.uk> wrote in message
news:87n0u83...@floppsie.comp.glam.ac.uk...
No problem with comments and the CC syntax we support.
--
Norman Black
Stony Brook Software
nospam => stonybrk
<mark...@cix.compulink.co.uk> wrote in message
news:adobs6$4a8$1...@thorium.cix.co.uk...
21000 indeed. The entire Windows RTL is about 37000 lines over here.
Even with headers in binary form, the Win32 API slows things down.
> This only scratches the surface.
The full translation would be near to 100 MB.
> For "Unix" GUI
> GTK.DEF = 13815 (all GTK. GDK adds about 4000 more)
Which version?
[...]
> They compile M2 as C: everytime importing all needed def-files (this is
> slllooowww)
> They introduced the TopSpeedM2-like base module library (which is not
> ISO, but one can compile ISO himself)
>
> But there is an idea which I like: The compiling process would be
> controlled by the GNUproject-like Makefile (it is way off, though, right
> now)
This is probably the main reason it is so slow. It forces the compiler
to be loaded and reinitialised, and all headers reparsed, for _each_
module.
> which means that the M2 code would easily be mixed with the C/++ code
> within the same project. And not only M2 - there is also Ada and Fortran.
This has nothing to do with Make. One can mix with GCC code using other
compilers too. But for e.g. FreePascal you have to make header units for
the other language parts, and fix up calling conventions.
There is one advantage in that though, one can read C headers without
converting. (requires less header porting).
I wouldn't bet on Ada though. Most GCC languages allow interoperability
with C, but not necessarily with each other.
> I haven't check it yet for the alignment but there is no compiler option
> concernig this anyway so I think that it does not support
> less-than-processor-word alignment (like gcc by default)
I think that indeed will be a problem. GPC still doesn't have this right
either.
> Maybe it is worth to look at it and to participate/influence that they
> finally make useful standard M2 compiler instead of another dead branch.
I _won't_ code in C. Make a self-contained compiler, and I'll think
about it :-)
IMHO gcc is too focussed on C, and it takes a tremendous amount of manpower
to get even the base core (plain simple standard M2) working, including
things like alignment. (which is essential to keep old programs binary
compatible when ported to gm2)
The problem is also that one asks M2'ers to code the compiler in C,
which is also not really encouraging, since core people will have to
spend a *really* large amount of time with it, coding in C.
Norman already seconded this. That leaves me to "third" it? Funny language
English :_)
But it is true, and can be seen not only in the Modula-2 camp;
fragmentation is even worse in Pascal, where the ISO branch and the
usable branch (Turbo Pascal/Delphi) totally split.
That page with information is the web site of one of our key beta
testers. Egbert is the author of the IDL compiler used in our system for
creation of COM objects. He prompted us to adjust our object mapping to
allow mapping a COM object directly to a M2 CLASS type. He also
contributed the OO version (CLASS types) of the Win32 Ole2 interface,
HTML help and various cryptography modules in our runtime library.
We have no special offers for students, with one exception. Trinity
Western University. A long time ago we made a deal with Rick Sutcliffe.
We were moving to ISO and he gave us his ISO RTL source for a p1 Mac
system, that we could modify to run on our platform(s), and we gave him
a license to use our compiler for their classes. We also gave the
students there a price break, should they want something for working at
home. Getting a full ISO RTL source base saved us time converting our
system to ISO. At the time Rick was my primary contact to actual ISO
committee members for any question I might have had while modifying the
compiler for ISO.
1.2 right now.
I closely monitor the GTK mailing lists and 2.0 is having some growing
pains. Two maintenance releases in a month or so and there seems to be
some performance issues. Therefore doing 2.x interfaces is not a hot
burner issue as yet. Probably in a few months or so we will get around
to it.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Marco van de Voort" <mar...@toad.stack.nl> wrote in message
news:slrnag141n....@toad.stack.nl...
Our entire Win32 API DEF file set is currently at 72884 lines, of which
60902 are non-blank. !!! And we are behind the curve. !!!
I have yet to update our headers from the current SDK .h files. Luckily,
99.99% of people do not care about that latest bleeding edge API which
only the brand new operating system supports.
Out of curiosity...What do you mean by "binary form". Our "SYM" files
are simply tokenized source with some minor extra information.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"Marco van de Voort" <mar...@toad.stack.nl> wrote in message
news:slrnag141n....@toad.stack.nl...
We only put the base in there. Also that version doesn't have interface
support. We can however use the Jedi (Delphi open source group) headers
for 98%, which are another 200k lines.
> I have yet to update our headers from the current SDK .h files. Luckily,
> 99.99% of people do not care about that latest bleeding edge API which
> only the brand new operating system supports.
I do some Win2000+ specific programming, specially active directory.
> Out of curiosity...What do you mean by "binary form". Our "SYM" files
> are simply tokenized source with some minor extra information.
Ours are streamed parse trees + symbol tables AFAIK. In the future it
can even contain code (cross-unit inlining).
The rest is info about defines (recompile if preprocessor defines
change), dates, endian info, target, checksums, compiler version, ppu
format etc.
We are also at 1.2, fixed for 1.3 (windows GTK)
> I closely monitor the GTK mailing lists and 2.0 is having some growing
> pains. Two maintenance releases in a month or so and there seems to be
> some performance issues. Therefore doing 2.x interfaces is not a hot
> burner issue as yet. Probably in a few months or so we will get around
> to it.
We also hope that someone will submit it :-)
Well, this one is way out of sync with the facts.
C++ is very close to 8 times the complexity of M3 (Get
out the ARM and count pages, language definition only).
C++ lacks several semantically important features, e.g.
GC, threads, separate interfaces, and has nothing close
to opaque types and partial revelations.
But worst of all, much of the 390 or so pages of C++
definition are what I call "affirmative undefinedness",
meaning page after page of arcane detail on all the
things you could write that are "undefined", where
"undefined" is a precisely defined technical term meaning
"if you do this you can expect neither a compile-time nor
runtime error message, nor predictable behaviour."
--
Rodney M. Bates
> TYPE
> GtkContainer =
> RECORD
> widget : GtkWidget;
>
> focus_child : pGtkWidget;
>
> BITFIELDS
> border_width : guint BY 16;
> need_resize : BOOLEAN BY 1;
> resize_mode : guint BY 2;
> reallocate_redraws : BOOLEAN BY 1;
> END;
Can you explain this syntax ? Are the BY values number of bits? Counted from
which direction?
Or when directly linking to another language, including the kernel or
OS-userland interfaces.
Syntax is really just like C. The value after BY is the number of bits
for the given field. Bits are allocated according to the "typical" C
algorithm for bit allocation, so translating a C struct is a
no-brainer. I checked our allocation against MS VC, GNU and the Sun
Forte compilers. So big and little endian work fine. I did not invent
anything on my own; I mimicked C, which was the whole point of adding
bitfields as an extension.
This is going to be a crappy explanation, but here it is anyway.
Bits are allocated and packed into an aligned boundary depending on the
type of the bitfield. Bits are packed left to right: little endian
0..31, big endian 31..0. This means the bitfields have the same order
in memory regardless of endianness. A bitfield is fetched as a full
value of the bitfield's type, and then masked to access the individual
field bits. This means it is possible for a bitfield to "overlap" the
non-bitfield elements of the record. This is no issue, since it is a
bitfield after all. A bitfield must fit fully within an aligned
native-type boundary of the bitfield's type.
For example, the "resize_mode" field is a 32-bit integer so it is
loaded as a 32-bit type. "need_resize" is an 8-bit Boolean so it is
loaded as an 8-bit type. You will notice that all four bitfields can
fit into the same 32 bits in the record.
Guidelines for CC (and compiler directives in general) were part of
a document called "Interfacing Modula-2 to C" which was apparently a
side project of the ISO committee but did not become part of the standard.
A quick search shows that a PDF version is available through the Web.
I will download that and have a look. Maybe I will find something
interesting.
I am NOT trying to link m2 to C libraries.
Instead I wish to translate C to m2.
/cmg
Yes. I copied the declaration from the DEF module into this message.
Also note that the Win32 API has a wsprintf API call. The API in our
RTL (FormatString) exists on every platform we support.
PROCEDURE FormatString(formatStr : ARRAY OF CHAR;
                       VAR OUT destStr : ARRAY OF CHAR) : BOOLEAN
    [RightToLeft, LEAVES, VARIABLE];
(* destStr can be the same string as formatStr. *)
(* this procedure takes a variable number of parameters *)
(* to accommodate the contents of the format string. *)
(* note that when you pass a numeric constant you are *)
(* passing a longint, since in Stony Brook compilers all constants *)
(* are longint sized until typed by an assignment or expression. *)
(* there is no formal parameter for a variable parameter to type *)
(* check the constant against. *)
(* use ORD or INT if you need a CARDINAL or INTEGER sized constant *)
(* passed as one of the variable parameters. *)
(* the following character combinations allow simple entry of some *)
(* control characters into a format string. *)
(* control characters are prefixed by a backslash character. *)
(* \n = new line
   on Unix systems this is a single linefeed character CHR(10).
   on other systems this outputs two characters CHR(13)CHR(10)
*)
(* \t = horizontal tab *)
(* \v = vertical tab *)
(* \f = form feed *)
(* \x000 = character in hexadecimal form *)
(* if you want a backslash character in the output string then you *)
(* need to use a double backslash character sequence \\. *)
(* if an unknown control specifier follows a backslash it is simply *)
(* output as itself; remember that the \ preceding the char is not *)
(* output. *)
(*
Format specifications, discussed below, always begin with a percent
sign (%). If a percent sign is followed by a character that has no
meaning as a format field, the function immediately returns FALSE,
unless the next character is also a percent sign, in which case a
single percent sign is placed into the output string.

The format-control string is read from left to right. When the first
format specification (if any) is encountered, it causes the value of
the first argument after the format-control string to be converted and
copied to the output string according to the format specification.
The second format specification causes the second argument to be
converted and copied, and so on. If there are more arguments than
format specifications, the extra arguments are ignored. If there are
not enough arguments for all of the format specifications, the results
are undefined.

The function will return FALSE if the output string cannot fully
contain the results of the formatting.
An illegal format field will cause a return of FALSE.
A hex control character that is too large will cause a return of FALSE.
A field width that is too large will cause a return of FALSE.
Otherwise the function returns TRUE.

A format specification has the following form:

    %[-]['][width]type

Each field is a single character or a number signifying a particular
format option. The type character that appears after the last optional
format field determines whether the associated argument is interpreted
as a string or a number. The simplest format specification contains
only the percent sign and a type character (for example, %s).
The optional fields control other aspects of the formatting.
Following are the optional and required fields and their meanings:

Field   Meaning
-       Pad the output to the right to fill the field width, thus
        justifying the output to the left. If this field is omitted,
        the output is padded to the left, thus justifying it to the
        right.
'       The single quote character is followed by a character which
        will be used as the padding character if padding of the field
        is necessary. The default padding character is a blank space.
width   A field width is specified by a cardinal constant or an
        asterisk. An asterisk specifies that the field width is a
        variable and is the next parameter in the format parameters.
        The parameter type is CARDINAL. Width must be <= 256.
        Copy the specified minimum number of characters to the output
        string. The width field is a cardinal and the default is zero.
        The width specification never causes a value to be truncated;
        if the number of characters in the output value is greater
        than the specified width, or if the width field is not
        present, all characters of the value are printed. Pad
        characters will be output as necessary to fill the field
        width if the value does not fully occupy the specified width.
type    Output the corresponding argument as a string or a number.
        This field can be any of the following characters (the output
        type specifiers are not case sensitive):
        c   the type is CARDINAL, output is in decimal
        h   the type is CARDINAL, output is in hexadecimal
        i   the type is INTEGER, output is in decimal
        l   the type is LONGINT, output is in decimal
        s   the type is a null terminated ARRAY OF CHAR.
            The terminating null character is mandatory.
*)
PROCEDURE FormatStringEx(formatStr : ARRAY OF CHAR;
                         VAR OUT destStr : ARRAY OF CHAR;
                         args : ADDRESS) : BOOLEAN;
(*
as FormatString except the base address of the "variable" arguments is
passed in the args parameter.
this function can be useful within a procedure that accepts a variable
number of arguments and still wants to call FormatString with those
variable arguments.
in this case do the following:

PROCEDURE MyProc(...);
VAR
    addr : ADDRESS;
BEGIN
    VA_START(addr);
    ...
    ok := FormatStringEx(format, dest, addr);
END MyProc;
*)
C does not have the single quote padding specifier, at least not in the
VC documentation I just looked at.
Our FormatString does not emulate C sprintf. It is similar to C
sprintf, AKA offers the same functionality.
I have no idea where I got the idea for the single quote padding
character specifier.
--
Norman Black
Stony Brook Software
nospam => stonybrk
"William Burrow" <no....@completely.invalid> wrote in message
news:slrnahi6ak...@molokai.surfcity.nb.ca...
> On Tue, 25 Jun 2002 12:47:20 -0700 in comp.lang.modula2,
> Norman Black <nos...@ix.netcom.com> wrote:
> > "6079 SmithW" <not_a...@orwell.org> wrote in message
> > news:MPG.178261df8...@news-server.san.rr.com...
> >> Q. Does SB m2 (Windows version) have a library with the functional
> >> equivalent of C's printf and scanf function?
> >
> > Yes. I copied the declaration from the DEF module to this message.
Also
> > note that the Win32 API has a wsprintf API call. The API in our RTL
> > (FormatString) exists everywhere we support.
>
> Gosh, I learned some stuff about C reading that, in particular the
> single quote part of a specifier....
>
>
> --
> Copyright 2002 William Burrow o
> Uptime means money. ~ /\
> ~ ()>()
I couldn't find the HTML version a few days ago, but it turns out it is
still available online after all:
http://www.zi.biologie.uni-muenchen.de/~enger/SC22WG13/im2c-981130.html
--
- David.Thompson 1 now at worldnet.att.net