RAD -> Low-Code -> ?

Pascal Bergmann

Aug 15, 2018, 3:36:13 AM
to Eve talk
Do you remember Rapid Application Development? The first language that comes to my mind is Visual Basic. You dragged and dropped UI elements on forms, just coded some details and compiled a big EXE that got everything in it. Copy, paste, deployed!

RAD developed into Low-Code platforms, like Mendix. Here, you not only have some visual editors, you also make good use of the modeling languages you speak in a business context. BPMN. UML. The platform takes all those artifacts that describe different perspectives of your solution and builds an app out of that.

What do you think will be the next realistic step in that direction? We can keep the business context for now.

Liam Proven

Aug 15, 2018, 7:51:44 AM
to eve-...@googlegroups.com
Apple's Dylan language was very nearly an important step in that
direction. It combined a VB-like interface builder with a Lisp-like
but far more readable infix-notation language which had an
exceptionally rich IDE, far ahead of its time.

Sadly almost all the screenshots and walkthroughs are gone now.

https://opendylan.org/history/apple-dylan/screenshots/misc.html

https://opendylan.org/history/apple-dylan/screenshots/dynamic.html

For instance it had very rich mechanisms for browsing through its
object libraries:

https://opendylan.org/history/apple-dylan/screenshots/browsers.html

https://opendylan.org/history/apple-dylan/screenshots/index.html

More:

https://opendylan.org/history/apple-dylan/eulogy.html

https://web.archive.org/web/20061016135501/http://osteele.com/museum/apple-dylan

https://web.archive.org/web/20060101181134/http://apple.computerhistory.org/discuss/msgReader%24186?mode=day

It's possible to run the tech preview in an emulator such as SheepShaver:

https://www.macintoshrepository.org/1358-apple-dylan-tr



--
Liam Proven - Profile: https://about.me/liamproven
Email: lpr...@cix.co.uk - Google Mail/Hangouts/Plus: lpr...@gmail.com
Twitter/Facebook/Flickr: lproven - Skype/LinkedIn: liamproven
UK: +44 7939-087884 - ČR (+ WhatsApp/Telegram/Signal): +420 702 829 053

magicmo...@gmail.com

Aug 22, 2018, 4:07:05 AM
to Eve talk
It is very tempting to imagine that some miracle language somehow didn't get out of Apple's crucible. The Dylan/Ralph language did not build reliable products.
In this amusing history of the Newton:  https://gizmodo.com/5452193/the-story-behind-apples-newton, you can read about how even late in the game Apple had thousands of outstanding bugs. The fact that what is now considered a very simple device, had so many errors, is a sign the implementation language is not addressing the single most important aspect of programming, which is helping the user avoid making mistakes. Human error is the problem with computer programming, and some languages, like Modula-2 for example, were designed by Prof. Wirth to avoid many of the most vexing problems in C: null pointers, numeric overflow, array bounds violations, etc., so that a program can be debugged much faster. As one of the users of Modula-2 when everyone else was using C, my programs were half the number of lines, and at least 5x more reliable. To this day large C programs (like Windows itself) can be compared to Swiss cheese with all the bugs still in them.
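
As a minimal sketch of the difference, this is the kind of check a Wirth-style language performs and C omits, written in Go (used here purely as a stand-in for a bounds-checked language in that lineage; the slice and values are made up for illustration):

    package main

    import "fmt"

    func main() {
        scores := []int{10, 20, 30}

        // In C, scores[3] would silently read or clobber adjacent memory.
        // In a bounds-checked language the bad index is trapped immediately.
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("caught:", r) // e.g. "index out of range [3] with length 3"
            }
        }()
        fmt.Println(scores[3])
    }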

Evidently Ralph/Dylan used one of the deadliest features of Lisp, which is self-modifying code. There is no evidence in the 50 years since we have had LISP that large systems built in a self-modifying language can be maintained by someone other than the original author. When you are programming you have to model in your head what the machine is going to do in response to your instructions. If you code in a self-modifying language, that means second-level thinking. That immediately narrows down the potential labor pool, and although self-modifying languages like LISP and FORTH are shockingly compact, they will forever remain niche languages. Yes, we may eventually evolve methods of handling self-modifying systems, but at present there is no mathematically sound method available, and the dream of having A write B which then does C, or even higher levels of nesting, will have to wait for the future. LISP is like a bad penny that keeps circulating over and over; it never goes away, and each generation of programmers gets excited about some derivative of LISP, but then realizes it isn't practical.

At some point Jobs canned Dylan inside Apple, and it wasn't that he didn't put a lot of resources into it. Jobs was very aware of the power of languages, but he was a practical man and if it didn't make products more reliable, it has to be considered a failure. People constantly show benchmarks on how fast languages are, or how small they are, but you never see the error probability charts for a language, or measurements showing the sturdiness of a language. If you were to take a big glob of source code, and randomly inject keystrokes, how many of them could be found? Some languages like JavaScript (which is actually an uncredited copy of Macromedia/Adobe's ActionScript 2) are so flabby that they only check errors at execution time, which forces people to use a "linter" to find errors, because the compiler does such a crappy job.


Liam Proven

Aug 26, 2018, 12:54:16 PM
to eve-...@googlegroups.com
On Wed, 22 Aug 2018 at 10:07, <magicmo...@gmail.com> wrote:
>
> It is very tempting to imagine that some miracle language somehow didn't get out of Apple's crucible.

Is it? I suppose that's one interpretation. It's not mine.

> The Dylan/Ralph language did not build reliable products.

"This is a generalisation. All generalisations are false. Therefore
this statement is false."

Ha ha, only serious.

Firstly, the Newton project did deliver working, viable products. I
own 2 Newtons, an OMP and a 2100 with NewtonOS 2.0.

I love them, although they were far too far ahead of their time.

> In this amusing history of the Newton: https://gizmodo.com/5452193/the-story-behind-apples-newton, you can
> read about how even late in the game Apple had thousands of outstanding bugs.

Oh, piffle. That is true of every released OS ever created.

They worked. They worked astonishingly well for the time.

Did you own Newtons? Ever use a Newton? Use it for long enough for it
to learn your handwriting?

If not, then I suggest that you don't really know and are relying on
hearsay and poor oral histories.

> The fact that what is now considered a very simple device,

Right, so you _really_ don't know much about the Newton.

For its time, it was arguably the most complex personal computing
product ever built.

See the comments here:

http://www.loper-os.org/?p=231

And here:

http://www.loper-os.org/?p=568

There were in essence 2 versions of the Newton.

The first would be a pocket Lisp Machine, with single-level store, no
artificial distinction between filesystems and RAM and so on. But
rather than Lisp, John McCarthy's original low-level language, the
Apple team finished McCarthy's work by building his
planned-but-never-implemented higher-level language on top. Dylan is
an infix-notation language, unlike the prefix-notation Lisp. It looks
and works more like a conventional programming language, readable and
easy.

This means that it lost Lisp's homoiconicity. Code was no longer
simply data, and Lisp's self-modifying macros became a lot harder.
That was not the important aspect.

> had so many errors, is a sign the implementation language is not addressing the single most important aspect of programming, which is helping the user avoid making mistakes.

It's _one_ of them. Whether it's _the_ most important is dubious.

It is, like everything in life, a compromise.

> Human error is the problem with computer programming,

One of the problems.

> and some languages, like Modula-2 for example, were designed by Prof. Wirth to avoid many of the most vexing
> problems in C: null pointers, numeric overflow, array bounds violations, etc., so that a program can be debugged
> much faster. As one of the users of Modula-2 when everyone else was using C, my programs were half the number
> of lines, and at least 5x more reliable. To this day large C programs (like Windows itself) can be compared to
> Swiss cheese with all the bugs still in them.

Overall, yes, I'd agree with that.

One of my current areas of research is what followed Modula-2: Oberon,
and the Oberon OS.

I wrote about it here:

https://www.theregister.co.uk/2015/12/02/pi_versus_oberton/

More info and links:

https://liam-on-linux.livejournal.com/46523.html

Oberon is a fascinating OS and what it grew into -- A2/Bluebottle --
perhaps even more so.

Here's a pretty good overview:

https://www.progtools.org/article.php?name=oberon&section=compilers&type=tutorial

> Evidently Ralph/Dylan used one of the deadliest features of Lisp, which is self-modifying code.

"Evidently"?

This would seem to betray a lack of knowledge of the language.

Hint: it doesn't.

> There is no evidence in the 50 years since we have had LISP that large systems built in a self-modifying language
> can be maintained by someone other than the original author. When you are programming you have to model in
> your head what the machine is going to do in response to your instructions. If you code in a self-modifying language, that
> means second-level thinking. That immediately narrows down the potential labor pool, and although self-
> modifying languages like LISP and FORTH are shockingly compact, they will forever remain niche languages.

This, in part, was exactly my point.

Dylan is not some cryptic language for solitary geniuses, like Lisp,
Forth or APL.

It's as readable as Visual Basic.

> Yes, we may eventually evolve methods of handling self-modifying systems, but at present there is no
> mathematically sound method available, and the dream of having A write B which then does C, or even higher
> levels of nesting, will have to wait for the future. LISP is like a bad penny that keeps circulating over and over; it
> never goes away, and each generation of programmers gets excited about some derivative of LISP, but then
> realizes it isn't practical.

Because most of them lack the courage to break away from the original
into new directions.

> At some point Jobs canned Dylan inside Apple, and it wasn't that he didn't put a lot of resources into it. Jobs was
> very aware of the power of languages, but he was a practical man and if it didn't make products more reliable, it
> has to be considered a failure.

Again, you criticise something of which you seem to know little.

*None* of this was under Jobs.

The Newton was John Sculley's baby, the man who fired Jobs. It was
long gone before Jobs returned.

> People constantly show benchmarks on how fast languages are, or how small they are, but you never see the
> error probability charts for a language

That *would* be very interesting.

> , or measurements showing the sturdiness of a language. If you were to take a big glob of source code, and
> randomly inject keystrokes, how many of them could be found?

I don't think there is or has ever been a language which could withstand that.

> Some languages like JavaScript (which is actually an uncredited copy of Macromedia/Adobe's ActionScript 2)

That is vastly too sweeping. It contains a kernel of truth, but no more.

> are so flabby that they only check errors at execution time, which forces people to use a "linter" to find errors,
> because the compiler does such a crappy job.

Sounds about right. But it's not the point, really...

We need much more powerful tools, and we need ones that are also safer, as well.

Lisp is _one_ step in that direction. Your criticisms smack of unreasoning bias.

Looking back, the Lisp Machines are one of the most-loved,
most-missed, and to their admirers, most powerful, capable computers
ever created. I think that strongly counters your assertion that Lisp
is unmaintainable and can't build large systems.

magicmo...@gmail.com

Aug 26, 2018, 5:32:13 PM
to Eve talk




I can assure you that Actionscript is indeed the source of JavaScript. With a simple find/replace script you can convert Actionscript 3 code into JavaScript code. There is less than 1% difference now between the two languages, as JavaScript has gradually added in the missing parts of Actionscript 3 (except typing), like constants, module imports, etc. They can't ever admit it because look at the nasty lawsuits from Oracle over Java in Android. So people pretend that JavaScript just magically sprang up in a few days. Look at the quirks of the date functions to see how closely it was copied; the Month value is based on 0 in both but days start at 1. Having converted tens of thousands of lines to JavaScript, trust me, they are 99% the same. Macromedia did pioneering work that should be recognized more. They were arguably even better programmers than Adobe, which didn't even write the original Photoshop, which came from John Knoll at Industrial Light and Magic, George Lucas's R&D division.

The Newton as a machine was indeed far ahead of its time. It is quite dangerous to be ahead of your time. My father invested his entire fortune in electric cars in 1970... oops, bad timing to be too early; 50 years later electric vehicles have a whopping 1% market share. But he saw the future. I am sure that Sculley didn't invent the Newton, it was probably in gestation for years. You could easily argue that it reincarnated as the Galaxy Note, and their handwriting recognition which needs no training, is superb. The reason the Note is so popular in Korea is that it recognizes Korean writing very well, hence its massive market share (#1 phone in the country). In the case of English, where we have a tiny 26+26-letter alphabet, the benefit of recognition is lower, and devices with a good keyboard like the BlackBerry were far more popular. The Korean alphabet has thousands of letters so handwriting recognition is a big win for that language.

The Newton had a 20 MHz processor and 640 Kbytes of RAM, and frankly for those tight constraints they should have written it in assembly language. It was a mistake to attempt a fancy slow language that gobbles RAM. Other than AutoCAD, which uses LISP as its internal language, I cannot think of another successful LISP product in broad use. The original WordStar word processor was written entirely in Assembler, and I consider it the greatest programming feat ever, because it could run in 64 Kbytes of RAM, which is nothing. The genius author wrote in an amazingly compact way. You couldn't do it in C in 10 times that space. The authors of tens of thousands of popular products often had free choice of any language they wished, and when it comes to that critical decision at the beginning of a project, for whatever reason, those who picked LISP did not win the race to build reliable, easy-to-use products. After working on a gigantic multi-hundred-thousand line project to build WordStar 2000, which was done in C, I vowed never to work again in C, and found Modula-2 to be greatly superior. Oberon, which was not just a language but an operating system for the Lilith computer, was intriguing, but unfortunately Oberon 1 did not have enumerated types, as Wirth had foolishly removed them, and I had such a huge code base I could not switch over. He corrected that mistake in a later revision to Oberon. Imagine if Apple or Microsoft had used a Wirth language instead? Apple, up to a certain version of the Mac OS, programmed entirely in Pascal, but when Jobs switched to the Carnegie Mellon version of UNIX, he forced a switch to C, and the reliability of the Mac OS took an immediate and noticeable drop.

Backus called LISP a transformation language, and he dismissed it as a dead end in his famous Turing Award lecture. He was hoping for the era of interchangeable parts. The Oberon system had a fantastic way of dynamic linking which was far superior to the MS DLL approach, and the shared libraries of UNIX. It is most unfortunate that people ignore his work today and instead are now obsessing over functional languages with category theory overlays, and nonsensical terms like monads, monoids, and functors are bandied about.

I have built products in LISP, and an extra comma or period has drastic consequences, while an extra period or comma will more often than not be flagged at compilation time in a Wirth language. That no one is doing measurements of language fragility is a sign of the blindness of our industry, which values billable hours over a reliable finished product. In every period of computer history, where multiple languages were available, the industry as a whole picked the most verbose one that would generate the largest number of billable hours. First they chose COBOL over the superior FORTRAN (and Algol, and others), ignored PL/1, and then picked Java over the alternatives, which is the COBOL of our time.

I am working now with languages that have outrageously high overheads, where a simple true/false/undefined/error value which theoretically should only take 2 bits, takes probably 50 bytes of overhead. But there is a tradeoff between safety/reliability and speed/size, and in today's world what we need is reliable software that can be delivered on time and works flawlessly. 98% of the computers are idle at any time, so why are we still optimizing for size and speed? The focus should be on eliminating programmer error. After all, debugging takes approximately 85% of the total time of programming, so we desperately need to go back and revisit the principles that generated Prof. Wirth's languages.
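
A rough sketch of that overhead claim, in Go; the type name is invented for illustration, and the numbers are per-value storage only, ignoring the allocator and GC bookkeeping that dynamic languages add on top:

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Tri is a four-valued flag (false/true/undefined/error) packed into one byte.
    type Tri uint8

    const (
        False Tri = iota
        True
        Undefined
        Error
    )

    func main() {
        var compact Tri = Undefined
        var boxed interface{} = Undefined // boxed: the interface header alone is two machine words

        fmt.Println(unsafe.Sizeof(compact)) // 1 (byte)
        fmt.Println(unsafe.Sizeof(boxed))   // 16 on 64-bit platforms, before any heap allocation
    }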

magicmo...@gmail.com

Aug 28, 2018, 3:38:56 AM
to Eve talk
Don't get me wrong; I have the greatest admiration for the early programmers who had to deal with ridiculously small memory spaces. An entire word processor running in 64 Kbytes, when nowadays the sweathog Google Chrome will casually gobble up 600 Mbytes of RAM for a single web page, 10,000 times more memory. In fact the absurdly large required-for-the-App-Store 1024x1024 icon for an iOS application today, when unpacked, is larger than WordStar! That is mind blowing actually. Because we can, we waste massive amounts of computer resources making things pretty. Computers don't feel that much faster, because what the hardware people giveth the software people taketh away, hah.

And as for Dylan, any language which adds a very powerful macro language, which gives you the power to do subtle and sneaky transformations of the code, is a temptation that is best not presented to weak humans, who cannot resist the allure of showing how clever they are. One could argue that COBOL was favored by the business community because it was immune to skill.

stew...@gmail.com

Aug 28, 2018, 4:30:04 AM
to Eve talk
As a non-programmer I'm reading this discussion with interested incomprehension :)

However..
The Korean alphabet only has 24 letters. They are arranged into syllable blocks.
스티븐 is Su Tee Bun and the closest approximation to Stephen.

Carry on!

magicmo...@gmail.com

Aug 28, 2018, 2:00:58 PM
to Eve talk
My point was that the Samsung Note series is the #1 phone in Korea on the strength of the stylus pen plus handwriting recognition feature. Samsung has done a fabulous job of handling their own language both from a handwriting and voice recognition point of view. It isn't just a "buy Korean" mentality that is holding Samsung at #1, it is the strength of the product. The Korean language also occasionally uses Chinese characters, which are entered phonetically and then mapped to one of thousands of glyphs, and handwriting recognition for Chinese characters is fantastic, because one cannot predict how to pronounce a Chinese symbol from its shape. I can't tell you the hours I have wasted trying to enter a Chinese character into a Japanese word processor. It's very tough, because if you don't know how to pronounce it you are then having to look it up by shape, which is a trial and error process. Handwriting recognition is a huge win for Asian languages; English not so much, particularly since most people are not trained to write with good handwriting any more. My own handwriting has deteriorated from infrequent usage.

The ultimate point is that the Newton was trying to do English handwriting recognition, and at the time they didn't know how to do it that well, and now the software techniques have improved, not to mention the 100x speedup of computers since the Newton's days. Mixing a high overhead language with a slow computer was a fatal combination, and the Newton remains one of Apple's most obvious failures. Interestingly at the time the Newton came out, Apple had designed but never shipped a telephone that had textual capabilities. Apple almost got into telecom and who knows what the world would look like had they gone into that area.

And as for having thousands of errors in a shipping product, there is no excuse for that. If you use the right language, errors get squeezed out of the code fast, and it iterates quickly into a highly reliable product. Modula-2 for example allowed small teams to build big products. The Google Go language is actually Modula-2 reborn, they are almost identical. 

Liam Proven

Aug 31, 2018, 9:19:23 AM
to eve-...@googlegroups.com
On Sun, 26 Aug 2018 at 23:32, <magicmo...@gmail.com> wrote:
>
> I can assure you that Actionscript is indeed the source of JavaScript.

Fine. I don't really care, TBH; it's not really germane to the discussion here.

> The Newton as a machine was indeed far ahead of its time.

True.

> It is quite dangerous to be ahead of your time.

Also true.

> I am sure that Sculley didn't invent the Newton, it was probably in gestation for years. You could easily argue that it reincarnated as the Galaxy Note, and their handwriting recognition which needs no training, is superb.

Now hang on.

[1] Nobody's claiming he invented it. It was an internal R&D project
while he was in charge, that's all. First major post-Mac platform for
Apple. Well, that or A/UX.

[2] Yes it did take years. Sure. But it started under Sculley.

[3] Reincarnated as the Note? WTF? The fact that it's a pocket device
with a stylus? So the PC was a 3270 terminal reincarnated, just
because of the form factor?

The Note is a standard Android phone with a stylus. I had one. The
stylus was, for me, a waste of space. I am sure some people liked it.
I just liked the big screen.

> The reason the Note is so popular in Korea is that it recognizes Korean writing very well, hence its massive market share (#1 phone in the country). In the case of English, where we have a tiny 26+26-letter alphabet, the benefit of recognition is lower, and devices with a good keyboard like the BlackBerry were far more popular. The Korean alphabet has thousands of letters so handwriting recognition is a big win for that language.

Now I _know_ you don't know what you're talking about and can't even
be bothered to do a cursory Google.

You're thinking of Chinese, not Korean. Korean uses the
purpose-designed Hangeul alphabet which is smaller than our Roman one.

Japanese uses a subset plus two syllabaries.

The Note is not big in China nor in Japan, AFAIK.

So your claim is wrong, TTBOMK, and it is based on incorrect facts.

> The Newton had a 20 MHz processor and 640 Kbytes of RAM, and frankly for those tight constraints they should have written it in assembly language.

*Rolls eyes*

I thought the purpose of this group was discussing future programming
languages and other things?

For its time, the Newton was a powerful pocket RISC workstation.

Its main relevance today and to this discussion is 2-fold:
[1] a radical OS design with no filesystem
[2] an original design with a radical language, replaced with a far
more conservative one that was _still_ radical -- point 1.

> It was a mistake to attempt a fancy slow language that gobbles RAM.

[A] Nonsense.
[B] That is the _reason I brought it up_.

> Other than AutoCAD, which uses LISP as its internal language, I cannot think of another successful LISP product in broad use.

Emacs.

> The original WordStar word processor was written entirely in Assembler, and I consider it the greatest programming feat ever, because it could run in 64 Kbytes of RAM, which is nothing.

There's more to life than compact code.

Go take a look at the Canon Cat, or OS/9, or QNX, or
Taos/Intent/Elate, for miracles of software design _and_
implementation efficiency.

> and found Modula-2 to be greatly superior. Oberon, which was not just a language but an operating system for the Lilith computer, was intriguing, but unfortunately Oberon 1 did not have enumerated types, as Wirth had foolishly removed them, and I had such a huge code base I could not switch over. He corrected that mistake in a later revision to Oberon.

I would like more detail on this, please.

> Imagine if Apple or Microsoft had used a Wirth language instead? Apple, up to a certain version of the Mac OS, programmed entirely in Pascal, but when Jobs switched to the Carnegie Mellon version of UNIX, he forced a switch to C, and the reliability of the Mac OS took an immediate and noticeable drop.

I was going to say ... Apple *did*.

But Mac OS X is unrelated to MacOS.

MacOS was Pascal and Assembler, done by Apple, originating as a
simpler relative of Lisa OS.
OS X is a Unix, by NeXT.

> Backus called LISP a transformation language, and he dismissed it as a dead end in his famous Turing Award lecture. He was hoping for the era of interchangeable parts. The Oberon system had a fantastic way of dynamic linking which was far superior to the MS DLL approach, and the shared libraries of UNIX. It is most unfortunate that people ignore his work today and instead are now obsessing over functional languages with category theory overlays, and nonsensical terms like monads, monoids, and functors are bandied about.

I agree, actually.

> I have built products in LISP, and an extra comma or period has drastic consequences, while an extra period or comma will more often than not be flagged at compilation time in a Wirth language. That no one is doing measurements of language fragility is a sign of the blindness of our industry, which values billable hours over a reliable finished product. In every period of computer history, where multiple languages were available, the industry as a whole picked the most verbose one that would generate the largest number of billable hours. First they chose COBOL over the superior FORTRAN (and Algol, and others), ignored PL/1, and then picked Java over the alternatives, which is the COBOL of our time.

Agreed again. Perhaps we're getting back to something here.

> I am working now with languages that have outrageously high overheads, where a simple true/false/undefined/error value which theoretically should only take 2 bits, takes probably 50 bytes of overhead. But there is a tradeoff between safety/reliability and speed/size, and in today's world what we need is reliable software that can be delivered on time and works flawlessly. 98% of the computers are idle at any time, so why are we still optimizing for size and speed? The focus should be on eliminating programmer error. After all, debugging takes approximately 85% of the total time of programming, so we desperately need to go back and revisit the principles that generated Prof. Wirth's languages.

Absolutely, yes.

So, what if anything can be done about it?

My current area of interest is using Ultibo to attempt to get native
A2 -- AOS with Bluebottle -- on the Raspberry Pi.

Liam Proven

Aug 31, 2018, 9:25:25 AM
to eve-...@googlegroups.com
On Tue, 28 Aug 2018 at 20:00, <magicmo...@gmail.com> wrote:
>
> My point was that the Samsung Note series is the #1 phone in Korea on the strength of the stylus pen plus handwriting recognition feature.

OK, this may be so.

But even so, it's interesting that this hasn't carried across to the
vast Chinese market, with a native ideographic alphabet.

> The ultimate point is that the Newton was trying to do English handwriting recognition, and at the time they didn't know how to do it that well,

Apple's system was licensed in from Paragraph, a Russian company IIRC.

https://en.wikipedia.org/wiki/Handwriting_recognition#Software

The big spinoff and the true intellectual offspring of the Newton was
Palm's Graffiti -- originally a NewtonOS app.

NewtonOS 2 didn't require the learning phase and worked much better.

Modern Apple's big insight was that an onscreen keyboard was much
simpler _and_ better.

But iPhones and Android are just more Unix boxes, continuing the
plague of C. They are of no interest except in UI. The Newton, OTOH,
was.

> and now the software techniques have improved, not to mention the 100x speedup of computers since the Newton's days. Mixing a high overhead language with a slow computer was a fatal combination, and the Newton remains one of Apple's most obvious failures. Interestingly at the time the Newton came out, Apple had designed but never shipped a telephone that had textual capabilities. Apple almost got into telecom and who knows what the world would look like had they gone into that area.

Partially conceded.

It just needed an onscreen keyboard as well... but the stylus doomed
it. Fingers are the answer to that and that needed capacitive touch
screen tech.

> Modula-2 for example allowed small teams to build big products. The Google Go language is actually Modula-2 reborn, they are almost identical.

Interesting -- this is new to me. Any links or citations for that?

magicmo...@gmail.com

Sep 1, 2018, 2:19:57 AM
to Eve talk
You can tell Go is partially derived from Modula-2 from the IMPORT syntax, which is basically identical to Modula-2's. Modula-2 offers 3 ways to import symbols from an external module: name them specifically, import a module and require the module prefix for each symbol, or import every symbol so you don't have to bother with a prefix. Whenever you see an import system that offers those 3 methods, you know its designers have looked closely at Modula-2, which really pioneered a flexible namespace/import syntax.

Also, Modula-2 had coroutines, which were a super-efficient way of handling some concurrency; goroutines are basically the same thing.
Syntactically Go resembles C quite a bit, but that's not surprising considering one of its authors was one of the developers of C. That being said, putting the type after the name is a Pascal/Modula-2 style syntax, and the strongly typed approach of Go and the improved definition for pointers to arrays of xxx are also influenced by Modula-2. Wirth had a way of doing things in the most elegant way possible, so you can't avoid imitation if you are looking for elegance.
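
A small sketch of those parallels in Go; the packages chosen are arbitrary, and the prefix-free (dot) import is legal but discouraged in Go style:

    package main

    import (
        "fmt"         // qualified import: every symbol needs the fmt. prefix
        str "strings" // renamed module prefix: str.ToUpper
        . "math"      // prefix-free import: Pi is visible directly
    )

    func main() {
        var radius float64 = 2 // type written after the name, Pascal/Modula-2 style
        done := make(chan string)
        go func() { // goroutine: a cheap concurrent routine, a descendant of the coroutine idea
            done <- str.ToUpper("circumference:")
        }()
        fmt.Println(<-done, 2*Pi*radius)
    }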

I considered Go dead on arrival. It doesn't include a drawing model, nor a database, and adds nothing to the area of data structures, which was the key area Oberon was innovating in. Oberon had not only clever dynamic linking, but also created the concept of guards so you could pass a superset record to a function that needed a subset, which made libraries much less fragile. It is a tragedy that not even one reasonably sized company adopted Oberon or the principles that created it; instead everyone keeps rebuilding on the crappy UNIX foundation. I remember when Sun only had 20 employees and there were 100 companies building UNIX boxes. We all thought UNIX would take over, but it was lousy at graphics, and history was kind to the personal computer. I am sure that with the legal losses at Google they wish they had done a clean-sheet approach for Android and not used Java.

Jonathan Blow's Jai language is better than Go for low-level coding; it does a much better job of packing values so they are close together. Go is going nowhere in my opinion.

I have been following the next-gen language projects with some interest. Eve may be dead, but there is Luna out of Poland; Nenad of the Red project pulled off a miracle, made a cryptocurrency token, and raised enough money to now be the biggest project of them all; and of course there is Elm.

There are other languages of course, but to me, if you can't draw well in the language, it won't matter. People aren't building terminal products any more; being able to deploy on desktop, mobile, and web all with one language is pretty much a given for whatever wishes to be the next general purpose language. 

The key feature that will determine the winner is solving the issue of reducing programmer error. The programming profession is dismally unique in that a significant majority of the total time spent by a programmer is consumed by fixing their own errors. Any language which eliminates the errors up front before running the program, will have a commanding lead over any competing language.

So one can, for example, measure the fragility of a language pretty easily by hand. Take a correctly working program, then add extra digits or punctuation and see if the language catches the error, or instead leaves some bomb inside the code that will go off at some random time. Or transpose a pair of letters, and see if the spelling mistake is caught. Or supply garbage data and see what happens. Languages like C are a disaster; the interchangeability of a pointer and an array is a deadly equivalence, and the source of innumerable exploits. That arrays don't carry their bounds at execution time in C was an incredible mistake (one that Modula-2 corrected). Really Modula-2 maps one to one with C, except it is cleaner in syntax and supports checks. But Modula-2 could be considered just above Assembler, and will not address the issue of programmer error.
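
A sketch of that fragility measurement in Go; the file name, mutation alphabet, and trial count are all invented, it assumes the Go toolchain is on PATH, and it only counts mutations rejected before anything runs (it says nothing about the ones that survive as silent behavior changes):

    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        original, err := os.ReadFile("victim.go") // some correctly working program
        if err != nil {
            panic(err)
        }
        tmp := filepath.Join(os.TempDir(), "mutant.go")
        keys := "abc123;.,+"
        caught, trials := 0, 100
        for i := 0; i < trials; i++ {
            mutated := append([]byte(nil), original...)
            mutated[rand.Intn(len(mutated))] = keys[rand.Intn(len(keys))] // inject one random keystroke
            os.WriteFile(tmp, mutated, 0o644)
            // A sturdy toolchain should refuse to build the corrupted source.
            if exec.Command("go", "build", "-o", os.DevNull, tmp).Run() != nil {
                caught++
            }
        }
        fmt.Printf("%d/%d mutations caught at compile time\n", caught, trials)
    }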

You can take any operation you do in a visual language and map it into algebra; that has been done since the 3D graphics system Maya, where every time you click on a button in the maze of menus and palettes, it writes out the code equivalent of that operation. So there is, even if you don't see it, a textual representation of the operations performed by graphical manipulation. The real question, therefore, is what notation do you adopt, and what data structure power do you have. I told Chris Granger to his face that using a relational database model for Eve was a mistake that would lead to a dead-end, and he dismissed my opinion as uninformed. The graph database, like Neo4j, is the future; it's the most flexible, most powerful topology mathematics offers. It is a little known fact that shortly after the invention of the relational database at IBM by Codd, and the introduction of Oracle and Ingres, Codd improved upon his design and came up with the concept of a "universal relation" database. Today that concept exists more or less in Apple's FileMaker database, which has a huge commercial installed base, and it shows how simple concepts can make things much easier for people. FileMaker calls it an implied join, but their very powerful portals are basically what Codd had proposed, as he didn't think joining was much fun. Neo4j, of course, being a graph database doesn't have joins; you instead follow relationship links.
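
As a toy contrast in Go (with made-up people), here is the same question answered by joining two "tables" on IDs versus simply following relationship links held in the records themselves:

    package main

    import "fmt"

    // Graph style: the relationship is a direct link between records.
    type Person struct {
        Name    string
        Friends []*Person
    }

    func main() {
        // Relational style: two tables plus a join on IDs.
        people := map[int]string{1: "Ada", 2: "Barbara", 3: "Grace"}
        friendships := [][2]int{{1, 2}, {1, 3}}
        for _, f := range friendships {
            if f[0] == 1 { // join friendships against people for person 1
                fmt.Println("Ada knows", people[f[1]])
            }
        }

        // Graph style: no join, just follow the edges.
        barbara := &Person{Name: "Barbara"}
        grace := &Person{Name: "Grace"}
        ada := &Person{Name: "Ada", Friends: []*Person{barbara, grace}}
        for _, friend := range ada.Friends {
            fmt.Println(ada.Name, "knows", friend.Name)
        }
    }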

Chris Granger

Sep 1, 2018, 2:40:00 AM
to magicmo...@gmail.com, Eve talk
I told Chris Granger to his face that using a relational database model for Eve was a mistake that would lead to a dead-end, and he dismissed my opinion as uninformed.

That's a bit of a strange assertion - Eve certainly didn't arrive at a dead end because it was based on relations. Moreover, graphs are trivially modeled in relations and the later versions of Eve were based on triples, which is just a way of storing... graphs.


Chris Granger

Sep 1, 2018, 3:15:12 AM
to magicmo...@gmail.com, Eve talk
As an aside, this thread has a lot of rhetoric in it. Some of it is verging on disrespectful. How things are said is as important as what is said - please tone it down.

Eugen Schindler

Sep 1, 2018, 4:09:38 PM
to eve-...@googlegroups.com
Seeing the key features you describe, I wonder why JetBrains MPS (http://www.jetbrains.com/mps/) is not considered. It solves exactly the kinds of "front-loading error-detection" problems you describe.


magicmo...@gmail.com

Sep 1, 2018, 4:50:07 PM
to Eve talk
I am not familiar with JetBrains MPS. JetBrains is a huge company now, basically the leading firm supplying IDEs and compilers. I am sceptical however that domain-specific languages as an approach will pan out. It has been done before.
There is a famous book by Abelson & Sussman that was used at MIT to teach their LISP course, which was a required course in the dept. of electrical engineering (MIT still does not have a computer science department, which is both an interesting statement on the bureaucracy of MIT and perhaps their suspicion that Computer Science isn't worthy of being called a science). This "Structure and Interpretation of Computer Programs" was all about creating domain-specific languages, which is very easy to do in LISP. The Red language excels at this as well. All of the transformation languages are great at domain-specific languages. However, maintenance is an issue in a domain-specific language. Oftentimes the implementor doesn't document the grammar and semantics of the language, leading to obscurity. If a program isn't maintainable, it is in the end very expensive for commercial use. Sure, the original author can have that 10x leverage, but what about the new guy? It is so tempting to create your own language and not document it. If you had enough documentation discipline, I think it could work, but there is little evidence that programmers, short of compulsion, will document things.

But back to Liam Proven's points about liking Oberon; Oberon was trying to innovate in the area of interchangeable parts by allowing record types to be more flexible across function calls. This is a very important innovation that was completely ignored, and I assert that it is the weak data structures of LISP that make it such a fragile language; you only have S-expressions, and you can't add data to a level in an S-expression without disrupting the structure, which is why business programming has steadfastly avoided LISP for 50 years. Businesses are subject to constant minor modifications; new taxes, change in tax rates, zip code changing length, credits changing the number of digits, minor tweaks happening externally that have to be tracked, and a language with weak data structuring doesn't hold up. Since most languages are Turing complete they are all equivalent, but that doesn't make them convenient and it certainly doesn't promise low maintenance costs.
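
A toy illustration of that point in Go, with invented fields: positional, S-expression-like data depends on every reader knowing the slot order, while a record with named fields can absorb a new business requirement without disturbing existing code:

    package main

    import "fmt"

    // Positional, S-expression-like row: (id name price).
    // Every reader depends on the slot order and length staying fixed.
    var row = []interface{}{42, "widget", 9.95}

    // Named-field record: TaxRate was appended later without breaking old readers.
    type Item struct {
        ID      int
        Name    string
        Price   float64
        TaxRate float64
    }

    func main() {
        fmt.Println("price:", row[2].(float64)) // breaks if a new slot is inserted before it

        item := Item{ID: 42, Name: "widget", Price: 9.95, TaxRate: 0.2}
        fmt.Println("price:", item.Price) // unaffected by the added field
    }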

Although I am a lone voice in the wilderness, I assert that data structures are the key area to attack in the next-gen language that will replace the current general purpose languages like Python, JavaScript, and Java. It won't be some unstructured type like triples, which cannot disclose structure without complete traversal. Unstructured data will lead to terrible scaling problems, especially with distributed databases. JSON is the current popular format, which is actually a subset of S-expressions, and is not the future. Yes, it is 100x better than XML, but it is still very old-fashioned. You want to push data into functions and get it out, and for the era of interchangeable parts to happen, a better way of storing structured data needs to be permitted in functions, and then we can stop re-inventing the same programs over and over. The Google code base is gigantic, and has reached such ridiculous proportions that one re-uses very little; why else would it be in the billions of lines? Look at GitHub; how much code is really usable by the average programmer? You take a piece of code and it expects data to be structured in a certain way, and if your data doesn't fit you skip over that code. We aren't in an era where you can pick up a piece of code and smoothly add it to a program you are working on. This is what some call the holy grail of programming languages, and certainly John Backus, who invented functional programming, was on a mission to solve interchangeable parts. Interestingly, Backus did not solve the entire problem in his lifetime, and he was demonstrating functional languages in 1973, when I was fortunate enough to attend a guest lecture at the Sloan School when he was visiting from IBM. The professors attending the lecture just shook their heads afterwards, saying the man was crazy. The fact that you were not allowed to modify a variable's value was impossible for them to accept. I believe that the missing piece of Backus' work was in the area of data structures. Modula-2 had such a superior record system compared to C, and offered protection at runtime for tagged union structures, something that C doesn't have, which leads to countless runtime errors of the worst kind. Oberon really was clever, and the Lilith computer's operating system was incredibly short and simple. We are talking at least 100x smaller than Microsoft's, possibly even 1000x shorter and simpler. Although I never got to meet Prof. Wirth, he influenced my work more than any other figure in computing, and I find real beauty in his elegant work. The Swiss are the best linguists in Europe, and their products are uniformly excellent.


Chris Granger

Sep 1, 2018, 8:37:20 PM
to magicmo...@gmail.com, Eve talk
We aren't in an era where you can pick up a piece of code and smoothly add it to a program you are working on.

Eve 0.2-0.4 represented a really interesting point in this space. We designed it explicitly for composition. Blocks don't rely on any kind of order or scoping rules. You can throw a block in and as long as the match succeeds, it will produce its output. If you don't have your data in the right "shape", you don't have to modify your existing code at all. At most, you write a block that takes your current set of records and produces whatever is needed. 

It won't be some unstructured type like triples, which cannot disclose structure without complete traversal.

Eve sidestepped this since it wasn't unstructured, it was progressively structured. Using integrity constraints, you had access to the most powerful type system I am aware of. You could express constraints all the way down to the meaning of a record, not just its structure. Dependent types on crack, basically. Similarly, Eve's tags gave it something like structural typing, without losing the ability to talk about explicit requirements.

Businesses are subject to constant minor modifications; new taxes, change in tax rates, zip code changing length, credits changing the number of digits, minor tweaks happening externally that have to be tracked, and a language with weak data structuring doesn't hold up.

This was exactly the kind of stuff Eve was designed for. Because you could express invariants globally, programs grew much more safely than in most languages. No matter what new code is added or how things shift, the constraints you've expressed are always in effect. Similarly, since typing was progressive it was trivial to add a new field or change an existing one without modifying the world around you. Eve's semantics further allowed us to know the effect of changes you made, which would've allowed us to make a pretty amazing set of tools for understanding the extent of the edits you make.

Any language which eliminates the errors up front before running the program, will have a commanding lead over any competing language. 

One of the neat things we realized later on is that the semantics we ultimately arrived at reduced the number of classes of errors down to just a handful. All the crazy mistakes you could make in C, JS, and even Modula-2 can't even happen in Eve. At the end of the day, I think our "modern" set of languages ended up being far more powerful than they really needed to be, and as a result, we open ourselves to all sorts of mistakes powerful tools let us make. Whatever the ultimate answer here is, I suspect it will revolve around removing power to free us up to focus on what actually matters: the expression of the meaning of our system.


William Taysom

Sep 1, 2018, 10:59:06 PM
to Eve talk
Eve sidestepped this since it wasn't unstructured, it was progressively structured. Using integrity constraints, you had access to the most powerful type system I am aware of.

To me, Eve's most noteworthy aspect is its radical flexibility.  Without much ceremony, you can integrate any data in the system.  Conventional program modularity and encapsulation attempt to hide implementations and limit interactions so as to make for smoother conceptual surfaces – not so much can happen here.  The drawback is that significant plumbing is required to bring reagents together so that they can interact.  Even in a nicely layered architecture, where everything has a place, a given feature will touch many of the layers.

Because you could express invariants globally, programs grew much more safely than in most languages.

My Eve programs tend not to grow safely.  In Eve, setting up a multiple-step process proved tricky for me.  I attribute this to the immaturity of the system and the lack of runtime introspection tools rather than Eve's aspect-oriented flexibility.

Chris Granger

Sep 1, 2018, 11:06:29 PM
to William Taysom, Eve talk
 In Eve, setting up a multiple-step process proved tricky for me.

Yeah, this is the downside of orderless composition. Like you said, I don't think this was a problem with the semantics, but that we didn't have the time to ultimately craft the support systems that could've made that a non-issue. There's a dependency order and the tools could've easily shown you that. They also could've shown you the code as if it were a normal process and have the intermediate ceremony left implicit. We just ultimately ran out of time.


William Taysom

Sep 1, 2018, 11:23:22 PM
to Eve talk
Indeed!  Eve works best with conceptually unordered data structures and rule applications.  Eve is not intrinsically suited for step-by-step sequential tasks.  Contrast imperative programming languages with their fundamental, sequential execution model.  What is easy to do imperatively is hard to do in Eve and vice versa.  Even in imperative languages, when a step-by-step task does not match the statement-statement-call-return setup of your subroutines, you will find yourself in callback hell or worse.  (Coroutines do well in some instances without letting out the worst worms from the can.)

Eugen Schindler

Sep 2, 2018, 12:42:49 PM
to eve-...@googlegroups.com
You should really look into MPS. It deals exactly with what you talk about: data++, i.e. data with quite some front-loading of error-checking on top of it. In addition, a DSL in MPS is definitely different from all the text-based internal DSLs that have been done in LISP or other languages that can accommodate them, and even quite different from external DSLs, which traditionally use only concrete syntax (à la (E)BNF) as a primary artifact. A language in MPS is considered to consist of many aspects, starting with its structure (somewhat similar to a BNF or a schema for a database or XML), and then it gets more and more refined: there are editors for defining various concrete syntax representations, constraints for checking inputs of your programs, a type system, etc. Essentially, MPS guides you to define the DSL in such a structured way that it is quite understandable not just for the original author, but also for other developers. Moreover, MPS has shown that what they call a DSL by far surpasses just very simple DSLs, as has been extensively proven by the mbeddr project (http://mbeddr.com/), which has implemented all concepts of the C language as an MPS DSL, and added on top many extensions for state machines, components, contracts, decision tables, requirements, documentation, etc.
So, TL;DR: have an open mind and just try to look at the new developments in DSL-land, and you will see that much of the stuff you are talking about is being solved in a pragmatic way that actually finds quite some traction and adoption.
As a final remark: you may want to nuance your ideas on "making programming open to everyone", since there is a huge difference between programming environments for a large group of people that don't require training (which are usually much less productive) and more expert-oriented programming environments (just compare with Maya or Blender 3D's OHOMOK interaction mechanism, which makes you super-productive, but has a bigger entry hurdle).


magicmo...@gmail.com

Sep 4, 2018, 2:52:36 AM
to Eve talk
Thanks for the reference to MPS. JetBrains is such a dominant company nowadays in the tool space, basically in the process of supplanting Eclipse, that any product of theirs deserves scrutiny. However, upon a quick inspection of the MPS tool, I can't for the life of me imagine a single situation in my rather lengthy past project list where I could have utilized MPS to some benefit. If I had a lot of data to massage textually, I would use other tools, and the idea of a proprietary internal data format for an AST editor doesn't even make sense to me. One does not ordinarily wish to work directly with the AST; that is an internal form intended for code generation, and if you think about it, compilation of code involves creating a symbol table and an AST. The MPS tool doesn't really present the symbol table so well, so I am pretty sure that it isn't a replacement for a next-generation general purpose language at all. In debugging sometimes the AST can be helpful, but if a user can't understand the semantics of the language they are working in, how are they going to write it fluently? The AST is compact; however, MPS did a poor job of drawing the AST; once you get past a few hundred nodes their graphical representation will be unwieldy. I am sure some people find value in this tool, but it does seem like it is very strongly oriented around Java. It was written in Java and seems like it can create Java code. There are nice things about editors and visualizers for various structures; I think a pattern-matching run-time debugger that shows you data of a particular structural type in a nice way would be great.

Ionuț G. Stan

Sep 4, 2018, 3:57:53 AM
to magicmo...@gmail.com, Eve talk
I was pretty impressed by how the Dutch Tax authority used MPS to
formalize/mechanize their tax laws:
https://www.youtube.com/watch?v=_-XMjfz3RcU

--
Ionuț G. Stan | http://igstan.ro | http://bucharestfp.ro

Eugen Schindler

Sep 4, 2018, 1:45:26 PM
to ionut....@gmail.com, Edward de Jong, eve-...@googlegroups.com
On Tue, Sep 4, 2018 at 9:57 AM Ionuț G. Stan <ionut....@gmail.com> wrote:
I was pretty impressed by how the Dutch Tax authority used MPS to
formalize/mechanize their tax laws:
https://www.youtube.com/watch?v=_-XMjfz3RcU
@Ionut: thanks for pointing this out. There are more such talks available on https://confluence.jetbrains.com/display/MPS/JetBrains+MPS+Community+Meetup%3A+Agenda

@magicmouse: OK, here are a lot of things mashed together. I'll pull them apart a bit and react to them (assuming you are interested in that):
On 04/09/2018 09:52, magicmo...@gmail.com wrote:
> Thanks for the reference to MPS. JetBrains is such a dominant company
> nowadays in the tool space, basically in the process of supplanting
> Eclipse, that any product of theirs deserves scrutiny.
Agreed.
 
> However, upon a
> quick inspection of the MPS tool, I can't for the life of me imagine a
> single situation in my rather lengthy past project list where I could
> have utilized MPS to some benefit.
That's because you don't understand the essence of the system yet. The basic essence is that MPS is a language workbench, i.e. a system to define languages and then, based on these languages, create an IDE in order to be able to write models (or "code", if you wish) in the various languages you defined. As for utilization: I cannot speak for you of course, but you can see some examples above (where Ionut already pointed out one of them). I also mentioned the mbeddr project (http://mbeddr.com/) to you, which has proven a lot of MPS's utility.
 
> If I had a lot of data to massage
> textually, I would use other tools,
No idea how you would think of doing that with MPS, but indeed I would not do that with MPS.

> and the idea of a proprietary
> internal data format for an AST editor doesn't even make sense to me.
That's because the only things that you consider are fully plain text. For me, text-only is very limited, since people in many domains (you know, the ones that you want to make programming accessible to) each have their own notations, which are often not only text, but rather tables, diagrams, math formulae, of course also text, and more often than not a mix of a number of these.

> One
> does not ordinarily wish to work directly with the AST;
Why not? Depending on how you define and organize your AST, this can have a lot of advantages.
 
> that is an
> internal form intended for code generation,
Only in the very classical sense of text-based parsing. For systems that use projectional editing (see the picture below), like MPS, using an AST is a very handy vehicle to combine instances of concepts (comparable to non-terminals in a BNF) from different languages in one and the same program. I can write a long explanation, but I'd rather say that you can have a look at the (somewhat dated, but still very valid) explanation of projectional editing and why you want that: https://www.youtube.com/watch?v=OZ9xitypJZI
[image attachment: image.png]

> and if you think about it,
> compilation of code involves creating a symbol table and an AST.
Only if your concrete syntax is plain text. In its generic form, compilation is just code generation.

> The
> MPS tool doesn't really present the symbol table so well,
Once you get away from the idea that your only way to describe models/code is in plain text, you don't need a symbol table anymore for compilation. This is exactly what MPS does.

> so I am pretty
> sure that it isn't a replacement for a next-generation general purpose
> language at all.
Indeed. Coding is not the new literacy, but rather modeling: http://www.chris-granger.com/2015/01/26/coding-is-not-the-new-literacy/. Making yet another nextgen GPL "to rule them all" will not solve the problem of getting modeling to a greater group of people. Modeling (and languages as created in language workbenches) have already proven to do so. There are many MPS-based DSLs today in use by non-programmers who can use them effectively to write and edit the models that they need for their work.

> In debugging sometimes the AST can be helpful, but if a
> user can't understand the semantics of the language they are working in,
> how are they going to write it fluently? 
And how are you going to teach them the semantics of the language? So far, the best way I have seen is twofold: encode part of the static semantics, so that the IDE points users to possible inconsistencies or mistakes they make, and convey the operational semantics by letting users observe the result of what they have specified, either by generating to some runtime system or by having an interpreter that can directly show the result. In that context, you may be interested in the following talk by Markus Völter, who illustrates such things: https://www.youtube.com/watch?v=9BvpBLzzprA&feature=youtu.be&list=PLEx5khR4g7PJzxBWC9c6xx0LghEIxCLwm
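To give a flavor of what "encoding part of the static semantics" can look like, here is a small, made-up checking rule in TypeScript. An IDE would run something like this against the model and show the messages next to the offending nodes; the state-machine concept and rule are invented for illustration, not taken from any shipped language.

interface StateMachine {
  states: string[];
  initial: string;
  transitions: { from: string; to: string }[];
}

function check(sm: StateMachine): string[] {
  const errors: string[] = [];
  if (!sm.states.includes(sm.initial)) {
    errors.push(`initial state '${sm.initial}' is not declared`);
  }
  for (const t of sm.transitions) {
    if (!sm.states.includes(t.to)) {
      errors.push(`transition target '${t.to}' is not a declared state`);
    }
  }
  return errors;   // an editor would show these inline, next to the offending nodes
}

console.log(check({ states: ["idle"], initial: "run", transitions: [{ from: "idle", to: "done" }] }));
// -> two errors: undeclared initial state 'run', undeclared transition target 'done'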
 
> The AST is compact however MPS
> did a poor job of drawing the AST;
Well, given the context that I gave above, I hope you can understand that MPS doesn't have a pre-fixed concrete syntax, so it's up to the developer of a language to specify how the AST is drawn.
 
> once you get past a few hundred nodes
> their graphical representation will be unwieldy.
You would be surprised how much information MPS can handle in a single root node. Anyway, there is no need to store everything in one node, since there are various ways to modularize.

> I am sure some people
> find value in this tool,
Some? It's actually quite a lot of them already, many of whom are not programmers. The language designers usually need quite some affinity with computer science, but the users of the IDE do not.

> but it does seem like it is very strongly
> oriented around Java.
Yes, it does seem that way. If you have suggestions on how JetBrains could change that impression, so that the actual essence of MPS as a language workbench comes across much more clearly, I think they (and I) would be interested to hear them.
 
> It was written in Java and seems like it can
> create Java code.
Correct. It was written in Java and it can generate, amongst many other things, Java code. The MPS generator is, in fact, a rewrite-rule-based system that supports any type of code generation.
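As a loose analogy (not how the MPS generator is actually implemented, since it rewrites models rather than strings), here is a toy rule-based generator in TypeScript that maps each concept in a model to target text; the concepts and rules are invented for illustration.

type ModelNode =
  | { concept: "Const"; name: string; value: number }
  | { concept: "Print"; text: string };

// One rewrite rule per concept, mapping a node to target text (here: Java-flavored output).
const rules: { [concept: string]: (n: any) => string } = {
  Const: n => `static final int ${n.name} = ${n.value};`,
  Print: n => `System.out.println("${n.text}");`,
};

function generate(nodes: ModelNode[]): string {
  return nodes.map(n => rules[n.concept](n)).join("\n");
}

console.log(generate([
  { concept: "Const", name: "LIMIT", value: 10 },
  { concept: "Print", text: "hello" },
]));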

> There are nice things about editors and visualizers
> for various structures;
There definitely are. In fact, those things are not just nice, but actually quite important in the bigger picture of "Coding is not the new literacy".

> i think a pattern-matching run time debugger
> that shows you data of a particular structural type in a nice way would
> be great.
That definitely sounds interesting and I would be interested to hear more about that.

Best regards,
Eugen Schindler

magicmo...@gmail.com

unread,
Sep 5, 2018, 3:32:32 AM9/5/18
to Eve talk
Some great points. I rather disagree with Granger and others that modeling, not coding, is the literacy. Coding is the primary skill that is needed. If I look at my basic test program scale that the "next gen language association" publishes, which is a clock, a wristwatch, a snake game, tic-tac-toe, and a chess game, then when you look at the code, the data structures and modeling of the system are a tiny fraction of the work to build the system. If modeling were all you needed, then UML jockeys would rule the world, but they don't, because UML doesn't map directly into source code. I can give you the rules of snake in a few lines of text, but chess has some very tricky rules related to castling and en-passant capturing. However you notate the model, whether by manipulating icons in a graphical interface or by writing text (and the two can be proven equivalent, as every geometrical construction can be mapped into algebra), drawing and handling user input is by far the biggest part of the work. I have seen this time and time again in projects, where the precision required to draw things nicely, so that they fit on the device, takes a huge amount of effort.

Case in point: look at the lectures from the 2018 Apple WWDC. There are hour-long lectures on how they are doing dark mode, and the intricacies of how they finesse the glows and add small tints to blend in. They spent thousands of hours tweaking simple little windows, and one must admit that in graphical interactive products, "graphical" is the operative word: drawing things so they are pretty is a ton of work, and the computer is not likely to make things magically pretty without some input from the designer/author.

I feel that making it easy to build nice-looking screens is a basic requirement for a new general-purpose language, and we must endeavor to make it less fussy and more flexible. There is a huge range of target devices in the installed base now: from 240 x 320 iPods to 4K cellphones, giant monitors, and even circular screens on Android watches. For the most part it isn't just computation we are doing any more; there is a lot of gaming and a lot of graphics. I think this is why we are seeing so many dashboard-tool startups that are not trying to create a new language but are happy to settle for a nice collection of dashboard widgets that you string together to make a product. They are hints of the interchangeable-part future we are headed for, but obviously a proprietary, closed set of modules is not what we are looking for in the industry as a whole.

That's what makes new notation projects so risky, but also the potential is correspondingly large.

Eugen Schindler

unread,
Sep 5, 2018, 6:06:11 AM9/5/18
to Edward de Jong, eve-...@googlegroups.com
Thanks for your quick reaction. This is an interesting discussion!
Although it would be interesting to hear your (and possibly others') reactions to a number of points and questions from my last mail in a more structured way, I will just go ahead and try to do this myself again.

On Wed, Sep 5, 2018 at 9:32 AM <magicmo...@gmail.com> wrote:
Some great points. I rather disagree with Granger and others that modeling is the literacy not coding. Coding is the primary skill that is needed.
Well, once you are able to express computer programs in a model (e.g. what mbeddr has done by modeling the entire C language as an MPS DSL), you can argue that there is actually no difference between model-driven development and coding. As such, modeling is again the superset of the two and coding is rather an interaction mechanism for a very specific way of modeling (namely modeling of computer programs). I recognize that it is always difficult to pin down the exact meaning of coding and modeling without getting into a philosophical discussion.

If I look at my basic test program scale that the "next gen language association" publishes, which is a clock, a wristwatch, a snake game, tic-tac-toe, and a chess game, then when you look at the code, the data structures and modeling of the system are a tiny fraction of the work to build the system.
Really? Games are what's going to determine the "next gen language"? What about all the complex problems other than gaming (medical, high-tech, defense, agricultural, IoT, web tech, etc.), where there is a huge need for sustainable multi-disciplinary collaboration amid the ever-growing complexity of systems and systems-of-systems development? I can tell you that modeling plays a huge role there; in fact, evidence is growing that much of the "classical coding" can be automated away altogether and replaced with automatic derivation from models.
 
If modeling was all you needed, then UML jockeys would rule the world, but they don't because UML doesn't map directly into source code. 
I get the feeling that you have a completely different definition of modeling than I do. Let me put a pragmatic definition here, which may help us to properly (dis)agree about one pinned-down thing: I consider something a model when it is available in machine-processable form. A model can contain more or less detailed information. Given enough detail, I can map any model onto source code. A case in point is modeling all the concepts that exist in the C language: they can perfectly well be mapped to C code.
 
I can give you the rules of snake in a few lines of text. But chess has some very tricky rules related to castling and en-passant capturing.
I can imagine that. But, again with the games? Why don't you find more practical examples?
 
However you notate the model, whether by manipulating icons in a graphical interface or by writing text (and the two can be proven equivalent, as every geometrical construction can be mapped into algebra), drawing and handling user input is by far the biggest part of the work. I have seen this time and time again in projects, where the precision required to draw things nicely, so that they fit on the device, takes a huge amount of effort.
I agree. This is also visible in the ongoing standardization of presenting and composing different UIs in the web world (HTML5 + JS). I don't think, though, that making things "look nice" is the most important thing. It's about giving non-programmers (or, better put, experts from domains other than programming) a view on the model or collection of models they have to work with that is "native" to them. So this is about adapting the user interface of the model to the actual notations used by experts from different domains.
By the way, MPS does a lot to reduce this effort by standardizing the way editors are defined and thus doing much of the heavy-duty UI plumbing for you.
 
Case in point: look at the lectures from the 2018 Apple WWDC. There are hour-long lectures on how they are doing dark mode, and the intricacies of how they finesse the glows and add small tints to blend in. They spent thousands of hours tweaking simple little windows, and one must admit that in graphical interactive products, "graphical" is the operative word: drawing things so they are pretty is a ton of work, and the computer is not likely to make things magically pretty without some input from the designer/author.
This may be true for eye candy, but for modeling purposes (as used in product development and engineering), it is important to have control over how things look. There is no generic way of doing this, since the way things are depicted is specific to every (sub)domain. I'm not talking about making things look as fancy as possible; a bunch of schematics can be just as practical (or even more so) in many cases.
 
I feel that making it easy to build nice-looking screens is a basic requirement for a new general-purpose language, and we must endeavor to make it less fussy and more flexible. There is a huge range of target devices in the installed base now: from 240 x 320 iPods to 4K cellphones, giant monitors, and even circular screens on Android watches. For the most part it isn't just computation we are doing any more; there is a lot of gaming and a lot of graphics. I think this is why we are seeing so many dashboard-tool startups that are not trying to create a new language but are happy to settle for a nice collection of dashboard widgets that you string together to make a product. They are hints of the interchangeable-part future we are headed for, but obviously a proprietary, closed set of modules is not what we are looking for in the industry as a whole.
A good point, especially the statement that such building blocks should be open (if not open source, then at least open for interfacing and interchange).
However, there is a lot more to computing than graphics alone. Many practical problems need the bringing together of domains, the introduction of formal methods for building safer and more secure systems, the ability to relate information across the models of various disciplines, correctness by construction, and so on. I know these topics may be less appealing to some people than fancy graphics, but they are very important and play a huge role in the way many things in our world are designed and built.

That's what makes new notation projects so risky, but also the potential is correspondingly large.
What do you mean by "new notation projects"?

magicmo...@gmail.com

unread,
Sep 6, 2018, 4:20:44 AM9/6/18
to Eve talk
In response to your comments.

1) Games are very much going to determine the outcome of the "next gen language". Game programming is arguably the majority of all graphical interactive coding today. Not only is this borne out by the statistics from the App Stores, which show that games are more than 2x larger than any other category of product:


but also, when you look at all the dashboard companies popping up, the gamification of business products is well under way, and what was a stodgy, dull statistical program is now singing and dancing. Get into your brand-new car and the dashboard does a song and dance. Wherever you turn, customers are suckers for flashing lights and motion, and if your language can't draw well, it is going to be a hard sell. You are correct that the language doesn't have to go full-tilt into 3D complexity, but if you can't drastically simplify the pain of laying out screens in the quirky and frustrating HTML/CSS abomination, why did you bother making your tool in the first place? This, by the way, is why I consider terminal-based languages like LISP and FORTH to be near useless in this era. There is ample evidence that drawing needs to be integrated into the language.

2) This is why MPS is a non-starter for me: I don't see a drawing system. The majority of all my code in every graphical interactive product I have made has been related to drawing. From a word-count perspective, drawing consumes an awful lot of program code. Numbers are easy: they have a value, period. But a piece of text has a font list, a size, optional bold and italic, justification, indenting, stroke color, background color, and on and on. So naturally text is going to dominate the code. If you are building a billing system, generating a nice-looking PDF bill for the customer is a ton of work: drawing it nicely, with pagination that works well. I spent decades in the word processing/desktop publishing/graphic design product space, and there is just a lot of tricky stuff relating to languages. And don't get me started on the complexities of making your product read well in Asian languages. That was my specialty.
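To make this concrete, here is the kind of (made-up, simplified) style record that every text-drawing routine ends up threading around, compared with a number, which just has a value. TypeScript is used here purely as convenient notation:

interface TextStyle {
  fontFamily: string[];                                    // ordered font list with fallbacks
  sizePt: number;
  bold: boolean;
  italic: boolean;
  justification: "left" | "right" | "center" | "justify";
  indentPt: number;
  strokeColor: string;
  backgroundColor: string;
  lineSpacing: number;
  // ...and in real products: kerning, ligatures, vertical text, ruby annotations, bidi, ...
}

const bodyText: TextStyle = {
  fontFamily: ["Helvetica", "Arial", "sans-serif"],
  sizePt: 11,
  bold: false,
  italic: false,
  justification: "justify",
  indentPt: 12,
  strokeColor: "#000000",
  backgroundColor: "#ffffff",
  lineSpacing: 1.4,
};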

And since it isn't just about drawing but about interacting, that is why HTML/CSS/JS is such a nightmare, and why there are so many frameworks: the designers of the web did a rather poor job of anticipating interactivity, and their approach of laying out pages not with function calls but with a textual description basically calls forth a very complex framework system to compensate for this mistake. Complex domain-specific languages aren't a computer-readable model; imagine if the web had an internal model that was not textual. That would have made it so much easier to build interactive graphics. For a next-gen language to succeed, it will at least need to spare people from wrestling with WebKit, which has a nasty habit of scrambling your layout when a tiny error is made.

Apple has done a lot of work in their storyboard system in Xcode to make laying things out easier, although it is still evolving and I wouldn't call it settled. I don't know Android Studio well; I imagine it has tools for this as well. But I would like to see a cross-platform layout system that makes it easy for a single code base to fit nicely into whatever device you are on. Making layouts fluid should be part of the language, and anyone who thinks they can just live on top of HTML/CSS is doomed, IMHO.
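As a rough sketch of what "fluid layout as part of the language" could mean, here is one declarative rule set, in TypeScript of my own invention, resolved against whatever device the program lands on; the layout names and breakpoints are made up for illustration:

interface Device { widthPx: number; heightPx: number; round?: boolean; }

type Layout = "singleColumn" | "twoColumn" | "dial";

function resolveLayout(d: Device): Layout {
  if (d.round) return "dial";                  // e.g. a circular Android watch face
  if (d.widthPx < 480) return "singleColumn";  // small phones, old 240 x 320 iPods
  return "twoColumn";                          // tablets, desktops, 4K screens
}

console.log(resolveLayout({ widthPx: 240, heightPx: 320 }));              // "singleColumn"
console.log(resolveLayout({ widthPx: 384, heightPx: 384, round: true })); // "dial"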

3) As for correctness by construction, there is ample evidence that completely untyped languages, which can accidentally mutate the type of a variable from number to string, are very dangerous. The mistake of imitating ActionScript 2's overloading of the + operator to mean both addition and string concatenation has caused countless errors in JS. If they had used a distinct concatenation operator, like PHP's . operator or some other punctuation, millions of man-hours would have been saved. TypeScript and other transpilers/preprocessors are clearly a great win, because JS by itself is a minefield. A successful next-gen language will eliminate a lot of errors. Eve was very much inspired by SQL, which is a very declarative style of language, with little instruction given as to how to do it; you just tell it what to do. However, it isn't that easy to recast a chess program into that style; there is a lot of sequential processing to do. So some compromise has to be reached, where you eliminate as many sequence-related errors as you can at compile time by letting the compiler do more work, while still retaining the ability to specify the proper sequence for things to be done in, unambiguously of course, else you have obscured the product, which is counter-productive. I believe that sequence-related errors constitute half of all debugging time, so eliminating mistakes of sequence should yield a 2x improvement.
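Here is the + confusion in one tiny example; the function and variable names are mine, just for illustration. In plain JS the rejected call would "work" and quietly concatenate, whereas TypeScript refuses it at compile time:

function addTax(price: number, rate: number): number {
  return price + price * rate;
}

const fromForm: string = "100";        // form inputs arrive as strings
// addTax(fromForm, 0.2);              // TypeScript error: string is not assignable to number
console.log(addTax(Number(fromForm), 0.2));   // 120, with the conversion made explicit

// In untyped JS the commented-out call would "work": "100" * 0.2 coerces to 20,
// then "100" + 20 concatenates to "10020" instead of adding to 120.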

4) There are many additional syntactical features one can add to a language, like runtime physical units, that allow the product to do more integrity checking and catch subtle errors quickly. The Wirth family of languages emphasized compiler-enforced checks; Modula-2, for example, had overflow, underflow, range, array-bounds, nil-pointer, and undefined-variable checks, all of which could be disabled. In my Discus product, we leave the checks on until we ship the product, and it shrinks by 30% because the overhead of checking is substantial. Nowadays, with computers idle 98% of the time, one can argue that leaving the checks on in production products is feasible, and probably a good idea. All that Microsoft C code, with no runtime checks, is a security hazard, and Microsoft has been endlessly patching their Windows monstrosity for decades now with no end in sight. When you pass an array to a Modula-2 function, the array bounds are sent in a hidden parameter, which allows the called function not to go over the limit. This doesn't exist in C, which means that any large C program is forever doomed to be unreliable. I cannot understand why the executives chose such flabby languages to standardize on. Surely they must have known early on that the cost of all these tiny errors, in sum, represented a maintenance nightmare. Java has plenty of problems of its own, and don't get me started on the flaws of OOP. Thank goodness few of the next-gen projects even consider building atop OOP paradigms.
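As a sketch of the kind of runtime check I mean (written in TypeScript only for notation; Modula-2 does this in the compiler and runtime, not in library code), here is array access that carries its bounds and traps on violation instead of silently reading past the end:

function checkedGet<T>(arr: readonly T[], index: number): T {
  if (!Number.isInteger(index) || index < 0 || index >= arr.length) {
    throw new RangeError(`index ${index} out of bounds 0..${arr.length - 1}`);
  }
  return arr[index];
}

const samples = [3, 1, 4];
console.log(checkedGet(samples, 2));   // 4
// checkedGet(samples, 3);             // throws RangeError instead of returning garbage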