Visual/Graphical programming is a failure.


Josh Marinacci

May 17, 2012, 12:48:01 PM
to augmented-...@googlegroups.com
I've spent some time recently looking at various research papers on visual programming languages and have come to realize that they have all been failures. Some are successful in narrow problem domains, but no general-purpose visual language has arrived that seems useful, much less superior to its textual equivalent. My question is: why is this so? Have we just not tried hard enough, or are there fundamental limitations to what can be described graphically vs. textually? I'm interested in your thoughts.

- Josh


-- 
Josh Marinacci

Sean McDirmid

May 17, 2012, 8:04:30 PM
to augmented-...@googlegroups.com

Failure is only meaningful when you’ve defined success. Are LabVIEW, Max/MSP, or VVVV failures? They are successful in their niche markets, but they haven’t taken over the world. So I’m assuming you mean success as “visual PLs supplanting textual PLs for general-purpose programming tasks.”

 

In this case, the reason visual/graphical/structured languages have failed is that free-form text is just so darn convenient, for two reasons:

1. Text has excellent abstraction capabilities and can be far more concise/dense than pictorial/diagrammatic representations.

2. The ability to write anything anywhere, without it having to be correct, is a huge boon to the “flow” of our thinking; e.g., you can delete a random range of text and then fix the compiler errors, or use a symbol that you intend to define later. Most VPLs rely on structured syntax and so disrupt the programmer’s own flow.

 

We can create a graphical language with the abstraction/density properties of text by using lots of text in the language (worst case, we have a non-graphical structured language). We could also create a visual language that allows for free-form editing; e.g., sketching. But if we do both, we just have a free-form textual language.


Boaz Rosenan

May 18, 2012, 7:08:53 AM
to augmented-...@googlegroups.com
I agree with those two points, density and freedom of editing, and would like to add compatibility with external tools, such as diff engines and source configuration management tools.

About density: this is, I believe, the key difference between what is considered "modeling" and what is considered "programming". In modeling, regardless of the presentation mode (diagram, XML, ...), there are fewer nodes, each carrying more information. In "programming" there are many nodes, typically corresponding to nodes in the program's AST, each with a very simple structure. A screen of code in a programming language, for example, can easily contain a thousand AST nodes and still make sense to the reader, whereas a diagram with only thirty or so nodes is already hard to follow. As a result, some problem domains, such as state machines and data flows, lend themselves nicely to diagrams, while others, such as imperative programming, do not.

In Cedalion, I try to address the density issue, as well as, to some extent, the freedom of editing and compatibility with external tools.

Density:
Cedalion is a projectional, textual language. Just like in MPS and the IDW, programs are presented as text, although they are edited graphically rather than as plain character strings. For example, if no special projection rules are defined, Cedalion code closely resembles Prolog code. However, the ability to use custom projections for compound terms brings more power than what is available in ASCII languages: it allows the use of colors, special symbols, font sizes and styles, as well as borders and layout, all in the name of expressiveness.
An example can be seen in the attached image, where Cedalion code is integrated with JavaScript code, all within the Cedalion projectional environment. There is a bidirectional bridge between Cedalion and JavaScript: JavaScript code can be embedded in Cedalion code and vice versa. In an ASCII language, that would require some bidirectional escaping to differentiate between the two contexts. With a projectional approach it can be displayed quite nicely, e.g., by changing the background color of the embedded code.
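To make the projection idea concrete, here is a minimal sketch in Python (not Cedalion; the Term class, the PROJECTIONS table, and the example functors are all invented for illustration) of how a projectional view might render the same stored compound term under different projection rules:

# Minimal sketch of projectional rendering: the program is stored as a tree of
# compound terms, and separate projection rules decide how each term is shown.
# All names here (Term, PROJECTIONS, the functors) are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List, Union

@dataclass
class Term:
    functor: str
    args: List[Union["Term", str, int]]

def default_projection(t: Term, r: Callable) -> str:
    # With no special rule, fall back to a Prolog-like textual form.
    return f"{t.functor}({', '.join(r(a) for a in t.args)})"

# Custom projections for specific functors, e.g. show 'plus' as infix "+"
# and 'if' with keywords instead of a functor-and-arguments form.
PROJECTIONS: Dict[str, Callable] = {
    "plus": lambda t, r: f"({r(t.args[0])} + {r(t.args[1])})",
    "if":   lambda t, r: f"if {r(t.args[0])} then {r(t.args[1])} else {r(t.args[2])}",
}

def render(node) -> str:
    if isinstance(node, Term):
        rule = PROJECTIONS.get(node.functor, default_projection)
        return rule(node, render)
    return str(node)

expr = Term("if", [Term("gt", ["X", 0]), Term("plus", ["X", 1]), 0])
print(render(expr))   # if gt(X, 0) then (X + 1) else 0 -- same term, nicer view

The point is only that the stored structure and the displayed text are decoupled: adding a projection rule changes the view, not the program.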

Freedom of editing:
This is a tough problem for every structured editor of any kind, especially when there is no plain-text language underneath that would let you switch between textual and structural editing modes. Strict structured editors require that everything present in the editor be valid code. One implication is that everything must be written top-down, which is not always the way we think about things.
In Cedalion we address this issue in two ways. First, we allow a sort of free-form textual editing, where the edited text uses a Prolog-like syntax to indicate structure. This is a way to create things that are not already defined in the language. After such code is entered, Cedalion provides a structural representation of it and allows after-the-fact definitions that give it syntax and semantics.
In addition, through the structured editing facilities, Cedalion supports (in some cases) bottom-up editing, so that expressions such as X*Y+Z can be entered from left to right.
As for copying pieces of code around, Cedalion has a copy/paste mechanism for list elements, so you can copy several elements of a list and paste them into another list, or into a different place in the same list. Such mechanisms can be created by users without much effort when needed, to help in such situations.

Compatibility with other tools:
I think this is one of the most feared aspects of adopting non-ASCII languages. There is much more to the development lifecycle than just coding, so non-ASCII languages have the disadvantage of not being able to leverage existing CM tools, diff tools, code-review tools, and so on.
Cedalion provides a partial solution by using a partially readable ASCII representation under the hood. It is readable enough to give an experienced programmer an idea of what was changed, and it allows tools to perform tasks such as merges automatically. However, as code becomes more and more complex, this solution may not be enough.
[Attachment: conjunction.png]

David Barbour

May 19, 2012, 5:09:47 PM
to augmented-...@googlegroups.com
HCI devices should take a lot of blame. Consider:

Text programming is easy because we have a precise, unambiguous, accessible and moderately convenient text-based input mechanism - keyboards. As we move into visual programming, our input devices become less precise, more analog in nature. We'll use touch, drawings, gestures, objects in our environment.

To provide input in a visual space requires consuming part of that visual space with input mechanisms. Until recently, screen real-estate was a precious commodity. Today, screen real-estate is a semi-precious commodity. Ideally, we would have enough visual space that we can waste it on things the programmer doesn't need right away. 

The advent of Project Glass and similar might allow us sufficient visual space to fit everything we need - we can have objects accessible by turning the head (leveraging acceleration), and long term the glasses-like mechanisms should provide access to peripheral vision. We could even scatter representations of objects around the room, leveraging spatial awareness and a concept of ambient programming.

Even if we have visual space to waste, we will need flexible views of the system, or some concept of inventory (objects we've chosen to remember, perhaps augmented by an automatic history of recently viewed items), so that we can bring objects into a shared visual space that would otherwise be too far apart to compose, or find objects that might be the source of a problem. Effective use of gestures or physical objects could help us access objects and inventory, and change "input tools" (brushes, paint, wire cutter, eraser, etc.) without looking for them, making more efficient use of both visual space and our attention.

(Aside: I expect machine learning of user behavior will also be important for gestures, voice, etc. Much depends on context that individual applications shouldn't have much access to. Much is personal and needs to account for a user's peculiar accent. I have a lot of ideas on how to make this a cooperative learning experience between man and machine, and no time to explore them.)





--
bringing s-words to a pen fight

Sean McDirmid

May 19, 2012, 6:26:50 PM
to <augmented-programming@googlegroups.com>, augmented-...@googlegroups.com
It's not mainly that screen space is a precious resource; it's that screen-based input kind of sucks. The mouse is precise but slow, touch is fast but not very precise, and we have occlusion and gorilla-arm issues.

And none of these mechanisms is tactile like a good IBM Model M keyboard. Swiping on a screen, waving your hands at a camera, even eye tracking... they are one-way input mechanisms without feedback, no satisfying bounce or click. The touch screen can't stop or move your finger... yet. All feedback must be processed visually, disconnected from the input. Haptics is still in its infancy.

Josh Marinacci

May 21, 2012, 5:07:19 PM
to augmented-...@googlegroups.com
I don't think symbolic programming will ever be faster to enter than with a keyboard; it is a symbolic device, after all. However, forms of programming which are visual can use more visual input devices, like a mouse or touch screen. If we ever figure out how to make programming by example work well, at least in certain domains, then I bet a gestural interface would work. Our programming model has to come first, though.
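As a toy illustration of programming by example, here is a Python sketch (not tied to any real PBE system; the operation library and the synthesize helper are invented): given a couple of input/output demonstrations, a brute-force search over a small library of operations can infer a transformation, which a gestural interface could then simply ask the user to confirm.

# Toy programming-by-example: infer a string transformation from demonstrations
# by brute-force search over a tiny library of operations (and pairs of them).
# All names here are illustrative, not from any real PBE system.
from itertools import product

OPS = {
    "strip":   str.strip,
    "lower":   str.lower,
    "upper":   str.upper,
    "reverse": lambda s: s[::-1],
    "title":   str.title,
}

def synthesize(examples, max_depth=2):
    """Return a list of op names whose composition matches every example."""
    for depth in range(1, max_depth + 1):
        for combo in product(OPS, repeat=depth):
            def apply_ops(s, combo=combo):
                for name in combo:
                    s = OPS[name](s)
                return s
            if all(apply_ops(inp) == out for inp, out in examples):
                return list(combo)
    return None

# The user demonstrates the transformation on two examples...
demos = [("  alice SMITH ", "Alice Smith"), ("BOB jones", "Bob Jones")]
program = synthesize(demos)
print(program)              # ['strip', 'title']
# ...and the inferred program generalizes to new input:
if program:
    s = "  carol WHITE "
    for name in program:
        s = OPS[name](s)
    print(s)                # Carol White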

-- 
Josh Marinacci

Sean McDirmid

May 21, 2012, 6:35:51 PM
to <augmented-programming@googlegroups.com>, augmented-...@googlegroups.com
The ultimate symbolic input device is the ear, of course! The next best one is the eye, at least for those who can read; some of us output with gestures, though. We can learn a lot about our computers from ourselves.

BGB

Jun 8, 2012, 2:35:54 PM
to Augmented Programming



In many regards, text is just a better way to represent most forms of information than graphics. Those things which are better as graphics are typically graphical already (images, 3D geometry, ...). Those things which are better as text, but which people shoehorn into graphics (flowcharts, ...), are just painful. Rarely does it go the other way.

Now, a downside of text, at least in its traditional form, is that for many tasks it is a single large linear/monolithic structure. However, editors which support code collapsing, anchoring code onto relevant objects, and so on can make this considerably easier (so there is no longer a need to abandon good old text).

It is like with language: do we write text or do we draw pictures? People could use pictures for everything, but text largely wins out.

Eliminating text may not really be an ideal goal.


A much better goal, IMHO, is trying to abolish the traditional "edit/compile/run" cycle: say, we use the app, encounter a problem area, edit the code in place, and go on using the new version (possibly with a key like "F5" or similar for "ok, done editing, compile and execute this new code", which will then, if it compiles successfully, replace whatever code was there before).

If we spend several minutes recompiling, and another several minutes restarting the app and getting back to where we were, for what are often only very minor changes to the code, those are wasted minutes.

Even so, I still prefer it if all code and data is saved as files (since then these can be looked into as needed, ...).

Justin Chase

Jun 8, 2012, 3:42:59 PM
to augmented-...@googlegroups.com
I agree with the second half of this; the traditional compile cycle is definitely inferior to a dynamic one in terms of being able to iterate, and the ability to iterate quickly is incredibly powerful.

But I'm still wondering about the first part. You say "in many regards, text is just a better way to represent most forms of information than graphics." I don't doubt that this is true today, but I wonder if it is necessarily true given different tools driven by different concepts. I feel that, given the sophistication of our various input devices, keyboards are the fastest and most ubiquitous way to unambiguously communicate large amounts of information to computers, and therefore programming as text is the most practical. But in general, the text you are authoring is just a representation of certain concepts in your mind; if you had a different way to represent those same concepts and could express them more quickly, then I think that way would be superior. Actually, it seems like there are a number of factors on which it would need to beat text:
  • authoring speed
  • comprehension speed
  • toolability
  • version control-ability
  • ubiquity of input device
So whatever system you come up with really has to compete with text on all of those fronts to even begin to make an impact. At the moment it seems like only languages designed for domains that are intimately tied to visual or 3-dimensional things are actually suitable for visual tools.

But as new input devices become more ubiquitous, I wonder if that doesn't change things somewhat. For example, touch screens are becoming much more common, and they bring some powerful new types of input, especially panning, zooming, and the various other common gestures, which really make techniques such as semantic zoom shine... There is also an increase in motion-capture devices, such as the Kinect, that USB hand-gesture device from a previous email, and surface technology. I wonder whether, as these input devices become more ubiquitous, the balance starts to shift away from the keyboard on a number of those vectors. I don't think it's necessarily true that text is the superior way to represent abstract concepts. I think it happens to be true today, but it's not guaranteed that it will always be true.


One problem I think we could try to work on is the problem of "binary code": a non-textual representation of your code on disk that is amenable to merging and diffing. I think it has the potential to be actually better than text, because text diffing essentially uses line endings to delimit changes, even though you might have multiple semantic changes on the same line. If you had a diffable binary format, you could break changes down even further, to what would be distinct semantic changes within a single line. That's kind of an orthogonal problem, but it's one that new ideas tend to stumble on in practical scenarios.



David Barbour

Jun 8, 2012, 5:55:02 PM
to augmented-...@googlegroups.com
On Fri, Jun 8, 2012 at 12:42 PM, Justin Chase <justin....@gmail.com> wrote:
in general, the text you are authoring is just a representation of certain concepts in your mind; if you had a different way to represent those same concepts and could express them more quickly, then I think that way would be superior.

I think we'll want multiple representations of our code, multiple views based on the edits we're performing. If we have multiple representations, it hardly seems an issue that the `canonical` representation (perhaps hidden under the hood) might be text. I think the relevant issues are: modularity, granularity, flexible abstraction (including syntactic abstraction).

The main argument for structured editors has been the ability to bind elements structurally, i.e. with keys in a database. This seems a dubious argument to me: it has all the same evils of import. This doesn't mean we can't have structured editing (if our user-agents provide it), but the structure should be based on stable paths through stable bidirectional views (lenses) on code. In which case there is no difficulty if code is text, so long as it has enough structure to locate modules and provide stable paths.
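A tiny sketch of such a bidirectional view (lens), in Python; the doc-line lens below is a deliberately trivial stand-in, and the get/put shape is just one common way to phrase lenses, not anything from a particular system:

# Minimal lens sketch: a bidirectional view on code text. 'get' projects a view
# the user edits; 'put' merges the edited view back into the canonical text.
# The "doc-line" lens below is a trivial stand-in for a real code view.
from typing import NamedTuple, Callable

class Lens(NamedTuple):
    get: Callable[[str], str]          # canonical source -> view
    put: Callable[[str, str], str]     # (edited view, original source) -> new source

# View = first line (a doc/summary line); editing it updates only that line.
doc_line = Lens(
    get=lambda src: src.splitlines()[0],
    put=lambda view, src: "\n".join([view] + src.splitlines()[1:]),
)

source = "# compute totals\ndef total(xs):\n    return sum(xs)\n"
view = doc_line.get(source)            # "# compute totals"
edited = view + " (per order)"         # user edits only the projected view
source2 = doc_line.put(edited, source) # the change lands in the canonical text
print(source2)

# Well-behaved lenses satisfy round-trip laws, e.g. get(put(v, s)) == v:
assert doc_line.get(source2) == edited

Stable paths through views like this are what would let a structured editor address "the doc line of module X" without abandoning a textual canonical form.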

 
Actually, it seems like there are a number of factors on which it would need to beat text:
  • authoring speed
  • comprehension speed
  • toolability
  • version control-ability
  • ubiquity of input device
So whatever system you come up with really has to compete with text on all of those fronts to even begin to make an impact.

But keeping a canonical text backing, and having our user-agents provide views for authoring and comprehension, allows a smooth technology transition that doesn't need to compete on all those fronts (though it still competes on the first two, which is challenge enough). We would design languages that support multiple views and bi-directional editing, or find disciplines and design patterns to adapt existing languages (via structure, types, annotations, a plugin model).


One problem I think we could try to work on is the problem [...] non-textual representation of your code on disk that is amenable to merging and diffing.

Even with text we could benefit from grammar-aware (or guess-the-grammar) structural diffs. 
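A rough sketch of what a grammar-aware diff over plain text could look like, using Python's own ast module as a stand-in grammar (a real tool would be language-agnostic): instead of comparing lines, compare top-level definitions and report which ones actually changed.

# Rough sketch of a grammar-aware diff: parse both versions, then compare
# top-level definitions by name instead of comparing raw lines. Python's ast
# module stands in for "the grammar"; a real tool would support many grammars.
import ast

def top_level_defs(source: str) -> dict:
    tree = ast.parse(source)
    return {
        node.name: ast.dump(node)                 # structural fingerprint
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.ClassDef))
    }

def structural_diff(old: str, new: str):
    a, b = top_level_defs(old), top_level_defs(new)
    return {
        "added":   sorted(b.keys() - a.keys()),
        "removed": sorted(a.keys() - b.keys()),
        "changed": sorted(n for n in a.keys() & b.keys() if a[n] != b[n]),
    }

old = "def f(x):\n    return x + 1\n\ndef g(x):\n    return x * 2\n"
new = "def f(x):\n    return x + 2\n\ndef h(x):\n    return x * 2\n"
print(structural_diff(old, new))
# {'added': ['h'], 'removed': ['g'], 'changed': ['f']}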

BGB

Jun 8, 2012, 9:07:32 PM
to Augmented Programming
The issue, I think (beyond those already mentioned), is the computer-to-human side of the process: presenting information in a form a person can understand, without wasting lots of "visual space" in doing so.

For example, as another "information dense, non-textual representation", consider something like a PCB: a person can see the traces on one side; they can flip the board over and see the traces on the other side; now imagine for a moment that there were no traces obscured by components or hidden away inside the board.

Would it be readable? Would it be "intuitive" or "understandable"? I have doubts. People could probably use it, but instead of "hidden connections between words", one has "explicit connections by tracing a line".

Many other options have very low information density, which some people prefer, but that may not be all that helpful in general (since only limited information can be portrayed).


It is also worth noting where text came from: mostly from people writing on papyrus and vellum. In that case the limitation wasn't the keyboard, but whatever they could easily write or draw. Text is what emerged (at least in the western world; pictographs and ideographs emerged elsewhere, written Chinese being a modern example of this, where it itself became a form of text).

Elsewhere, for symbolic and numerical problems, mathematical notation emerged, which loosely follows textual form.

It is possible that something text-like may be fairly close to optimal for the human-side handling of information-dense systems.


Now, granted, there may be some new possibilities opened up by modern computers (given they are not limited in the same ways as paper, handwritten input, or keyboards), but I have doubts that this means abolishing text-like forms.

Rather, "extending" text-like forms may be preferable. In a way, syntax highlighting is a step in this direction, as are things like autocomplete, tooltips, hyperlinks, ...

So, for example, source code becomes more like hypertext, where it gets easier to link between code and documentation, jump from a function call to its declaration, and so on (I have imagined before what it would be like if "code" were more like a wiki - a project being a wiki of source code - though current IDEs are sort of already going in this direction, I guess...).


This doesn't mean, though, that text is universally optimal: graphics editors and 3D modeling apps are a big example of cases where an alternative presentation makes more sense.

It is worth noting that, even in many cases where the application is itself graphical, the underlying data storage may still be "textual" (either in representation or in "form"; see 1). For example, many 3D apps use a textual format for storing the 3D models.

So it may not be as much about how the data is represented as about how it is presented.

1: "form" refers to cases where the data is not itself stored in a textual format, but has a 1:1 mapping with a textual format in the structured-representation sense. This is not to be confused with the case where some other non-textual "form" is mapped to a textual format for "interchange" (for example, a format which represents tables or structs flattened out to text is not "textual in form", even if it is in representation, whereas a binary format which is structured similarly to a textual format may well have "textual form", but not necessarily a textual representation).

This may be getting off track, though (as the question is about the "human side" rather than the "machine side"). This is not to say I oppose binary formats; they have plenty of use cases as well.

In computing, it is ultimately all about tradeoffs, and finding the best tradeoffs for a particular situation.



David Barbour

Jun 8, 2012, 10:38:56 PM
to augmented-...@googlegroups.com
On Fri, Jun 8, 2012 at 6:07 PM, BGB <cr8...@gmail.com> wrote:
In computing, it is ultimately all about tradeoffs, and finding the best tradeoffs for a particular situation.

That's an engineer's POV. To an inventor - or any worthy language designer - design is about avoiding tradeoffs. Tradeoffs are how we fit square solutions into round problems. Change the solution, or change the problem, or specialize - heterogeneous solutions for heterogeneous problems. (See TRIZ.)

I would note that humans do make extensive use of graphical expression. On paper these include blueprints, graphs, diagrams, maps. In social scenarios, these include gestures, posture, facial expression. We sometimes try to carry the latter into the former, e.g. those silly smiles we sometimes annotate our text with. 

Regards,

Dave

Steve Wart

Jun 8, 2012, 10:39:51 PM
to augmented-...@googlegroups.com
I'm trying to understand this problem in a more primitive way.

I am happy with the commonly accepted definitions of a language, but I think the distinction between text and graphics is maybe somewhat artificial.

A language is a stream of symbols. There is a huge space to explore between the Jungian definition of a symbol and the Latin alphabet, but I think most attempts at graphical "languages" have thrown away the theoretical foundations in order to present things in a "friendly" way.

People understand symbols, but the vast majority of programming languages limit themselves to ASCII text (with only minor deviations).

Unicode is a big help, but we can also do better than lines and columns of characters. If a string of symbols can be streamed, then it can be compiled.

So yeah, you can draw a picture, but before it can be parsed, it needs to be representable as a language. I'm sure this isn't a fundamental law of nature, but I think there's a lot of room to explore, while still building on what we already know and understand.
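As a small illustration of "if a string of symbols can be streamed, then it can be compiled", here is a Python sketch (the glyph set is invented for the example): a lexer doesn't care whether its symbols are ASCII keywords or arbitrary Unicode glyphs, so long as they arrive as a stream of tokens.

# Small illustration: a lexer is indifferent to whether its symbols are ASCII
# keywords or arbitrary Unicode glyphs; it just turns a stream of characters
# into a stream of tokens. The glyph set below is invented for the example.
import re

TOKEN_SPEC = [
    ("LAMBDA", r"λ"),
    ("ARROW",  r"→"),
    ("SUM",    r"∑"),
    ("NAME",   r"[A-Za-z_]\w*"),
    ("NUMBER", r"\d+"),
    ("OP",     r"[()+\-*/=]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokens(source: str):
    for m in TOKEN_RE.finditer(source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokens("f = λ x → ∑ (x + 1)")))
# [('NAME', 'f'), ('OP', '='), ('LAMBDA', 'λ'), ('NAME', 'x'), ('ARROW', '→'),
#  ('SUM', '∑'), ('OP', '('), ('NAME', 'x'), ('OP', '+'), ('NUMBER', '1'), ('OP', ')')]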


BGB

Jun 8, 2012, 11:48:08 PM
to Augmented Programming
Yeah, but as noted, I was including written Chinese as an example of text, with other possibilities being earlier pictographic systems, as well as things like cuneiform (which later led to Akkadian, and later partly to the modern Hebrew alphabet), ...

So programming need not necessarily remain confined to ASCII, or even to the confines of traditional writing systems (although, at present, ASCII makes sense due both to current user-input devices and to general familiarity, since learning programming doesn't necessarily mean learning a whole new alphabet in the process).

Hypertext could be a more reachable near-term goal (possibly with the language using some level of "hidden notation" internally): say, the user edits a sort of hypertext language, with the actual on-disk format being more like some form of markup language.

The problem, though, is that many attempts at graphical programming have gone into the domain of flowcharts or drag-and-drop stuff with icons, ...

I suspect that at this point the general usability for programming tasks drops considerably.

Then there is also drag-and-drop with tiles describing actions (often using words with "blanks" to drag objects into), which is at least almost halfway usable, but I suspect it makes more sense as a learning aid than as a way to seriously input programs (an experienced programmer can likely type much more quickly and easily than they could drag these sorts of "action tiles").

I saw one system like the above before, and was personally motivated much more by its execution semantics than by its code-input strategy (it had an execution model which was neither traditional single-threaded nor multi-threaded execution, but rather "something different", which I have not yet found an ideal way to map to a more traditional language design).

Sean McDirmid

Jun 9, 2012, 12:02:48 AM
to augmented-...@googlegroups.com
Human proto-language was very pictographic; as language became more advanced, it necessarily became more abstract. In Chinese, for example, there is very little relationship left between what a pictograph meant when it was invented and what it means today in a word! There is not much point in going backwards along the evolution of language by making the symbols more concrete and intuitive. We are fully capable of becoming literate these days. Visual languages (along with proto-languages) work great for beginners (and illiterates) to express simple thoughts or accomplish simple tasks, but they do not scale to larger tasks where more abstraction is involved.

If we want to move beyond the basic concept of text, then we really have to revolutionize human language in general without moving backwards. This is not an easy task, and is probably beyond any of us (but who knows).

I've played around with non-traditional execution systems (see Coding at the speed of touch) that were neither single nor multi-threaded, but rather based on Brooks' behavior-oriented paradigm. See also Kodu.

Steve Wart

Jun 9, 2012, 12:30:31 AM
to augmented-...@googlegroups.com, Augmented Programming
I think the time is ripe for new programming ideas. Drag and drop graphical PC environments evolved into GUI builders and sophisticated interactive environments. Pretty impressive for a failure, but clearly not exactly what the first people who started along this path might have expected.

There are a lot of parser generators and related language tools out there now, and they are surprisingly easy to use. Tablets and soft keyboards make it a lot easier to leave ASCII behind, but I've been struggling to find a set of tools that works for me.

I was reading the Smalltalk blue book again this week and it was really humbling to realize the effort it must have taken to use such a small kernel of a language to build a complete graphical programming environment, all in itself. Now it looks old-fashioned, but at the time, it was radical graphical programming.

I envision a similar level of effort to go to the next level, and I also expect most of the people whose opinion matters to be dismissive of it. It will have been "done before" or somehow missing some aspect of perfection. I will probably hate it.

But what I really want is to get started on something. What sorts of lexing and parsing tools do people here use? What are the pros and cons of different utilities? Is it better to just hand craft a Lisp interpreter and build everything from that? Why are my llvm binaries so big? Is APL dead forever? :)

Cheers,
Steve

Toby Schachman

Jun 9, 2012, 1:10:16 AM
to augmented-...@googlegroups.com
On Sat, Jun 9, 2012 at 12:30 AM, Steve Wart <steve...@gmail.com> wrote:
> But what I really want is to get started on something. What sorts of lexing and parsing tools do people here use? What are the pros and cons of different utilities? Is it better to just hand craft a Lisp interpreter and build everything from that? Why are my llvm binaries so big? Is APL dead forever? :)

If you are developing non-textual programming "languages" or alternative programming interfaces, I have found it more productive to jump right in and develop the user interface of the environment, which can then directly build the internal representation of your program (e.g. the abstract syntax tree or whatever internal model you have in mind). I have tried building mini-languages, Lisps, etc. as bootstrapping representations, but at least in my personal experience these have always been quagmires.

Focusing on the *programming experience* has been more productive for me than trying to build such a thing up in the traditional way.

Further, I would encourage you to start by building a proof of concept of the programming experience you have in mind. Get a sense for what your tool will *feel* like. If you're making a textual language, write a bunch of programs in it before you even start writing a compiler or even a grammar. If you're making an interactive programming environment, put together a minimum of components to see (and feel) what it's like to build/explore in the environment. A good motto is "fake it work" :)

For example, my previous project was Recursive Drawing, itself a proof of concept for a larger programming vision:
http://recursivedrawing.com/
http://totem.cc/2012/06/08/Recursive-Drawing/

But my original proof of concept for Recursive Drawing was this toy, which I implemented in a day:
http://electronicwhisper.github.com/toys/1/

Best of luck!
Toby

BGB

Jun 9, 2012, 2:20:33 AM
to Augmented Programming
Well, GUI builders work fairly well, but they aren't exactly "graphical programming" per se, despite often being closely tied to programming (double-click to edit the event-handler logic, ...).

It is much the same as attaching program logic to an object in a 3D scene: if you can select the object and then bring up the behavioral code in an editor, well, this is nice, but it still isn't really graphical programming.


As far as parsing: personally, I just use hand-written recursive-descent parsers. Sadly, the parser actually tends to be the easy part IME (once things develop all that far).

Now, as for language cores: a simple dynamically-typed core can actually be fairly capable and powerful; it isn't necessarily fast, nor does it necessarily have conventional semantics, but it isn't itself terribly complex or difficult to throw together.
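As a concrete (and purely generic) illustration of both points - a hand-written recursive-descent parser and a small dynamically-typed core - here is a minimal Python sketch of an S-expression reader plus evaluator; it is a toy, not my VM or any particular language:

# Minimal sketch: a hand-written recursive-descent reader for S-expressions
# plus a tiny dynamically-typed evaluator. A generic toy, not any real VM.
import operator

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    # Recursive descent: one case per grammar rule (expr := atom | list).
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(read(tokens))
        tokens.pop(0)                       # consume ")"
        return lst
    try:
        return int(tok)
    except ValueError:
        return tok                          # symbol

GLOBALS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "<": operator.lt}

def evaluate(x, env=GLOBALS):
    if isinstance(x, str):                  # symbol lookup
        return env[x]
    if isinstance(x, int):                  # literal
        return x
    if x[0] == "if":                        # (if test then else)
        _, test, then, alt = x
        return evaluate(then if evaluate(test, env) else alt, env)
    fn, *args = [evaluate(e, env) for e in x]   # application
    return fn(*args)

prog = "(if (< 1 2) (* 3 (+ 4 5)) 0)"
print(evaluate(read(tokenize(prog))))       # 27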


For example, as terrible and complex as my VM now is, it started out as a vaguely simple core resembling a mix of Scheme and Self (a Scheme-like core with some Self features bolted on). (Actually, the history is a bit more tangled than this, but that is the basic idea.)

Complexity then came in several major forms: long ago, switching from AST-based interpretation to bytecode; special cases to reduce the number of bytecode ops needed to complete a task (compound operations); developing and bolting on a big, complex C FFI; developing and bolting on a much more complex object system; later switching from bytecode to using threaded code internally; adding some features for static type checking; ...

Taking a simple core and adding piles of stuff onto it does not necessarily lead to a simple end result.


Many other languages work the other way, starting out with a more efficient low-level design and "building up" to more dynamic features, usually building up a mountain of stuff on top in the process.

Meanwhile, much of the complexity in my case is due to interfacing with C, and to the language acting like it has statically-typed class/instance OO and "packages" and similar (as opposed to being dynamically typed, using prototype OO, and building the scope by linking objects together).

It is merely a side detail that packages are themselves objects internally (and that "import" is built on delegation, 1), that class layouts are mutable, and that pretty much every type of variable or field is capable of delegation (it is possible to see scope "through" variables), ...

(1: actually, import is implemented by delegating to an object from a variable within the lexical scope, whereas delegating from within the object scope in the case of a package would lead to different behavior, namely a package which forwards the bindings imported from other packages.)


But anyway, although many people obsess a bit over "simple languages", it may ultimately be more effective to focus on providing a "good programming experience". "Simple" does not always equate to "usable" (and much of the "complexity" in many cases may actually just be syntax sugar that, among other things, makes the language more streamlined or usable).

Do we really "need" much more than S-expressions? Maybe not. Do these more elaborate syntax designs help? I say, in general, yes.

It is also notable that many people manage to underestimate what the traditional mainstream languages actually do well, as "different" is not always "better". As well, it may make sense to build something which at least looks familiar, so that "sane" programmers won't quite so readily balk upon seeing it.


I don't really know if this helps, but I need to sleep...



Steve Wart

Jun 9, 2012, 10:39:18 AM
to augmented-...@googlegroups.com
Thanks Toby

Recursive Drawing is amazing.

Josh Marinacci

Jun 10, 2012, 2:27:08 AM
to augmented-...@googlegroups.com
I think GUI builders should be considered graphical programming. If programming is defined as instructing a computer to do something, then GUI builders let you do half of the instructing visually: the choice of components, the visual look, and the layout. I think that's very valid, especially since it's a task that was done in text files before we had GUI builders. Let us not discount our successes, lest we become AI.

I completely agree with Toby that you should start with the programming experience first, then work on the underpinnings. The other day I had a thought: what if we designed a language to be very easy to read, even at a glance? In this language the indentation, fonts, formatting, and extra symbols would all be meaningful. Here's a mockup of what it might look like (those of you with pure-text email clients may not get the full effect):

root:Node pt:Point » find
     n in root.children
         «? find n (pt - root.translate)
     ? root.bounds.contains (pt-root.translate)
         « root
     « null


This is a function. What is a function but a subroutine which accepts input and returns output? In this case the arguments are listed first, followed by the french quotes (which I discovered by accident can be typed on the Mac keyboard with alt-| and alt-shift-|). They give you a sense of what is coming into the function and what is going out. Rather than the 'return' keyword it uses the left-pointing french quote. This way, at a glance, you can see which lines are the exit points of the routine. I also used some other formatting to distinguish conditionals and loops.
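For comparison, here is a rough conventional-syntax reading of the mockup as a Python sketch (the Node/Point fields - children, translate, bounds - are inferred from the mockup, so treat them as assumptions): a recursive hit test that returns the deepest node containing the point.

# Rough conventional-syntax reading of the mockup above: a recursive hit test
# that returns the deepest node containing the point, or None. The Node/Point
# fields (children, translate, bounds) are inferred from the mockup.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Point:
    x: float
    y: float
    def __sub__(self, other: "Point") -> "Point":
        return Point(self.x - other.x, self.y - other.y)

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float
    def contains(self, p: Point) -> bool:
        return self.x <= p.x <= self.x + self.w and self.y <= p.y <= self.y + self.h

@dataclass
class Node:
    bounds: Rect
    translate: Point = field(default_factory=lambda: Point(0, 0))
    children: List["Node"] = field(default_factory=list)

def find(root: Node, pt: Point) -> Optional[Node]:
    local = pt - root.translate
    for n in root.children:                 # «? find n (pt - root.translate)
        hit = find(n, local)
        if hit is not None:
            return hit
    if root.bounds.contains(local):         # ? ... « root
        return root
    return None                             # « null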


On parsing: I've started playing with OMeta/JS. It's really easy to work with. In about 20 minutes I got it working with NodeJS, so I can build command-line parsers and compilers.

- Josh


-- 
Josh Marinacci

ravi

Jun 11, 2012, 6:14:45 PM
to augmented-...@googlegroups.com
I think there is a fundamental reason for this: there are no real-world representations for processes. How do you represent a loop? Do you see any loops in the real world? The closest thing is a circle, and it is not in the dimension of time. We as a species do not have better descriptions for processes than words, except for formulas, which are huge abstractions and not at all visually friendly.

Note that imperative languages do not lend themselves to visual representation. Your best bet is representing pure functional programs visually, because state (variables and such) hides intent.
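A small sketch of why pure functions suit visual/dataflow representation, in Python (the graph encoding is invented purely for illustration): a pure expression is just a DAG of value-producing nodes, so it can be drawn as boxes and wires and evaluated without any notion of time or hidden state.

# Small sketch: a pure expression as a dataflow graph (DAG). Each node names an
# operation and its input nodes; evaluation is just recursion over the graph,
# with no hidden state, which is what makes it drawable as boxes and wires.
GRAPH = {
    "a":   ("const", 3),
    "b":   ("const", 4),
    "sum": ("add", "a", "b"),
    "out": ("mul", "sum", "sum"),     # (a + b) * (a + b)
}

OPS = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}

def evaluate(node, graph=GRAPH, cache=None):
    cache = {} if cache is None else cache
    if node not in cache:
        kind, *rest = graph[node]
        if kind == "const":
            cache[node] = rest[0]
        else:
            args = [evaluate(n, graph, cache) for n in rest]
            cache[node] = OPS[kind](*args)
    return cache[node]

print(evaluate("out"))   # 49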



George Maney

Jun 11, 2012, 11:45:30 PM
to augmented-...@googlegroups.com

The one huge visual programming language success that is always overlooked is relay ladder logic, of the sort widely used in programmable logic controllers. It's in your car, your cell phone, and in industrial automation everywhere. It has been done digitally for more than fifty years, but it goes back to the early days of steam engines and steam relays (18th century), so it may be the oldest programming language known.
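For readers unfamiliar with it, here is a tiny sketch of what ladder logic computes, in Python rather than any real IEC 61131-3 notation (the motor start/stop rung is a textbook example, not taken from a specific controller program): each rung is a boolean expression over contacts that drives a coil, and the whole ladder is re-evaluated on every scan cycle.

# Tiny sketch of ladder-logic semantics: each rung is a boolean expression over
# contacts (inputs and coil states) that drives a coil, and the whole ladder is
# re-evaluated on every scan cycle. The motor start/stop rung below is a
# classic textbook example, not from any specific controller program.
def scan(inputs, coils):
    """One scan cycle: evaluate every rung against current inputs and coils."""
    new = dict(coils)
    # Rung 1: seal-in circuit -- (START pressed OR motor already running)
    #         AND STOP not pressed  =>  MOTOR coil energized.
    new["MOTOR"] = (inputs["START"] or coils["MOTOR"]) and not inputs["STOP"]
    # Rung 2: pilot lamp simply follows the motor coil.
    new["LAMP"] = new["MOTOR"]
    return new

coils = {"MOTOR": False, "LAMP": False}
timeline = [
    {"START": True,  "STOP": False},   # operator presses START
    {"START": False, "STOP": False},   # releases START; seal-in holds the motor
    {"START": False, "STOP": True},    # presses STOP; motor drops out
]
for inputs in timeline:
    coils = scan(inputs, coils)
    print(inputs, "->", coils)
# MOTOR latches on after START, stays on via the seal-in contact, and stops on STOP.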

Clemens Ott

Mar 21, 2015, 9:01:46 AM
to augmented-...@googlegroups.com
Hi Josh,

The reason you see visual programming as a failure, despite domain-specific successes like embedded and sensor-centric programming environments, is that visual programming languages have a "hard limit", as Eric Hosick calls it. The fact that this limit sits in the algorithm layer is why visual programming languages are perceived to have failed. We have come up with a language and platform in which we have overcome this hard limit.

If you want to have a look please visit www.customonlinesoftware.com

greetings
Clemens