In conclusion, the claimed advantages of visual programming tools, namely that they make programs easier to create and understand, are almost always a mirage. They can only succeed in the simplest of cases, and at best they result in the suboptimal situation where the visual elements are simply obfuscating containers for textual code.
So, it's not a bad idea in the situations where it works, then. (And you're missing some other places where it *is* a demonstrably good idea, like process control.)
I'm not going to teach my robotics class to code in C--I want them to actually get something working. This includes things you're saying aren't represented in visual programming paradigms, e.g., interrupt handlers and co-routines, which do exist and are fairly easy for kids to grasp. It's the same with anything: things need to be introduced piece by piece, and the vagaries of typing, "special" characters they don't normally use, error outputs in the console... just no. It's too much cognitive overhead for something they're simply not able to deal with in short, one-hour-a-week sessions.
Is it a bad idea for me, a (all-too) seasoned programmer? Not always, no. I'll build my apps the old-fashioned way--but even I use blockly-like programming for quick-and-dirty stuff, or composing things I've already written.
Blanket statements like "IS BAD MKAY" are disingenuous.
You are confusing graphical dataflow languages (the ones with boxes that have hidden options, and arrows connecting those boxes) with what Scratch does. Scratch is a text language where the program statements and types are predefined shapes that eliminate syntax errors. You cannot write a syntactically incorrect program in Scratch, because the boxes simply don't fit together. Beyond this syntax help, Scratch doesn't hide anything and doesn't format differently from a pure text language.
That said, I do agree with much of what you say in regards to other educational languages, such as the one used for the Lego Mindstorms robot kits. That language is derived from LabVIEW, and most beginners find it pretty difficult to move beyond a few blocks or to wire in things like variables. My guess is that a language that hits a complexity barrier at variable assignment is not going to scale well :-).
Wow, thank you for this article. I completely agree and at the same time completely disagree.
You are on point that programming can't be reduced to blocks and arrows.
At the same time, if every textual command could be made visual, maybe we could have a visual language that comes close to a real interpreted language.
What if all keywords, variables, and functions had visual counterparts?
Maybe it is not about making programming more accessible, but about creating a visual programming language, one that could match the complexity of a standard textual language.
Some forms of visual programming are good for learning the basics, and once you know what you are doing, it gets a lot easier to use text elements to create more advanced programs. However, it is a bad idea to use only block-type visual code.
While I agree that all these problems plague most visual programming languages, I disagree that these are inherent problems with visual languages. Perhaps they are inherent problems in visual languages that are representing imperative paradigms.
Consider a functional dataflow language with Haskell-like types: textual dataflow languages are really just constructing a DAG, not mutating state in an imperative style. Now consider that you can define that DAG either textually (parse -> AST -> DAG) or visually (by directly manipulating the DAG). If you could easily switch between these representations, and had a solid type system, first-class functions, and modules instead of a single text file, it wouldn't have any of these issues. The problem with visual languages is that they are usually not very good languages, and that "structural"/low-level things are easier to deal with as text, but (for dataflow languages at least) high-level structures are more suited to visual representations.
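The "textual dataflow is just DAG construction" point can be sketched in a few lines. This is a minimal illustration of my own (the `Node` class and operation names are invented for the example, not taken from any particular language):

```python
from dataclasses import dataclass

@dataclass(frozen=True, eq=False)  # eq=False: nodes compare by identity
class Node:
    """One vertex in the dataflow DAG: an operation plus its input edges."""
    op: str
    inputs: tuple = ()

    def eval(self, env):
        # Evaluating the DAG is a pure fold over its structure: no mutation.
        args = [n.eval(env) for n in self.inputs]
        if self.op == "const":
            return env[self]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(self.op)

# Textual construction of the same graph a visual editor would let you draw:
a = Node("const")
b = Node("const")
graph = Node("add", (Node("mul", (a, a)), b))  # a*a + b

print(graph.eval({a: 3, b: 4}))  # 13
```

A visual editor manipulating this graph directly, and a parser producing it from text, would be two front-ends for the same structure.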
Unreal's Blueprints are a pretty good example of how an imperative programming language can work visually. Your point about not being able to fully understand a program without looking at the corresponding C++ is totally right, but in practice the intended audience isn't actual programmers, so it's actually been really valuable - even at AAA studios.
Perhaps a better example is LabView or Reaktor - both of which feature highly complicated, encapsulated code completely written in the visual language. The problem is that writing low level code in these languages is super complicated (for the reasons you listed in the article), but the payoff of being able to do the high level "business logic" visually is really big. If a language could be represented expressively in both text and visual formats, and diffed using the textual representation, I think it would be very powerful (of course for specific domains).
Visual Studio offers tools such as ADO.Net or Entity framework designers which are excellent to quickly get the data structures and code to interact with a RDBMS. Another example is SSIS on MS SQL server. Workflow designers are also useful. I mean to say that hybrid is good.
Using Scratch as a stand-in for all visual programming seems a little disingenuous. I also doubt the creators of Scratch hold your misconceptions. It was created for a specific purpose, and the underlying code is not simplistic.
Take a gander at something like RFML from Red Foundry or the entire Mendix ecosystem for better examples of visual programming.
One big reason why it is a good idea, where possible, is that it is clear. And in a world being swallowed by poorly written, bug-ridden software, I think that reason is quite worthwhile.
"Visual programming isn't for programmers."
He specifically discusses tools that were supposed to make professional programming visual using UML, so yes, visual programming was indeed at one point "for programmers."
"So, it's not a bad idea in the situations where it works, then."
That's a statement that can be applied to literally anything. Zeppelins work when the gas bubble is totally secure and no ruptures or flames get near it. Yet, for some reason, we don't use them any more. The fact that an idea is applicable in a handful of situations does not make it "good."
"I'm not going to teach my robotics class to code in C--I want them to actually get something working."
Why not write a robotics library for C (or whichever language you choose) and let your students write robotics code that scales, while sparing them from writing robotics firmware? Honestly, if you told me a robot was designed using visual programming, I wouldn't buy it, let alone allow it to handle my things.
I agree that dialog boxes as an input mechanism are terrible. Anything that splits the code up at too fine a level of granularity and that gets in the way of fluid navigation is going to be terrible for usability and productivity, but I don't see this as a necessary result of trying to explore different representations of program logic.
My personal preference would be to use YAML as a textual modelling language, combined with Python or C for functional components; with navigation and understanding aided by generating various visual representations of the system: Most usefully at the level of systems and components, but without limiting finer granularity representations (dependency graphs, control flow graphs, data flow graphs, parse trees, etc.. etc..) either.
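To make the hybrid concrete, here is a minimal sketch of what I mean, with everything hypothetical (the component names, the model schema, the `REGISTRY` lookup): a declarative model describing components and connections, wired up by a few lines of Python. In practice the model dict would come from `yaml.safe_load()` on a real YAML file; it is inlined here so the sketch is self-contained.

```python
# Stands in for the result of yaml.safe_load("components.yaml") --
# PyYAML would return exactly this structure from the equivalent YAML.
model = {
    "components": [
        {"name": "source", "type": "Counter", "params": {"limit": 3}},
        {"name": "sink", "type": "Printer", "params": {}},
    ],
    "connections": [{"from": "source", "to": "sink"}],
}

class Counter:
    """A functional component: produces a sequence of values."""
    def __init__(self, limit):
        self.limit = limit
    def run(self):
        return list(range(self.limit))

class Printer:
    """A functional component: collects whatever it is fed."""
    def __init__(self):
        self.received = []
    def accept(self, items):
        self.received.extend(items)

REGISTRY = {"Counter": Counter, "Printer": Printer}

# Instantiate components from the model, then wire the declared connections.
instances = {c["name"]: REGISTRY[c["type"]](**c["params"])
             for c in model["components"]}
for conn in model["connections"]:
    instances[conn["to"]].accept(instances[conn["from"]].run())

print(instances["sink"].received)  # [0, 1, 2]
```

The same model dict is what you would hand to a renderer to generate the component diagram - the visual representation is derived from the textual model, never the source of truth.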
I used to do seismic data processing before I became a software developer, and most of the programs we used were very similar to the visual languages described by the author. We built processing flows (or program diagrams) out of blocks connected together with arrows, with most of the parameters of each block buried in the settings. All of this is to say it was a way of allowing non-programmers to do programming-like work, without training as coders on top of training as geophysicists. This is, to me, the correct application of visual languages. I don't think we should ever be using them as a primary tool to build software, but they are damn useful in allowing others to do very complicated tasks within software, especially when we are talking about data analysis.
I agree about source control, the increased complexity of one line of code mapping to multiple blocks, and the difficulty of abstraction. But why do you think it's necessary to define high-level blocks through textual code?
A high-level block can represent an underlying visual function, even one defined in another file, much like external higher-level functions in textual code. Math.pow(2,4) isn't really a single line of code but multiple lines abstracted away; likewise, you could have a Math.pow() block whose underlying logic is visually defined somewhere else.
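The abstraction being described is exactly what a textual function already gives you. A minimal sketch (the `pow_block` name is made up for the example):

```python
def pow_block(base, exponent):
    """The 'underlying logic' a Math.pow block would hide: a plain loop.
    A visual editor could render this body as blocks in another file,
    exactly as a text language keeps it in another module."""
    result = 1
    for _ in range(exponent):
        result *= base
    return result

# At the call site, textual or visual, only the abstraction is visible:
print(pow_block(2, 4))  # 16
```

Whether the definition behind the call is stored as text or as a diagram is, in principle, an implementation detail of the editor.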
After spending two years in LabVIEW, I concur.
In the end, G-IDE systems are just as capable as traditional languages; the difference is that a G-IDE allows people with minimal programming skills to create complex systems that are impossible to maintain.
In traditional languages today, it honestly takes someone of high intelligence to create systems that are impossible to maintain - typically working alone, without a program manager.
No matter how much tinkering you do in the G-IDE, it's not as functional as traditional code. I wrote a VISA wrapper in one page of Python code that does the same thing as five different VIs that take up multiple windows.
Which is easier to adapt to new systems? Which is easier to maintain?
The Python VISA wrapper.
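For readers who haven't touched instrument control: such a wrapper is typically a thin class over a resource opened with PyVISA's ResourceManager. The sketch below is my own illustration, not the commenter's code - the `Instrument` class, the fake resource, and the specific SCPI commands are all assumptions (real command sets are device-specific):

```python
class Instrument:
    """Thin wrapper over a VISA resource (e.g. one returned by PyVISA's
    ResourceManager.open_resource). Any object exposing write()/query()
    works, which keeps the wrapper testable without hardware attached."""

    def __init__(self, resource):
        self._res = resource

    def identify(self):
        return self._res.query("*IDN?").strip()

    def set_voltage(self, volts):
        # VOLT is a common SCPI command; exact syntax varies by device.
        self._res.write(f"VOLT {volts:.3f}")

    def read_voltage(self):
        return float(self._res.query("MEAS:VOLT?"))

# A fake resource standing in for real hardware in this example:
class FakeResource:
    def __init__(self):
        self.volts = 0.0
    def write(self, cmd):
        if cmd.startswith("VOLT "):
            self.volts = float(cmd.split()[1])
    def query(self, cmd):
        if cmd == "*IDN?":
            return "FAKE,PSU,0,1.0\n"
        if cmd == "MEAS:VOLT?":
            return str(self.volts)

psu = Instrument(FakeResource())
psu.set_voltage(3.3)
print(psu.identify(), psu.read_voltage())
```

The whole thing fits on one page, which is the commenter's point: the equivalent LabVIEW VIs spread the same logic across multiple windows.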
Last decade I spent a fair bit of time working with .NET Workflow Foundation - a visual programming abstraction. Many projects devolved into "abstract art" - this paradigm failed, and Microsoft has phased this API out after several iterative retries.