Can we ever successfully have a programming experience like standing up and drawing on a big digital whiteboard, doing boxes and arrows, and yet making real code?
The thing in my mind is:
(a) the pictures will be virtual / backed by computer power / not just ink on plastic. It must round-trip. It must be 'integrated'. It must be an aspect, a viewpoint, cf. coretalk (http://www.baychi.org/bof/future/20030325c/)
I’m working on this right now. The key seems to be “create by abstracting” or “programming by example”: the ability to sketch out a concrete artifact and then generalize it into an abstraction. So HoloLens and Surface (with a stylus) are great for sketching and manipulating concrete artifacts, and then it’s just a matter of progressive abstraction (hopefully).
--
You received this message because you are subscribed to the Google Groups "Augmented Programming" group.
To unsubscribe from this group and stop receiving emails from it, send an email to augmented-progra...@googlegroups.com.
To post to this group, send email to augmented-...@googlegroups.com.
Visit this group at https://groups.google.com/group/augmented-programming.
For more options, visit https://groups.google.com/d/optout.
> I want to move from "thinking to program" to "programming to think"
> What we see as code is really just highly abstracted thought. It doesn't connect back to the concrete examples that led to these abstractions, there is no record, there are no mechanisms to convert concrete examples into code. So what we need is a second medium that exists alongside code to "think" through problems and mine abstractions from concrete problem solutions.
Once you have your concrete example abstracted into source code, it is trivial to re-create that concrete example again, or any other concrete example that is an instance of the abstraction :). With live programming, it’s just a matter of loading that example up and then working on it through the log view. So in a way, it is already “directly” in your code base.
In order to crack this nut, we have to get away from thinking about “what the code looks like to do something” and more about “how could we come up with the code to do something.” Most PL-based thinking focuses on the former and not the latter…maybe wikilon is falling into the same trap?
From: augmented-...@googlegroups.com [mailto:augmented-...@googlegroups.com]
On Behalf Of David Barbour
Sent: Wednesday, December 16, 2015 8:09 AM
To: pi...@googlegroups.com
Cc: augmented-...@googlegroups.com
Subject: Re: [PiLuD] Whiteboard programming?
On Tue, Dec 15, 2015 at 5:54 PM, Sean McDirmid <smc...@microsoft.com> wrote:
--
This is a question I’m trying to figure out right now as well. Frankly, I want to move from "thinking to program" to "programming to think", meaning the programming environment should replace the whiteboard, but in order for that to happen, I think we really need different abstraction levels.

What we see as code is really just highly abstracted thought. It doesn't connect back to the concrete examples that led to these abstractions; there is no record, and there are no mechanisms to convert concrete examples into code. So what we need is a second medium that exists alongside code to "think" through problems and mine abstractions from concrete problem solutions.

Right now, my thinking is to take the "log view" that normally just traces code execution and make it into a two-way experience. Adding a line "hello world" to the log view will necessarily cause a printf("hello world") statement to be added to the code. There are many ways to arrive at a value, from simply stating the value (very concrete) to taking existing values bound to variables and combining them in some way. So in the log view, you state the value you want, and then work out how to get it abstractly, with the help of a concrete execution context to guide that. So replace the log view with a notebook or whiteboard, and it at least provides a framework for how we should move forward.

-----Original Message-----
From: augmented-...@googlegroups.com [mailto:augmented-...@googlegroups.com] On Behalf Of Raoul Duke
Sent: Wednesday, December 16, 2015 7:43 AM
To: PiLuD
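The two-way log view described above can be sketched in a few lines. This is a hypothetical illustration, not code from any shipping system: the program is a list of statements, running it produces the log, and appending a line to the log view writes the corresponding Log statement back into the program.

```python
# Hypothetical sketch of a two-way "log view": the log is not just trace
# output; it is an editable surface that writes code back into the program.

class Program:
    def __init__(self):
        self.statements = []  # each statement is a callable yielding log lines

    def run(self):
        log = []
        for stmt in self.statements:
            log.extend(stmt())
        return log

class LogView:
    """Editing the log edits the program: appending a line here appends
    the equivalent of a printf/Log statement to the underlying code."""
    def __init__(self, program):
        self.program = program

    def append_line(self, text):
        # the most concrete way to "arrive at a value": simply state it
        self.program.statements.append(lambda: [text])

prog = Program()
view = LogView(prog)
view.append_line("hello world")  # adds the equivalent of printf("hello world")
print(prog.run())                # ['hello world']
```

Generalizing would then mean replacing the literal `lambda: [text]` with an expression over variables in scope, guided by the concrete execution context.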
Loops are probably easy. Say you have:
Hello world
Hello world
Hello world
Hello world
That could be four printfs or a loop with N=4 iterations. In the latter case, you get a little loop widget with “Hello world” in the iteration boxes; any change to one iteration box affects all of them. Then you might want to generalize “4” into something more abstract that still produces “4”.
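A rough sketch of the two readings of that log (illustrative code, not any particular tool): four independent print statements versus one loop whose count can then be generalized.

```python
# Two candidate abstractions behind the same four-line log.

def as_printfs():
    # most concrete reading: four independent statements
    log = []
    log.append("Hello world")
    log.append("Hello world")
    log.append("Hello world")
    log.append("Hello world")
    return log

def as_loop(n=4):
    # abstracted reading: a loop "widget" with one shared iteration body;
    # editing the body changes every iteration box at once
    return ["Hello world"] * n

assert as_printfs() == as_loop()  # same concrete log, different abstraction

# Generalizing "4": replace the literal with something that still produces 4,
# e.g. the length of some list the loop is really iterating over.
targets = ["a", "b", "c", "d"]
assert as_loop(len(targets)) == as_printfs()
```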
For “if” statements, it is more like suppressing something you had in a previous execution context and substituting new behavior in a new execution context, based on a condition.
So say we had:
Hot
And you now want:
Cold
So we edit the log view to suppress “Hot” and add the execution we want, “Cold”:
if false:
    Log("Hot")
else:
    Log("Cold")
Now, we work on generalizing “false”, so say we know it is related to a variable called temp, then we simply search for an f such that f(temp) is false. If temp’s concrete value is 5, then that could be temp > 5 or temp < 5 among other things. So just find something that fits, then try another example until you get the abstraction you want.
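That search can be sketched as enumerating a small hypothesis space of predicates and filtering by concrete examples; each new example prunes candidates until only the intended abstraction survives. The hypothesis space below is purely illustrative.

```python
# Find predicates f such that f(temp) matches every concrete example so far.

candidates = {
    "temp > 5":  lambda temp: temp > 5,
    "temp < 5":  lambda temp: temp < 5,
    "temp == 5": lambda temp: temp == 5,
    "temp > 0":  lambda temp: temp > 0,
}

def consistent(examples):
    """Keep the candidates that agree with every (temp, expected) pair."""
    return {name: f for name, f in candidates.items()
            if all(f(t) == want for t, want in examples)}

# First example: temp is concretely 5, and we edited the log so the
# condition must be False. Two candidates fit, so we are not done yet.
print(sorted(consistent([(5, False)])))             # ['temp < 5', 'temp > 5']

# A second example disambiguates: at temp = 7 we want the condition True.
print(sorted(consistent([(5, False), (7, True)])))  # ['temp > 5']
```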
I’m calling it…human machine learning :)
While it might seem strange to do that to a regular programmer, on the other hand we have to acknowledge that we tend to do X intending to get Y, but get Y', that is, Y plus bugs that are think-os on our part. So maybe doing things "backwards" can let us really get what we wanted in the first place.
> Once you have your concrete example abstracted into source code, it is trivial to re-create that concrete example again, or any other concrete example that is an instance of the abstraction :)
> we have to get away from thinking about “what the code looks like to do something” and more about “how could we come up with the code to do something.”
> maybe wikilon is falling into the same trap?
If you created an abstraction around Shakespeare’s plays, it could, or anything else for that matter (just look at RPG’s recent talk on machine-generated poetry!). The problem is recalling the example, which might be difficult. There is also always documentation and meta-text.
Of course, readability, extensibility, composability, factorability, etc. are all important. But cracking this “whiteboard” feature in particular, I guess, revolves around a concrete-abstract bidirectional mapping. You can’t explore all of them at once, and anyways, plenty of work has explored all of these features without any guidance on how the programmer actually thinks. It’s time to start from the programmer’s perspective and work our way from there. We want something that is fundamentally “easier”, not just “more clever”!
I’m just trying to be constructive here: when I read about your work, I feel so mentally taxed that I’m not sure what I should get out of it. It isn’t just you, but 90% of the PL papers are like that: they focus on features and actions, the encoding of solutions as code vs. the act of solving problems. We are truly lost in the wilderness. Try taking the programmer’s perspective in your work (and the stories you build around it). Talk from what they must do and what they are thinking when they are using your language to do something.
--
> The problem is recalling the example, which might be difficult.
> cracking this “whiteboard” feature in particular, I guess, revolves around a concrete-abstract bidirectional mapping.
> It’s time to start from the programmer’s perspective and work our way from there.
> We want something that is fundamentally “easier”, not just “more clever”!
> 90% of the PL papers are like that: they focus on features and actions, the encoding of solutions as code vs. the act of solving problems.
It's just that different designers have different visions for the programmer experience. You could look into the Eve project and the 'Out of the Tar Pit' paper for one vision. You could look into Smalltalk for some of Alan Kay's earlier visions. The Forth language was yet another perspective on how we should interact with computers. My Awelon project is heavily oriented around the 'Personal Programming Environment as Extension of Self' short thesis I wrote on a mailing list a while back.
Rendered HTML is a concrete instance of the underlying HTML code, which can definitely be (and often is) more abstract. Of course, they are both just concrete bits, but the interpretation of the latter into the former (and into other instances) is possible because of its abstractness.
Machine learning provides a good example of how to go from multiple concrete examples to an abstract solution. Humans do pretty much the same thing (our abstraction hardware works over multiple examples). We need to capture that somehow.
> I don't believe this 'perspective' has been neglected nearly so much as you seem to believe it has been.
Lately it has been! When I say “90%”, I have to mostly go back to the 90s and 80s to find work in that 10% that does include a programmer experience perspective. I can definitely point out some papers (e.g. those related to Self), but after the PL community started on its academic/science kick at the end of the century, there hasn’t been much incentive to do such work or write from that perspective. When I read your blog posts, I’m not really getting the context I need to map your language innovations to programmer experience improvements. This is just a suggestion that might help you communicate what you are doing (or even improve the work).
I wish I were rich & could found an institute to fund all this & keep you in the beverages of your choice for life, no matter where you live.
> Machine learning provides a good example of how to go from multiple concrete examples to an abstract solution. Humans do pretty much the same thing (our abstraction hardware works over multiple examples). We need to capture that somehow.
> > I don't believe this 'perspective' has been neglected nearly so much as you seem to believe it has been.
> Lately it has been! When I say “90%”, I have to mostly go back to the 90s and 80s to find work in that 10% that does include a programmer experience perspective.
> When I read your blog posts, I’m not really getting the context
> map your language innovations to programmer experience improvements
> For a given programming paradigm, there is no need to re-hash the 'programmer perspective' arguments in every paper. With OOP, you'd get many of these arguments from papers written twenty to thirty years ago. But this doesn't mean a paper oriented around enhancing OOP in some minor way (performance, C++ 'concepts', etc.) is somehow ignoring the programmer's experience. It's just assuming the experience you want is pretty close to what you have.
I guess this is where we’ve fallen off the tracks. We have made so many assumptions about what is good that we no longer reconsider them. I guess you are OK if you are communicating with a group of people that share your values, background, and assumptions, but when we are trying to change programming in drastic ways, we don’t really have the benefit of such groups, and relying on those groups would anyways just lead to more of the same incrementalism. And anyways, we should be constantly challenging the usefulness of FP, OOP, and so on! OOP is a good example of a paradigm that needs a reboot through challenged thinking.
> And then there are programming tools - specific languages built around a paradigm, IDEs, projectional editors and live programming environments. Sometimes domain specific (e.g. for developing multi-media systems). There are plenty of papers written about these and their 'programmer perspective' motivations. They just rarely have the flavor to make it into academic journals.
Again, PL academia is a club that you either fit into or not. It encourages incremental research by its heavy reliance on unjustified programmer experience assumptions. If you want to leave those assumptions behind, you simply won’t fit.
> I usually try to provide context via hyperlinks in the first couple paragraphs. I have no desire to re-hash context in every post. But maybe I'll try to develop some nice summary posts for clearer context.
Treat each post/paper/essay as if it was your first! Every time you write an essay, your thinking has probably migrated away from the last time anyways, and the ongoing “story” can always be improved.
> Your choice of words 'innovations' and 'improvements' connote a small change to an existing model. I have no idea what that model would be for Awelon project. The closest projects I've encountered - Eve, Unison, Tunes OS, OVAL - are or were more or less designing entirely new programmer experiences.
If you are focused on designing an experience, both innovation (idea application) and invention (idea generation) are useful (innovate when possible, invent when necessary). Take for example Eve: their goal is to design a decent experience, not randomly invent new concepts (they will invent as they need to, however). So I probably should have instead said:
“When I read your blog posts, I’m not really getting the context I need to map your language design to improvements in the programmer experience.”
This assumes the experience that you are designing could still be considered as programming :)
> I guess this is where we’ve fallen off the tracks. We have made so many assumptions about what is good that we no longer reconsider them.
> Treat each post/paper/essay as if it was your first! Every time you write an essay, your thinking has probably migrated away from the last time anyways, and the ongoing “story” can always be improved.
Typically what happens is that the older paradigm is challenged by a newer (or just ‘other’) paradigm that is just as bad as (and doesn’t even replace) the old one. So FP is pushed forward as the new hotness vs. OOP, but in reality, both are on shaky experiential foundations. What’s more, there is little discussion of what OOP is actually useful for vs. what FP is actually useful for; we only talk about features and attributes at a design-free level. If you want to fit in, you gotta join a club, which means drinking the Kool-Aid on one of these positions. You make very few friends if you are “pro-OOP and pro-FP” and/or “anti-OOP and anti-FP.”
Ideologies just cloud the ultimate story and goal, and conversations that are premised by ideology are boring and not very useful. And anyways, your language seems to be different enough that such premises are not going to be very useful even if you buy the underlying ideology.
From: augmented-...@googlegroups.com [mailto:augmented-...@googlegroups.com]
On Behalf Of David Barbour
Sent: Thursday, December 17, 2015 9:00 AM
To: pi...@googlegroups.com
Cc: augmented-...@googlegroups.com
Subject: Re: [PiLuD] Whiteboard programming?
On Wed, Dec 16, 2015 at 6:19 PM, Sean McDirmid <smc...@microsoft.com> wrote:
--
> if you want to compose various procedures that have fairly independent effects, it can be very difficult to "compose" them since everything must be threaded through
We should really keep an open mind about this. The procedural implicit imperative model has held on for so long because it is just so easy to get things working from the start (even if it has longer term maintenance costs). The story should come first, then whatever techniques can be used to make that story real can be applied. If it involves monads or a powerful effect system, so be it (of course, techniques have to be developed, but this should also be story driven).
So really, this is an example of being abstract or concrete:
> This shared state is used for effects: an outbox and inbox for messages, a list of published topics and subscriptions, a tuple space or blackboard, etc.. By simply 'yielding' occasionally (with a continuation in our shared state) we give our caller opportunity to perform effects: drain the outbox, inject messages to an inbox, manage subscriptions, etc..
This is a bit more concrete, but is it possible to walk us through a complete concrete example? You don’t even have to have it implemented, I’m just curious about what the experience would feel like in a real situation.
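As a rough guess at what that experience might feel like (a sketch under assumptions, not Awelon's actual design; all names hypothetical), the quoted scheme maps naturally onto a generator that keeps its outbox/inbox in shared state and yields so the caller can perform effects:

```python
# Sketch: a procedure keeps an outbox/inbox in shared state and 'yields'
# so its caller can drain the outbox and inject incoming messages.

def worker(state):
    state["outbox"].append("subscribed: news")
    yield                       # caller may now perform effects
    for msg in state["inbox"]:
        state["outbox"].append("echo: " + msg)
    state["inbox"].clear()
    yield

state = {"inbox": [], "outbox": []}
proc = worker(state)

next(proc)                      # run until the first yield
sent = list(state["outbox"])    # caller drains the outbox
state["outbox"].clear()
state["inbox"].append("hello")  # caller injects a message

next(proc)                      # resume until the next yield
print(sent, state["outbox"])    # ['subscribed: news'] ['echo: hello']
```

Each `yield` is where the "continuation in our shared state" lives: the caller drains the outbox, injects inbox messages, manages subscriptions, then resumes the procedure.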
From: augmented-...@googlegroups.com [mailto:augmented-...@googlegroups.com]
On Behalf Of David Barbour
Sent: Thursday, December 17, 2015 2:30 PM
To: pi...@googlegroups.com
Cc: augmented-...@googlegroups.com
Subject: Re: [PiLuD] Whiteboard programming?
On Wed, Dec 16, 2015 at 8:19 PM, Sean McDirmid <smc...@microsoft.com> wrote:
--