Abstract visual debugger (update)


Weston Beecroft

Jun 11, 2017, 1:50:48 PM
to Augmented Programming
Hello! Since someone posted an older video of my abstract visual debugger here in the past, I thought you guys might be interested to see where it is a couple years later: https://youtu.be/KwZmAgAuIkY

The general research direction I'm interested in exploring with it is along these lines: we need instruments in software development that are related to instruments like microscopes in the physical sciences, but with certain important differences. The commonality is that in both cases we would like to observe the state of some system, but for that state to be comprehensible to us, it has to be transformed in some way, and we have to understand the nature of the transformation. In both cases we want an instrument that performs automatic, consistent transformations of the state of a system into something comprehensible to us.

In the case of software development, we'd like to observe the state of executing programs, but there is too much data, too rapidly evolving. So, I'm interested in exploring/developing things like 'abstractoscopes,' which automatically abstract away various subsets of program data in order to provide higher level, comprehensible views.

The above demo takes the approach of only presenting data structures evolving in time via operations applied to them. I believe this 'view' is useful because programmers tend to think about algorithms in terms of operations on data structures, but traditional debugging approaches are tied to lower-level control flow views. (It's not mentioned in the video, but the software can also switch between 'temporal modes' for operation playback: real-time, uniformly spaced, proportionally spaced.)
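For concreteness, the three temporal modes boil down to choosing the delays between replayed operations. Here's a minimal sketch of that idea, assuming operations were recorded with wall-clock timestamps (the names `playback_delays` and `replay` are invented for illustration, not the tool's actual API):

```python
import time

def playback_delays(timestamps, mode, uniform_step=0.5, scale=1.0):
    """Delays between consecutive operations for each temporal mode."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if mode == "real-time":
        return gaps                        # original spacing
    if mode == "uniform":
        return [uniform_step] * len(gaps)  # equal spacing
    if mode == "proportional":
        total = sum(gaps) or 1.0
        # original proportions, rescaled to fit `scale` seconds overall
        return [scale * g / total for g in gaps]
    raise ValueError(f"unknown mode: {mode}")

def replay(operations, timestamps, mode, apply_op=print):
    """Apply each recorded operation with mode-appropriate pauses."""
    apply_op(operations[0])
    for op, delay in zip(operations[1:], playback_delays(timestamps, mode)):
        time.sleep(delay)
        apply_op(op)
```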

It's getting pretty close to being in 'alpha' state, so with any luck, a downloadable version will be available before too long. (But I'm just working on it when I get lucky with free time now and again, so it's hard to say anything more definite.)

Lemme know what you think!

John Carlson

Jun 11, 2017, 4:31:42 PM
to augmented-...@googlegroups.com
Where I'd like to see this go is manipulation of the visualized data.

--
You received this message because you are subscribed to the Google Groups "Augmented Programming" group.
To unsubscribe from this group and stop receiving emails from it, send an email to augmented-programming+unsub...@googlegroups.com.
To post to this group, send email to augmented-programming@googlegroups.com.
Visit this group at https://groups.google.com/group/augmented-programming.
For more options, visit https://groups.google.com/d/optout.

Weston Beecroft

Jun 11, 2017, 6:09:18 PM
to Augmented Programming
Seems like there might be something there, but I think it would typically be more desirable to modify your algorithm after using this to get insights about it, rather than modifying the data it's operating on. One (remote) possibility is that it would infer an appropriate modification to the algorithm based on the way a user modified the visualized data—but that sounds... hard ;)

John Carlson

Jun 11, 2017, 6:56:10 PM
to augmented-...@googlegroups.com
See Bret Victor's work.


Weston Beecroft

Jun 11, 2017, 7:34:14 PM
to Augmented Programming
I assume you're referring to a concept he worked on where some program data is visualized, the user manipulates the visualization, and some parameter in the code is tweaked to match as a result. So maybe the analog for my software is that the manipulation causes an algorithm to be tweaked in a corresponding manner, which would be a generalization of the parameter tweaking concept from Bret Victor. For instance, you try writing a sort algorithm, but the visualization produced has a couple things out of order, so the user goes in and drags them into the correct place; then my software would need to figure out a way of modifying the original algorithm so that it would produce the list as it now appears. I think there's an inherent, potentially insurmountable difficulty here in that one set of data modified in this way is insufficient information to imply the necessary algorithmic change: a correct sorting algorithm should work for any data set, but there's only enough information available to say how the algorithm should be modified for the single data set being visualized. I guess you could eventually get around this by doing the same thing with enough different data sets...

John Carlson

Jun 11, 2017, 7:41:32 PM
to augmented-...@googlegroups.com
I think the idea is not to separate the algorithm from the visualization, but to have them integrated. So if something is sorting wrong and you grab a value in the visualization, that value is highlighted in the algorithm, say at the last place it changed (you'll have to run the algorithm in reverse, or keep an undo stack).
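One cheap way to get that "last place it changed" information is to record provenance on every write while the algorithm runs. A toy sketch of the idea (the `TracedList` name and its interface are invented for illustration):

```python
class TracedList:
    """List wrapper that remembers, per index, the step at which the
    slot was last written -- enough to answer 'where did this value
    last change?' when a user grabs it in a visualization."""

    def __init__(self, items):
        self._items = list(items)
        self._step = 0
        self._last_write = {}  # index -> (step, old_value, new_value)

    def __getitem__(self, i):
        return self._items[i]

    def __setitem__(self, i, value):
        self._step += 1
        self._last_write[i] = (self._step, self._items[i], value)
        self._items[i] = value

    def __len__(self):
        return len(self._items)

    def provenance(self, i):
        """(step, old, new) for the last write to index i, or None."""
        return self._last_write.get(i)
```

Replaying the recorded (step, old, new) triples in reverse is essentially the undo stack described here.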

Good luck.


John Carlson

Jun 11, 2017, 7:52:06 PM
to augmented-...@googlegroups.com
The algorithm may be separated from the visualization for "production" purposes. Ideally the production code is bytecode or similar, which can be decompiled back to the original algorithm.

John

Sean McDirmid

Jun 11, 2017, 7:58:08 PM
to Augmented Programming

The holy grail of live programming is to be able to manipulate the logic of live executing programs with appropriate concrete examples to guide the process. 


One way this can be done is by scrubbing a concrete value displayed on screen (e.g. a position) and mapping the change back to some abstract code (I've done demos of this in APX; also see Ravi Chugh's work on direct manipulation in programming). This isn't really good enough, though: it doesn't change the program's logic.
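In its simplest form, scrubbing just maps a displayed value back to the literal that produced it and rewrites that literal. A toy sketch, assuming the value traces back to a named integer constant in the source (everything here is invented for illustration; the actual mechanism in APX is more involved):

```python
import re

def scrub(source, name, new_value):
    """Rewrite the integer literal bound to `name` with the scrubbed value."""
    pattern = rf"(?m)^({re.escape(name)}\s*=\s*)\d+"
    return re.sub(pattern, rf"\g<1>{new_value}", source)

source = "x_position = 40\nspeed = 3\n"
scrubbed = scrub(source, "x_position", 55)  # user drags the object to the right
```

As the message says, this only tweaks a parameter; the program's logic is untouched.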


Another possibility is programming by demonstration, where you take your concrete visualized data and create your abstractions around it (or, if you believe in automagic, have the computer do it for you if you provide lots of examples). 

Weston Beecroft

Jun 11, 2017, 8:02:40 PM
to Augmented Programming
Ah, I see—thanks for clarifying. I think that connection between code and visualization is important for some things, but it's actually an explicit goal of my project to keep the two separate. My hypothesis is that different classes of errors exist at different levels of abstraction; current tools address the low levels (like lines of code), but it's more difficult to get information about high-level algorithm behavior (at a level independent even of programming language). It would be interesting to think of better ways of integrating with the development process in general, though.


John Carlson

Jun 11, 2017, 8:27:07 PM
to augmented-...@googlegroups.com


On Jun 11, 2017 7:58 PM, "Sean McDirmid" <mcdi...@outlook.com> wrote:

Another possibility is programming by demonstration, where you take your concrete visualized data and create your abstractions around it


Yes, this is what we did in TWB/TE. We provided a condition desktop object which had visual inputs and outputs and a list of sequentially tested conditions (else if ...). The body of the condition (the procedure, in our parlance) was shown as a branch per condition in the recorder. We had a lookup menu item which started the condition and did recursion/looping. Each input had a corresponding column that fed the input into a boolean term. Inputs were compared in a string calculator, or subtracted in a number calculator. Outputs/procedures were similarly gathered from the conditions and fed out of the condition desktop object. What we didn't really have in TWB/TE was the idea of a collection, as shown in the video. We had poorly implemented text trees and text documents--that was our domain.

(or, if you believe in automagic, have the computer do it for you if you provide lots of examples). 

We didn't do automagic, but the thought is tempting, since you can create DFAs from examples. I haven't heard whether this has been done for CFGs yet. With human help, automagic has been done (very few examples in a limited domain). See SMARTedit by Tessa Lau.
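The usual starting point for learning DFAs from examples is the prefix tree acceptor: a trie-shaped automaton accepting exactly the positive examples, which induction algorithms such as RPNI then generalize by merging states. A minimal sketch of that first step:

```python
def prefix_tree_acceptor(positives):
    """Build a trie-shaped DFA accepting exactly the given strings."""
    trans = {}   # (state, symbol) -> state
    accept = set()
    fresh = 1    # state 0 is the start state
    for word in positives:
        s = 0
        for ch in word:
            if (s, ch) not in trans:
                trans[(s, ch)] = fresh
                fresh += 1
            s = trans[(s, ch)]
        accept.add(s)
    return trans, accept

def accepts(dfa, word):
    """Run the DFA; missing transitions reject."""
    trans, accept = dfa
    s = 0
    for ch in word:
        if (s, ch) not in trans:
            return False
        s = trans[(s, ch)]
    return s in accept
```

Without state merging this memorizes rather than generalizes, which is exactly why these techniques need lots of examples.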

John

John Carlson

Jun 11, 2017, 8:40:39 PM
to augmented-...@googlegroups.com
I'm not referring to lines of code when I say algorithm. More like discrete operations on desktop objects: menu select, button/key press, cut/copy/paste, send/receive, read/write, open/close, reset.

You'll have to define your own discrete operations on collections... we didn't get that far. I'm sure the typical methods are fine. No real mystery here.

Yes, TWB/TE programs were all compilable down to source code and reifiable from a compiled .exe.

John


Weston Beecroft

Jun 12, 2017, 5:09:06 PM
to Augmented Programming
I wasn't familiar with programming by demonstration before, but I think it makes a lot of sense. Here are a couple things that occurred to me (I realize there's probably nothing new here, but just kinda brain-dumping since sometimes an outsider's perspective is useful):

Since humans also learn by demonstration—more specifically, we inductively derive general categories by repeated exposure to diverse yet related instances of... things—is there anything we can borrow from our knowledge of human conceptual structure that might carry over to an effective program representation scheme? (i.e. what can cognitive models of human conceptual 'schemata' tell us?)

The easier problem is to do this in a more domain-specific way. I wonder if we'd make more progress finding good general solutions by initially building tools for multiple constrained domains. Like, a tool that can only build games, or e-commerce websites, etc.

Programming by demonstration may be intrinsically better suited for certain domains, while being weak in others. Maybe it's usually good for developing certain program modules, but not necessarily whole applications: maybe you could embed something into applications that executes separate programs constructed by demonstration.

It seems like 'iterative' demonstration-based programming systems would need an 'output syntax' as well as an input syntax. In the same way an interpreter embodies the relationship between input syntax and program semantics, maybe we could build two-way interpreters that also understand the relationship between 'output syntax' and program semantics (keeping in mind that this is for domain-specific programming systems only, so the potential output of a program is highly constrained). Also, 'representation' would be a better word than 'syntax,' since the latter implies certain unnecessary things.

Maybe general two-way interpreters could be written that take in formal descriptions of the correspondence between input/output representations, so the work of producing new domain-specific programming systems largely consists in providing such a formal description.
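One way to make the two-way-interpreter idea concrete is a toy sketch like the following. Everything here is invented for illustration: the "formal description" is just a list of (field, render-format, parse-function) triples, and the same description drives both directions, model → output representation and edited output → model.

```python
# Hypothetical sketch of a 'two-way interpreter': one domain description
# drives both rendering (model -> output) and updating (edited output -> model).

def make_two_way_interpreter(description):
    def render(model):
        # Forward direction: model -> output representation, one line per field.
        return "\n".join(fmt.format(model[field]) for field, fmt, _ in description)

    def update(model, output_text):
        # Backward direction: re-read an edited output representation into the model.
        new_model = dict(model)
        for (field, _, parse), line in zip(description, output_text.splitlines()):
            new_model[field] = parse(line)
        return new_model

    return render, update

# A toy domain: price and quantity of a product listing.
description = [
    ("price", "price: {:.2f}", lambda line: float(line.split(": ")[1])),
    ("qty",   "qty: {}",       lambda line: int(line.split(": ")[1])),
]
render, update = make_two_way_interpreter(description)

model = {"price": 4.5, "qty": 3}
out = render(model)              # 'price: 4.50\nqty: 3'
edited = out.replace("qty: 3", "qty: 7")  # the user edits the *output*
model2 = update(model, edited)   # ...and the model follows: qty becomes 7
```

The point of the sketch is only that producing a new domain-specific system would then mostly consist of writing the `description` list, not a new interpreter.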

Could it be useful to give demonstrations of some intermediate representations rather than final output? Maybe that's just another domain: you might do demonstrations with data structures as an 'output representation' (which might look something like my visualizer), and general algorithms could be induced.

Sean McDirmid

unread,
Jun 12, 2017, 5:27:09 PM6/12/17
to Augmented Programming

Is there anything we can borrow from our knowledge of human conceptual structure that might carry over to an effective program representation scheme? (i.e. what can cognitive models of human conceptual 'schemata' tell us?)


Humans are way too complex to analyze reliably in this way. We can only guess at what our conceptual structure is and how to leverage it in a programming representation. One thing is clear: we don't really think abstractly, we just slip in examples of abstract concepts; e.g. "a man walks into a bar" will conjure up some image of some man walking into some bar in your head, albeit a very un-detailed one.


Programming by demonstration has largely failed to make much of an impact, given its focus on "demonstrating" and "automatic abstraction," two concepts that haven't turned out to work very well. But one could imagine starting with a less general program (fixed data and lots of mocking) and moving to a more general one via manual adjustments.


The easier problem is to do this in a more domain-specific way.

I would claim that this is still very hard. Getting PBD to work even in a constrained space can be very frustrating, especially with current practice. 

maybe you could embed something into applications which executes separate programs which were constructed by demonstration.

I'm not a big fan of this kind of programming by demonstration. I like to think of it as "a specific program (fixed data, lots of mocking) being transformed manually by a programmer into a general program". The "demonstration" aspect is simply one way of getting a specific program, not the only way, and abstracting examples can occur either automatically (machine learning) or manually but assisted (my personal interest).

Could it be useful to give demonstrations of some intermediate representations rather than final output?

If you think of the problem as going from many specific programs to a general one, then there are plenty of intermediate programs to consider that increase abstraction incrementally. 



John Carlson

unread,
Jun 12, 2017, 5:34:11 PM6/12/17
to augmented-...@googlegroups.com


On Jun 12, 2017 5:09 PM, "Weston Beecroft" <west...@gmail.com> wrote:

It seems like 'iterative' demonstration-based programming systems would need something like an 'output syntax' as well as an input syntax, so that in the same way an interpreter embodies the relationship between input syntax and program semantics, maybe we could build two-way interpreters that understand the relationship between 'output syntax' and program semantics as well (keeping in mind that this is for domain-specific programming systems only, so the potential output of a program is highly constrained). Also 'representation' would be better than syntax, since that implies certain unnecessary things.

Representation is fine. I think actions should be encapsulated as atomic operations and collections of atomic operations, so syntax doesn't matter much except to help understanding (visual metaphors).

John Carlson

unread,
Jun 12, 2017, 5:37:37 PM6/12/17
to augmented-...@googlegroups.com
Distinguish program output from software visualization perhaps.   You don't always want to show the software visualization.


Weston Beecroft

unread,
Jun 12, 2017, 6:33:44 PM6/12/17
to Augmented Programming
"a specific program (fixed data, lots of mocking) being transformed manually by a programmer into a general program"

Interesting. That's pretty much an exact inverse of an approach I've been thinking about for building programming tools: start with a structure much more general than any particular program, then have the programmer guide a gradual instantiation process to produce some specific program.

In the inverse case, I suppose the programmer has to make decisions about which things that were once fixed should become parameters?

Weston Beecroft

unread,
Jun 12, 2017, 6:42:44 PM6/12/17
to Augmented Programming
The reason I was using the term syntax initially is because I think of it as: a representation with a definite relation to some semantics. So with an 'output representation' (aka output 'syntax'), I was thinking you could sort of re-parse the program's modified output in order to update the program semantics. There's some complication here if you think about it as literally syntax and parsing though: more likely there is a connection between program output and some abstract 'program model'; when the program output is modified, the program model is updated. 

To be more concrete, let's say your program input representation looks like typical program source code, which is used one way representing and specifying your 'program model' (maybe through parsing), and your program output is geometric figures. If there is a 'connection' between these geometric figures and the program model in the same way as there is between the source code and the program model, then the user can also manipulate the geometric figures in order to update the program model. (Granted, this is probably insanely complicated etc.—just an idea though.)
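A minimal sketch of that two-representation setup, with all names invented for illustration: one program model, a source-like representation, and a geometric one, where an edit to the figure flows back into the model.

```python
# Hypothetical sketch: one 'program model' with two connected representations,
# a source-like one and a geometric one. Editing either updates the model.

model = {"shape": "circle", "radius": 10}

def to_source(m):
    # Source-like input representation of the model.
    return f'{m["shape"]}(radius={m["radius"]})'

def to_figure(m):
    # Geometric output representation: a center point plus a radius.
    return {"center": (0, 0), "radius": m["radius"]}

def on_figure_edit(m, dragged_radius):
    # The user drags the circle's edge; the edit flows back into the model,
    # and the source representation simply re-renders from the updated model.
    m = dict(m)
    m["radius"] = dragged_radius
    return m

model2 = on_figure_edit(model, 25)
assert to_source(model2) == "circle(radius=25)"
```

The real difficulty, as noted above, is that actual program output rarely maps back to the model this directly; the sketch only shows the shape of the connection.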

John Carlson

unread,
Jun 12, 2017, 6:51:24 PM6/12/17
to augmented-...@googlegroups.com

The only reason you need syntax, I think, is to order operations and parameters.  If operations and parameters are ordered by the GUI (or by the user), then syntax is not needed.

 

I think the model is the key.  Collections of collections are similar to syntax, I agree.

 

I think updating syntax is a waste of time, but could possibly be doable.  The key is to update values, and let the user deal with the order of operations and parameters.

 

(My 2 cents. This is likely not a very user-friendly system.)

 

John

 


Weston Beecroft

unread,
Jun 12, 2017, 6:59:48 PM6/12/17
to Augmented Programming
I think we're mostly on the same page; I was just using the term 'syntax' in a confusing way. I think syntax in programming systems is sort of fetishized, partly by historical accident. Parsing a user interface to arrive at a model of something seems overcomplicated; just render the model to look like text instead (not to be dismissive of the practical difficulties involved, but it seems like the right direction).


John Carlson

unread,
Jun 12, 2017, 7:24:30 PM6/12/17
to augmented-...@googlegroups.com

Well, if you have a deep learning system, parsing a user interface may be possible. Whether it's useful is another question. However, I don't know if the deep learning system can operate the keyboard and mouse without a lot of training based on user traces. This is the many-examples-to-abstraction problem Sean mentioned again. They just now have a labelled collection of videos for human actions (DeepMind Kinetics). Collecting human actions cross-application is difficult, but is likely doable with a totally integrated system à la Smalltalk, as long as events aren't swallowed without creating an undo record/object. Within an application, you can design it however you want.

 

Caveat:  I did not write the model interpreter for TWB/TE and am only slightly familiar with the difficulties of creating an interpreter (I suck at it).  I believe it’s possibly one of the harder problems in computer science.

 

I am not sure how many people allow their traces to be captured.  Probably not many.  Labels can come from the application, I think.

 

The key is to capture the trace as the program itself, and have the user introduce control structures, possibly suggested by the software, possibly after the initial trace is done.  The user may have to identify traces that they want “joined” if the control structures are to be found automatically, along the lines of NFA to DFA conversion.
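The trace-as-program idea, plus the "joining" step, can be sketched as follows (a hypothetical toy: the trace format and the single-parameter generalization rule are invented for illustration, and real trace joining would be far messier):

```python
# Hypothetical sketch: each trace *is* the program -- a list of
# (operation, argument) steps. Where two traces the user asked to 'join'
# differ only in an argument, that argument becomes a parameter: a first,
# tiny step toward introducing structure automatically.

def join_traces(trace_a, trace_b):
    assert len(trace_a) == len(trace_b)      # same-shape demonstrations only
    program = []
    for (op_a, arg_a), (op_b, arg_b) in zip(trace_a, trace_b):
        assert op_a == op_b                  # same operations in both traces
        if arg_a == arg_b:
            program.append((op_a, arg_a))            # fixed argument
        else:
            program.append((op_a, "<param>"))        # generalized argument
    return program

trace_1 = [("open", "a.txt"), ("replace", "foo"), ("save", None)]
trace_2 = [("open", "b.txt"), ("replace", "foo"), ("save", None)]

assert join_traces(trace_1, trace_2) == [
    ("open", "<param>"), ("replace", "foo"), ("save", None),
]
```

Handling traces of different lengths, or introducing loops and branches, is where something like the NFA-to-DFA-style machinery would have to come in.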


Weston Beecroft

unread,
Jun 12, 2017, 8:01:33 PM6/12/17
to Augmented Programming
if you have a deep learning system, parsing a user interface may be possible.

I was trying to say that's what we're effectively doing with programming systems now. A programmer writes code as an interface to program construction: it's our way of selecting and configuring a set of 'language constructs' provided by a particular language. But ultimately, this is just a practical step toward constructing a model of the program (e.g. an AST). My stance is that it might be better to avoid syntax/parsing to begin with, structure things more like standard MVC systems, and have our program editors modify program models directly (and render those models as text). But I think the disconnect is mostly my poor choice of terminology.


Raoul Duke

unread,
Jun 12, 2017, 8:54:53 PM6/12/17
to augmented-programming
we should for sure all be working on views of the ast/asg (IntentionalComputing), and the views should be personal and implemented across all tooling, so there are no more curly-brace positioning wars.

John Carlson

unread,
Jun 12, 2017, 9:20:38 PM6/12/17
to augmented-...@googlegroups.com

I agree. There's the AST, or program model, in a non-live program on disk on a turned-off computer, and the data model in a live program. Your debugger handles the data model. We're trying to integrate the program model with the data model (the data model becomes the program model and vice versa), so that the program model is live and editable at the same time the data model is.

There may be two levels of program model: the compiled program and the interpreted program. The trick is to remove the distinction between them. TWB/TE compiled the interpreted program at one point, but I removed the feature because I was essentially duplicating code, so the compiled program actually runs the "interpreter" (collections of method calls) without the GUI. It was slow (in 1993, on a Sun 3/50) and could use JIT compiling, but I'm not sure what I'd lose in that case.

The holy grail I'm referring to is totally integrating the compiled code (interpreter) with the interpreted code (Java 9?), so that compiled code can become interpreted code during runtime and vice versa. I did support both compiled code and interpreted code in the same output binary in TWB/TE for a short time, but I thought the compiled code would diverge from the interpreted code, so I removed the compiled code. I thought the interpreted code more closely followed the user's intent; if I could take the compiled code and test it in the GUI as a live program model, that would be ideal.

If we could easily extend Java bytecode, people wouldn't have to keep writing compilers and interpreters for their own (visual) languages, and could use the JVM (or CLR, or whatever Apple provides) both to compile to and to interpret. I believe Lisp did most of this, except I don't know about the decompiling bit. The trick behind extension is to provide ranges of type IDs for people, similar to what IANA does for TCP port numbers. I guess Java/CLR didn't want to take that business on. There's probably another way…

 

We may need a reasonable instruction set; Java bytecode is a reasonable instruction set, I believe. I don't have much experience with the CLR, Lisp, or Smalltalk. Having a common bytecode for all languages would be another holy grail. I am not sure if the CLR has been put on a chip.

 

Note that I haven’t mentioned any text in the above.

 

TWB/TE was written in C++ in 1992-1993 before JVM, CLR, IDEs etc.  Control Center was available, but not very fast.  I wanted TWB/TE to become MOOSE (Multithreaded Object Oriented Stack Environment), but didn’t get support (industry was concerned we were competing with them).

I guess I can always download OpenJDK and start hacking.


John Carlson

unread,
Jun 12, 2017, 9:46:21 PM6/12/17
to augmented-...@googlegroups.com

How do we get the AST/ASG both as compiled code and as a data structure (interpreted), and keep them in sync? Do we have to test both now?

 

Do I have to maintain my own compiler? I've not been good at interpreters or compilers in the past. I do believe implementing operations separately from the base interpreter/compiler can be done. Someone has to implement if/then/else, loops, recursion, and tail recursion for me, I hope. Do I have to implement undo operations for your code? There should be an object-model standard for operation objects [ do(), undo(), getUndoRecord(), etc. ]
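A minimal sketch of that operation-object standard (hypothetical; the `AppendOp` example and snake_case method names are mine, following Python convention rather than the `getUndoRecord()` spelling above):

```python
# A minimal sketch of a standard operation object: every operation knows how
# to do itself, undo itself, and describe itself as an undo record.
class AppendOp:
    def __init__(self, value):
        self.value = value

    def do(self, state):
        state.append(self.value)

    def undo(self, state):
        state.pop()              # exact inverse of do()

    def get_undo_record(self):
        return {"op": "append", "value": self.value}

state = []
op = AppendOp(42)
op.do(state)       # state is now [42]
op.undo(state)     # state is back to []
```

With that contract in place, a program is just an ordered collection of such operation objects, which is how syntax-free ordering by the GUI (as suggested earlier in the thread) could work.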

 

What is the standard ast/asg and can I comment on it?  Should we put effort into defining a standard ast/asg?

 

Inquiring minds want to know.

John
