See Bret Victor's work.
On Jun 11, 2017 6:09 PM, "Weston Beecroft" <west...@gmail.com> wrote:
Seems like there might be something there, but I think it would typically be more desirable to modify your algorithm after using this to get insights about it, rather than modifying the data it's operating on. One (remote) possibility is that it would infer an appropriate modification to the algorithm based on the way a user modified the visualized data—but that sounds... hard ;)
On Sunday, June 11, 2017 at 1:31:42 PM UTC-7, John Carlson wrote:
Where I'd like to see this go is manipulation of the visualized data.
On Jun 11, 2017 1:50 PM, "Weston Beecroft" <west...@gmail.com> wrote:
Hello! Since someone posted an older video of my abstract visual debugger here in the past, I thought you guys might be interested to see where it is a couple years later: https://youtu.be/KwZmAgAuIkY

The general research direction I'm interested in exploring with it is along these lines: we need instruments in software development that have some relation to instruments like, e.g., microscopes in physical science—but with certain important differences. The commonality is that in both cases we would like to observe the state of some system, but for it to be comprehensible to us, it has to be transformed in some way, and we have to understand the nature of this transformation. In both cases we want an instrument that provides automatic, consistent transformations of the state of some system into something comprehensible to us.

In the case of software development, we'd like to observe the state of executing programs, but there is too much data, too rapidly evolving. So, I'm interested in exploring/developing things like 'abstractoscopes,' which automatically abstract away various subsets of program data in order to provide higher-level, comprehensible views.

The above demo takes the approach of presenting only data structures evolving in time via the operations applied to them. I believe this 'view' is useful because programmers tend to think about algorithms in terms of operations on data structures, but traditional debugging approaches are tied to lower-level control-flow views. (It's not mentioned in the video, but the software can also switch between 'temporal modes' for operation playback: real-time, uniformly spaced, proportionally spaced.)

It's getting pretty close to being in 'alpha' state, so with any luck, a downloadable version will be available before too long. (But I am just working on it when I get lucky with free time now and again, so it's hard to say anything more definite.)

Lemme know what you think!
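To make the 'operations on data structures' view concrete, here is a minimal sketch (in Java, with invented names; the video does not show the tool's internals) of an operation log that a debugger like this might record, replayed in the three temporal modes mentioned:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of an operation log a visual debugger could record,
    // replayed in the three "temporal modes" mentioned above.
    public class OpLog {
        record Op(String description, long nanos) {}
        enum Mode { REAL_TIME, UNIFORM, PROPORTIONAL }

        private final List<Op> ops = new ArrayList<>();

        void record(String description) {
            ops.add(new Op(description, System.nanoTime()));
        }

        void playback(Mode mode) throws InterruptedException {
            for (int i = 0; i < ops.size(); i++) {
                if (i > 0) {
                    long gapMs = (ops.get(i).nanos() - ops.get(i - 1).nanos()) / 1_000_000;
                    Thread.sleep(switch (mode) {
                        case REAL_TIME    -> gapMs;      // preserve recorded spacing
                        case UNIFORM      -> 200L;       // fixed spacing
                        case PROPORTIONAL -> gapMs / 10; // scaled spacing
                    });
                }
                System.out.println(ops.get(i).description()); // stand-in for redrawing a view
            }
        }

        public static void main(String[] args) throws InterruptedException {
            OpLog log = new OpLog();
            List<Integer> xs = new ArrayList<>(List.of(3, 1, 2));
            xs.add(0);                     log.record("add(0)    -> " + xs);
            xs.remove(Integer.valueOf(3)); log.record("remove(3) -> " + xs);
            xs.sort(null);                 log.record("sort()    -> " + xs);
            log.playback(Mode.UNIFORM);
        }
    }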
On Jun 11, 2017 7:34 PM, "Weston Beecroft" <west...@gmail.com> wrote:
I assume you're referring to a concept he worked on where some program data is visualized, the user manipulates the visualization, and some parameter in the code is tweaked to match as a result. So maybe the analog for my software is that the manipulation causes an algorithm to be tweaked in a corresponding manner, which would be a generalization of Bret Victor's parameter-tweaking concept. For instance, you try writing a sort algorithm, but the visualization produced has a couple of things out of order, so the user goes in and drags them into the correct place; then my software would need to figure out a way of modifying the original algorithm so that it would produce the list as it now appears. I think there's an inherent, potentially insurmountable difficulty here, in that one set of data modified in this way is insufficient information to imply the necessary algorithmic change: a correct sorting algorithm should work for any data set, but there's only enough information available to say how the algorithm should be modified for the single data set being visualized. I guess you could eventually get around this by doing the same thing with enough different data sets...
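Weston's underdetermination point can be seen in miniature: a search over candidate algorithms needs more than one corrected example before the intended change is pinned down. A toy illustration (hypothetical, not part of the debugger):

    import java.util.*;
    import java.util.function.UnaryOperator;

    // One corrected data set leaves several candidate "algorithms" consistent
    // with it; adding examples narrows the set. Toy illustration only.
    public class Underdetermined {
        static final Map<String, UnaryOperator<List<Integer>>> CANDIDATES = Map.of(
            "identity", xs -> xs,
            "reverse",  xs -> { var r = new ArrayList<>(xs); Collections.reverse(r); return r; },
            "sort",     xs -> { var r = new ArrayList<>(xs); Collections.sort(r); return r; });

        static Set<String> consistent(Map<List<Integer>, List<Integer>> examples) {
            Set<String> ok = new TreeSet<>();
            for (var c : CANDIDATES.entrySet())
                if (examples.entrySet().stream()
                        .allMatch(ex -> c.getValue().apply(ex.getKey()).equals(ex.getValue())))
                    ok.add(c.getKey());
            return ok;
        }

        public static void main(String[] args) {
            // One dragged-into-place list: "reverse" and "sort" both still fit.
            System.out.println(consistent(Map.of(List.of(2, 1), List.of(1, 2))));
            // A second data set eliminates "reverse".
            System.out.println(consistent(Map.of(
                List.of(2, 1), List.of(1, 2),
                List.of(1, 3, 2), List.of(1, 2, 3))));
        }
    }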
The holy grail of live programming is to be able to manipulate the logic of live executing programs with appropriate concrete examples to guide the process.
One way this can be done is by scrubbing a concrete value displayed on screen (e.g. a position) and mapping the change back to some abstract code (I've done demos with this in APX; also see Ravi Chugh's direct manipulation in programming work). This isn't really good enough, though: it doesn't change the program's logic.
Another possibility is programming by demonstration, where you take your concrete visualized data and create your abstractions around it (or, if you believe in automagic, have the computer do it for you if you provide lots of examples).
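On the scrubbing idea: the basic mechanics are that a displayed value keeps a link back to the source literal that produced it, and dragging the value rewrites that literal and re-runs the program. A minimal sketch with invented names (this is not how APX implements it):

    // A displayed value keeps a link back to the literal that produced it;
    // scrubbing rewrites the literal and re-runs. Invented names, not APX.
    public class Scrub {
        static String source = "circle(40, 10)"; // toy program text: circle(x, radius)

        // Rewrite the literal at a known source span when the user scrubs.
        static void scrub(int start, int end, int newValue) {
            source = source.substring(0, start) + newValue + source.substring(end);
            System.out.println("re-run with: " + source);
        }

        public static void main(String[] args) {
            // The view recorded that "10" came from columns 11..13 of the source.
            scrub(11, 13, 25); // user drags the radius handle from 10 to 25
        }
    }

As noted above, this only reaches constants; the program's logic is untouched.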
On Sunday, June 11, 2017 at 4:41:32 PM UTC-7, John Carlson wrote:
I think the idea is to not separate the algorithm from the visualization, but have them integrated. So if something is sorting wrong, and you grab a value in the visualization, that value is highlighted in the algorithm, say at the last place it changed (you'll have to run the algorithm in reverse or some such: an undo stack). Good luck.
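That suggestion amounts to tagging every write the algorithm makes with an undo record that remembers which step performed it; grabbing a value in the view then walks the stack backwards. A sketch, with invented names:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Every write to the visualized array pushes an undo record tagged with
    // the step that made it, so grabbing a value can answer "where did this
    // last change?" by walking the stack. Invented names.
    public class TracedArray {
        record Write(int index, int oldValue, String step) {}

        private final int[] data;
        private final Deque<Write> undo = new ArrayDeque<>();

        TracedArray(int... data) { this.data = data; }

        void set(int i, int v, String step) {
            undo.push(new Write(i, data[i], step));
            data[i] = v;
        }

        // Most recent write wins: ArrayDeque iterates newest-first after push().
        String lastChange(int i) {
            for (Write w : undo) if (w.index() == i) return w.step();
            return "never written";
        }

        public static void main(String[] args) {
            TracedArray a = new TracedArray(3, 1, 2);
            a.set(0, 1, "swap in pass 1"); // one bubble-sort style swap,
            a.set(1, 3, "swap in pass 1"); // recorded as two writes
            System.out.println(a.lastChange(1)); // -> swap in pass 1
            System.out.println(a.lastChange(2)); // -> never written
        }
    }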
Ah, I see—thanks for clarifying. I think that connection between code and visualization is important for some things, but it's actually an explicit goal of my project to keep the two separate. My hypothesis is that different classes of errors exist at different levels of abstraction; current tools address the low levels (like lines of code), but it's more difficult to get information about high-level algorithm behavior (at a level independent even of programming language). It would be interesting to think of better ways of integrating with the development process in general, though.
Is there anything we can borrow from our knowledge of human conceptual structure that might carry over to an effective program representation scheme? (i.e. what can cognitive models of human conceptual 'schemata' tell us?)
Humans are way too complex to analyze reliably in this way. We can only guess what our conceptual structure is, and how to leverage this in a programming representation. One thing is: we don't really think abstractly, we just slip in examples of abstract concepts; e.g. "a man walks into a bar" will conjure up some image of some man walking into some bar in your head, albeit a very un-detailed one.
Programming by demonstration has largely failed to make much of an impact, given a focus on "demonstrating" and "automatic abstraction", two concepts that haven't turned out to work very well. But one could imagine moving from a less general program (fixed data and lots of mocking) to a more general one via manual adjustments.
The easier problem is to do this in a more domain-specific way.
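The 'less general to more general' progression is easy to show in miniature: version 1 runs on fixed, mocked data; the programmer then manually promotes the fixed parts to parameters. A toy sketch:

    import java.util.List;

    // "Less general -> more general via manual adjustments": v1 bakes in the
    // data and threshold; v2 is the same logic after the programmer extracts
    // them as parameters by hand. Toy example.
    public class Generalize {
        static long countHighV1() { // v1: fixed data, mocked threshold
            return List.of(3, 9, 4, 12).stream().filter(x -> x > 5).count();
        }

        static long countHigh(List<Integer> xs, int threshold) { // v2: generalized
            return xs.stream().filter(x -> x > threshold).count();
        }

        public static void main(String[] args) {
            System.out.println(countHighV1());                      // 2
            System.out.println(countHigh(List.of(3, 9, 4, 12), 5)); // 2, same behavior
            System.out.println(countHigh(List.of(1, 2), 0));        // now reusable
        }
    }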
Maybe you could embed something into applications which executes separate programs that were constructed by demonstration.
Could it be useful to give demonstrations of some intermediate representations rather than final output?
It seems like 'iterative' demonstration-based programming systems would need something like an 'output syntax' as well as an input syntax, so that in the same way an interpreter embodies the relationship between input syntax and program semantics, maybe we could build two-way interpreters that understand the relationship between 'output syntax' and program semantics as well (keeping in mind that this is for domain-specific programming systems only, so the potential output of a program is highly constrained). Also, 'representation' would be a better word than 'syntax', since the latter implies certain unnecessary things.
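A trivially small domain makes the two-way interpreter idea concrete: the model renders to an output representation, and a user-edited copy of that output re-parses back into the model. Entirely hypothetical, and only workable because the domain constrains the output so tightly:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy "two-way interpreter" for a tiny domain-specific output syntax:
    // model -> rendered output, and (possibly edited) output -> model.
    public class TwoWay {
        static String render(Map<String, Integer> model) {
            StringBuilder sb = new StringBuilder();
            model.forEach((k, v) -> sb.append(k).append(" = ").append(v).append('\n'));
            return sb.toString();
        }

        static Map<String, Integer> parse(String output) {
            Map<String, Integer> model = new LinkedHashMap<>();
            for (String line : output.split("\n")) {
                String[] parts = line.split(" = ");
                if (parts.length == 2) model.put(parts[0], Integer.parseInt(parts[1]));
            }
            return model;
        }

        public static void main(String[] args) {
            Map<String, Integer> model = new LinkedHashMap<>(Map.of("width", 80));
            String edited = render(model).replace("80", "120"); // user edits the *output*
            System.out.println(parse(edited)); // model updated: {width=120}
        }
    }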
"a specific program (fixed data, lots of mocking) being transformed manually by a programmer into a general program"
The only reason you need syntax, I think, is to order operations and parameters. If operations and parameters are ordered by the GUI (or by the user), then syntax is not needed.
I think the model is the key. Collections of collections are similar to syntax, I agree.
I think updating syntax is a waste of time, but could possibly be doable. The key is to update values, and let the user deal with the order of operations and parameters.
(My 2 cents: this is likely not a very user-friendly system.)
John
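If the GUI supplies the ordering, the 'program' reduces to an ordered collection of operation objects with parameter slots, as John describes. A sketch under that assumption (names invented):

    import java.util.List;
    import java.util.function.IntBinaryOperator;

    // No textual syntax: operations and parameters are ordered by the GUI
    // (modeled here as a literal list), and execution folds an accumulator
    // through the ordered steps. Invented names.
    public class NoSyntax {
        record Step(String name, IntBinaryOperator op, int param) {}

        static int run(int input, List<Step> program) {
            int acc = input;
            for (Step s : program) {
                acc = s.op().applyAsInt(acc, s.param());
                System.out.println(s.name() + "(" + s.param() + ") -> " + acc);
            }
            return acc;
        }

        public static void main(String[] args) {
            // Reordering this list is the user's act of editing the program.
            List<Step> program = List.of(
                new Step("add", (a, b) -> a + b, 5),
                new Step("mul", (a, b) -> a * b, 3));
            run(2, program); // add(5) -> 7, mul(3) -> 21
        }
    }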
From: Weston Beecroft
Sent: Monday, June 12, 2017 6:42 PM
To: Augmented Programming
Subject: Re: Abstract visual debugger (update)
The reason I was using the term syntax initially is that I think of it as a representation with a definite relation to some semantics. So with an 'output representation' (aka output 'syntax'), I was thinking you could sort of re-parse the program's modified output in order to update the program semantics. There's some complication here if you think about it as literally syntax and parsing, though: more likely there is a connection between program output and some abstract 'program model'; when the program output is modified, the program model is updated.
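The program-model variant can skip literal re-parsing if every rendered fragment of output carries a back-pointer to the model node that produced it; an edit to the output is then applied directly to its originating node. A small sketch of that provenance idea (invented names):

    import java.util.List;

    // Instead of re-parsing edited output, each rendered fragment keeps a
    // back-pointer to the model node that produced it; edits go straight to
    // that node. Invented names.
    public class Provenance {
        static class Node { // one node of the abstract program model
            int value;
            Node(int value) { this.value = value; }
        }
        record Fragment(String text, Node origin) {} // output + provenance

        static List<Fragment> render(List<Node> model) {
            return model.stream().map(n -> new Fragment("[" + n.value + "]", n)).toList();
        }

        public static void main(String[] args) {
            List<Node> model = List.of(new Node(1), new Node(2));
            List<Fragment> view = render(model);
            view.get(1).origin().value = 9; // user "drags" the second fragment
            System.out.println(render(model).stream().map(Fragment::text).toList());
        }
    }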
Well, if you have a deep learning system, parsing a user interface may be possible. Whether it's useful is another question. However, I don't know if the deep learning system can operate the keyboard and mouse without a lot of training based on user traces. This is the many-examples-to-abstraction problem again that Sean mentioned. They just now have a labelled collection of videos for human actions (DeepMind Kinetics). Collecting human actions cross-application is difficult, but is likely doable with a totally integrated system à la Smalltalk, as long as events aren't swallowed without creating an undo record/object. Within an application, you can design it however you want.
Caveat: I did not write the model interpreter for TWB/TE and am only slightly familiar with the difficulties of creating an interpreter (I suck at it). I believe it’s possibly one of the harder problems in computer science.
I am not sure how many people allow their traces to be captured. Probably not many. Labels can come from the application, I think.
The key is to capture the trace as the program itself, and have the user introduce control structures, possibly suggested by the software, possibly after the initial trace is done. The user may have to identify traces that they want “joined” if the control structures are to be found automatically, along the lines of NFA to DFA conversion.
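Capturing the trace as the program itself might look like the following sketch: the recorded actions are the program, and the user introduces a control structure afterwards by wrapping a selected range of steps in a loop. Names are invented, and the NFA-to-DFA style joining of multiple traces is not attempted:

    import java.util.ArrayList;
    import java.util.List;

    // The recorded trace *is* the program; a control structure is introduced
    // afterwards by wrapping a user-selected range of steps in a loop.
    public class TraceProgram {
        sealed interface Step permits Action, Loop {}
        record Action(String description) implements Step {}
        record Loop(int times, List<Step> body) implements Step {}

        static void run(List<Step> program) {
            for (Step s : program) {
                if (s instanceof Action a) System.out.println(a.description());
                else if (s instanceof Loop l)
                    for (int i = 0; i < l.times(); i++) run(l.body());
            }
        }

        public static void main(String[] args) {
            // Raw trace, as captured from the user's actions.
            List<Step> trace = new ArrayList<>(List.of(
                new Action("open file"), new Action("copy row"), new Action("paste row")));
            // The user selects steps 1..2 and asks for them to repeat 3 times.
            List<Step> body = List.copyOf(trace.subList(1, 3));
            trace.subList(1, 3).clear();
            trace.add(1, new Loop(3, body));
            run(trace);
        }
    }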
I agree. There’s the AST, or program model in a non-live program on disk on a turned off computer, and the data model in a live program. Your debugger handles the data model. We’re trying to integrate the program model with the data model (the data model becomes the program model and vica versa), so that the program model is live and editable at the same time the data model is. There may be two levels of program model, the compiled program and the interpreted program. The trick is to remove the distinction between the compiled program and the interpreted program. TWB/TE compiled the interpreted program at one point, but I removed the feature, because I was essentially duplicating code, so the compiled program actually runs the “interpreter” (collections of method calls) without the GUI. It was slow (in 1993 on Sun 3/50), and could use JIT compiling, but I’m not sure what I lose in that case. The holy grail I refer to is talking about is totally integrating the compiled code (interpreter) with the interpreted code (Java 9?), so that compiled code can become interpreted code during runtime and vica versa. I did support both compiled code and interpreted code in the same output binary in TWB/TE for a short time, but I thought the compiled code would diverge from the interpreted code, so I removed the compiled code. I thought the interpreted code more closely followed the user’s intent—if I could take the compiled code and test it in the GUI as a live program model, that would be ideal. IF we could easily extend Java bytecode, people wouldn’t have to keep writing compilers and interpreters for their own (visual) languages, and can use the JVM (or CLR or whatever Apple provides) for both compiling to and interpreting. I believe LISP did most of this, except I don’t know about the decompiling bit. The trick behind extension is to provide ranges of type IDs for people, similar to what IANA does for TCP port numbers. I guess Java/CLR didn’t want to take that business on. There’s probably another way…
We may need a reasonable instruction set. Java bytecode is a reasonable instruction set I believe. I don’t have much experience with CLR, Lisp or Smalltalk. Having a common bytecode for all languages would be another holy grail. I am not sure if CLR has been put on a chip.
Note that I haven’t mentioned any text in the above.
TWB/TE was written in C++ in 1992-1993 before JVM, CLR, IDEs etc. Control Center was available, but not very fast. I wanted TWB/TE to become MOOSE (Multithreaded Object Oriented Stack Environment), but didn’t get support (industry was concerned we were competing with them).
I guess I can always download OpenJDK and start hacking.
How do we get the ast/asg both as compiled and as a data structure (interpreted) and maintain them in sync? Do we have to test both now?
Do I have to maintain my own compiler? I've not been good at interpreters or compilers in the past. I do believe implementing operations separately from the base interpreter/compiler can be done. Someone has to implement (if/then/else, loops, recursion, tail recursion) for me, I hope. Do I have to implement undo operations for your code? There should be an object model standard for operation objects [ do(), undo(), getUndoRecord(), etc. ]
What is the standard ast/asg and can I comment on it? Should we put effort into defining a standard ast/asg?
Inquiring minds want to know.
John
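The operation-object standard John asks about is essentially the command pattern. A minimal rendering of his [ do(), undo(), getUndoRecord() ] sketch — `do` is a reserved word in Java, so `apply` stands in; this is a suggestion, not an existing standard:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Command-pattern version of the proposed operation-object standard.
    // A suggestion only; "do" is reserved in Java, so apply() stands in.
    public class Operations {
        interface Operation {
            void apply();
            void undo();
            String getUndoRecord(); // human-readable entry for the undo UI
        }

        static class SetValue implements Operation {
            private final int[] slot; private final int next; private int prev;
            SetValue(int[] slot, int next) { this.slot = slot; this.next = next; }
            public void apply() { prev = slot[0]; slot[0] = next; }
            public void undo() { slot[0] = prev; }
            public String getUndoRecord() { return "set " + prev + " -> " + next; }
        }

        public static void main(String[] args) {
            int[] cell = {1};
            Deque<Operation> undoStack = new ArrayDeque<>();
            Operation op = new SetValue(cell, 7);
            op.apply(); undoStack.push(op);
            System.out.println(cell[0] + " via " + op.getUndoRecord()); // 7 via set 1 -> 7
            undoStack.pop().undo();
            System.out.println(cell[0]); // 1
        }
    }

Control structures (if/then/else, loops, recursion) would then be operation objects too, interpreted by a base engine that individual operations plug into.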
From: Raoul Duke
Sent: Monday, June 12, 2017 8:54 PM
To: augmented-programming
Subject: Re: Abstract visual debugger (update)
we should for sure all be working on views of the ast/asg (IntentionalComputing), and the views should be personal and implemented across all tooling so there are no more curly-brace positioning wars.
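Ending the brace wars comes down to making brace placement a property of the view rather than of the stored program. A toy sketch of per-user rendering of the same AST node (hypothetical names):

    // The stored program is structure; brace placement is a per-user view
    // preference applied at render time. Hypothetical sketch.
    public class AstView {
        record If(String cond, String thenStmt) {} // a one-node "AST"

        enum BraceStyle { SAME_LINE, NEXT_LINE }

        static String render(If node, BraceStyle style) {
            String open = style == BraceStyle.SAME_LINE ? " {" : "\n{";
            return "if (" + node.cond() + ")" + open + "\n    " + node.thenStmt() + "\n}";
        }

        public static void main(String[] args) {
            If node = new If("x > 0", "y = 1;");
            System.out.println(render(node, BraceStyle.SAME_LINE)); // one user's view
            System.out.println(render(node, BraceStyle.NEXT_LINE)); // another's view
        }
    }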