Personal Programming Environment as Extension of Self


David Barbour

Sep 20, 2013, 1:35:19 AM
to reactiv...@googlegroups.com, Fundamentals of New Computing, augmented-...@googlegroups.com
Over the last month, I feel like I stumbled into something very simple and profound: a new perspective on an old idea, with consequences deeper and more pervasive than I had imagined.

The idea is simply this: every user action is an act of meta-programming.

More precisely:
(1) Each user event appends to a tacit concatenative program.
(2) The output of the tacit concatenative program is another program.
(3) We can understand the former as rewriting parts of the latter.
(4) These rewrites include the user-model - navigation, clipboard, etc.

I will further explain this idea, why it is powerful, how it is different.

To clarify, this isn't another hand-wavy 'shalt' and 'must' proposal with no idea of how to achieve it. Hammering at a huge list of requirements for eight years got me to RDP. At this point, I have concrete ideas on how to accomplish everything I'm about to describe.

Users Are Programmers.

The TUNES vision is revived, and better than ever.

WHY TACIT CONCATENATIVE?

Concatenative programming is perhaps best known through FORTH. Most concatenative languages have followed in Charles Moore's forthsteps, sticking with the basic stack concept but focusing on higher-order programming, types, and other features.
 
A stack would be an extremely impoverished and cramped environment for a user; even many programmers would not tolerate it. Fortunately, we can move beyond the stack environment. And I insist that we do! Concatenative programming can also be based upon such structures as trees, Huet zippers, and graphs. This proposal is based primarily on tree-structured data and zippers, with just a little indirect graph modeling through shared state or explicit labels (details later).

A 'tacit' programming language is one that does not mention names for parameters or local variables. Many concatenative programming languages are also tacit, though the concepts don't fully intersect. 

A weakness of tacit concatenative programming is that, in a traditional text-based programming environment, users must visualize the environment (stack or other structure) in their heads and memorize a bunch of arcane 'stack shuffling' words. By comparison, variable names in text are easy to visualize and review.

My answer: change programming environments!

Powerful advantages of tacit concatenative programming include:
1. the environment has a precisely defined, visualizable value
2. short strings of tacit concatenative code are easy to generate
3. concatenative code is sequential, forming an implicit timeline 
4. code also subject to learning, pattern recognition, and rewrites
5. every step, small and large, is precisely defined and typed

Instead of an impoverished, text-based programming environment, we should offer continuous automatic visualization. Rather than asking users to memorize arcane words, we should offer direct manipulation: e.g. take, put, slide, toss, drag, drop, copy, paste. Appropriate tacit concatenative code is generated at every step, for every gesture. This code is easy to generate because generators can focus on short vectors of learned 'known useful' words without syntactic noise; this is subject to a variety of well-known solutions (logical searches, genetic programming, hill-climbing). 
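
To make the generation claim concrete, here is a rough sketch (in Haskell) of brute-force search over short word sequences. The toy word set and flat stack environment are stand-ins of my own for illustration, not the real vocabulary or environment; a practical generator would use the smarter searches mentioned above.

    import Data.List (find)

    -- A toy environment: a stack of integers, standing in for the richer
    -- tree/zipper environment described above.
    type Env = [Int]

    -- A tiny vocabulary of 'known useful' words, each a pure Env -> Env step.
    vocab :: [(String, Env -> Env)]
    vocab = [ ("dup", dup), ("drop", drp), ("swap", swp), ("add", add) ]
      where
        dup (x:xs)   = x:x:xs
        dup e        = e
        drp (_:xs)   = xs
        drp e        = e
        swp (x:y:xs) = y:x:xs
        swp e        = e
        add (x:y:xs) = (x+y):xs
        add e        = e

    -- Concatenation is composition: run a word sequence left to right.
    run :: [(String, Env -> Env)] -> Env -> Env
    run ws e = foldl (\acc (_, f) -> f acc) e ws

    -- Brute-force search for a short word sequence that maps an example
    -- input to the desired output.
    synthesize :: Int -> Env -> Env -> Maybe [String]
    synthesize maxLen input output =
        fmap (map fst) (find (\ws -> run ws input == output) candidates)
      where
        candidates    = concatMap sequencesOf [0 .. maxLen]
        sequencesOf n = mapM (const vocab) [1 .. n]

    -- e.g. synthesize 3 [2,3] [5] == Just ["add"]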

And then there are benefits that move beyond anything offered today, UI or PL.

Not only can we visualize the environment, we can animate it. Users can review and replay their actions, potentially from different perspectives or highlighting different objects. Since even the smallest dataflow steps are well defined, users can review at different temporal scales, based on the granularity of their actions - zooming in to see precisely what taking an object entails, or zooming out to see broad changes in an environment. 

Rewrites can be used to make these animations smoother, more efficient, and perhaps more aesthetically pleasing. And, in addition to undo, users can rewrite parts of their history to better understand a decision or to fix a mistake. 

The programming environment can also help users construct macros: pattern recognition is easy with tacit programming even if it were just in terms of sequences of words. However, patterns are augmented further by looking at context, the environment at the time a word was used. Proposed words can be refined with very simple decision procedures to account for slight context-sensitive variations. Discovered patterns can be used for simple compression of history, or be used for programming-by-example. 
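
As a small sketch of the 'sequences of words' case (ignoring the context-sensitive refinements for now; the function names here are mine, purely for illustration):

    import Data.List (tails, maximumBy)
    import Data.Ord (comparing)
    import qualified Data.Map as M

    type Action = String   -- one word of the user's action history

    -- All contiguous subsequences (n-grams) of a given length.
    ngrams :: Int -> [Action] -> [[Action]]
    ngrams n ws = [ take n t | t <- tails ws, length t >= n ]

    -- Suggest the most frequently repeated n-gram as a candidate macro.
    suggestMacro :: Int -> [Action] -> Maybe [Action]
    suggestMacro n history =
        case M.toList (M.filter (> 1) counts) of
          []         -> Nothing
          candidates -> Just (fst (maximumBy (comparing snd) candidates))
      where
        counts = M.fromListWith (+) [ (g, 1 :: Int) | g <- ngrams n history ]

    -- e.g. suggestMacro 2 ["take","rotate","put","take","rotate","toss"]
    --        == Just ["take","rotate"]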

An environment that recognizes a pattern might quietly and unobtrusively offer a constructed tool or macro, that the user might refine a little (e.g. clarifying the decision procedure) before using. The notion of 'dialecting' and 'DSLs' is replaced by problem-specific toolboxes and macros, where a tool may act a lot like a paintbrush.

Further, there are advantages from the formalism and typing!

For one example, the environment can guide user actions based on the typeful context - i.e. make appropriate suggestions. Also, multiple actions can be assigned to a single gesture or voice command, so long as they are distinguishable in most typeful contexts. (When there seems to be ambiguity, the environment can ask for clarification. Not a problem so long as it's rare.)

By introspecting the environment, we can also create words that are 'smart' about their application, i.e. automatically performing a search of the local environment to find an appropriate target, and perhaps validate that it is a unique target. This ability to be selectively imprecise can greatly reduce the burden on users and developers. (Usefully, we can separate the 'search' and 'apply' patterns such that augmenting any action with search is a simple composition.)
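
A tiny sketch of that separation (the tree type and names below are illustrative only, not a committed design):

    -- A toy environment: every node carries a value.
    data Tree a = Node a [Tree a]

    -- 'search': locate the first subtree whose root value satisfies a predicate.
    search :: (a -> Bool) -> Tree a -> Maybe (Tree a)
    search p t@(Node x kids)
      | p x       = Just t
      | otherwise = foldr (\k found -> maybe found Just (search p k)) Nothing kids

    -- 'apply': augmenting any action with search is just composition; here the
    -- action is applied at each outermost matching subtree (validating that the
    -- target is unique is omitted from this sketch).
    atTarget :: (a -> Bool) -> (Tree a -> Tree a) -> Tree a -> Tree a
    atTarget p action t@(Node x kids)
      | p x       = action t
      | otherwise = Node x (map (atTarget p action) kids)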

Tacit concatenative programming is *safer* than names. 

With parameter-based programming, the data-plumbing is untyped and ad-hoc. Further, captured names are almost never visible in the 'type' of a function or closure. This can lead to unsafe or inefficient behaviors, where names are captured in a closure that is then communicated, or shared by multiple threads. Essentially, the problem is that names are *too* expressive. We can use references in ways their referents cannot be used. We can put the word "gorilla" in the mailbox, but not the gorilla itself. 

This safety issue is especially relevant for RDP. I make heavy use of both location types ('where' is the value) and substructural types (functions that cannot be dropped, or cannot be copied, or both). Tacit concatenative makes safety-by-construction much easier.

Tacit concatenative programming CAN model use of names, i.e. in terms of lookup in an association list. My proposal will use this technique on occasion. But there is a very strong, visibly obvious distinction between the reference and the referent - i.e. the reference is a text value, while the referent is a gorilla!

THE USER MODEL

The tacit concatenative program can be understood as an unbounded stream of pure `state -> state` rewrite operations. In addition to these operations, users have freedom to undo, review, replay, and even rewrite their recent history of actions. Undo can be accomplished by the normal snapshot-replay mechanisms.

But we don't model the user as awkwardly 'above' the state, apart from it. Instead, we model the user within the state. Literally.

    (world * user) -> (world * user)

One might think of the 'user' here as the hero of a video-game, and the 'world' as a complex environment that can be navigated or manipulated. The hero will have hands to carry things, an inventory of loot and weapons, perhaps a list of special skills. The hero is so important and central to our model, that navigation is actually modeled by rolling the world under the hero. 

Of course, a user environment isn't a video game. (Or at least it shouldn't be used that way at all times!) But the same ideas hold.

We may have 'take' and 'put' actions to move objects from the world to the user. Navigation is often modeled using zipper-like operations through a document structure, or occasionally by something closer to a hyperlink (searching for an object by index). Instead of special skills, we have macros and a powerbox. Instead of loot and weapons, we have projects and domain-specific toolkits (e.g. paintbrushes, geometry manipulators).  
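
Here is a minimal sketch of that shape. The concrete value type, zipper, and word names are placeholders I'm using for illustration, not a final design:

    -- Values in the environment; stand-ins for documents, diagrams, etc.
    data Val = Unit | Num Int | Text String | Pair Val Val

    -- A Huet-style zipper: the focused value plus the path back to the root.
    data Ctx   = Top | InL Ctx Val | InR Val Ctx
    data World = World Ctx Val              -- the world, with a current focus
    data User  = User [Val]                 -- the user; here, just a 'hand'

    type Step = (World, User) -> (World, User)

    -- Navigation rolls the world under the user.
    zipLeft, zipRight, zipUp :: Step
    zipLeft  (World c (Pair a b), u) = (World (InL c b) a, u)
    zipLeft  s                       = s
    zipRight (World c (Pair a b), u) = (World (InR a c) b, u)
    zipRight s                       = s
    zipUp    (World (InL c b) a, u)  = (World c (Pair a b), u)
    zipUp    (World (InR a c) b, u)  = (World c (Pair a b), u)
    zipUp    s                       = s

    -- 'take' moves the focused value into the hand (leaving Unit behind);
    -- 'put' restores the most recently taken value to the focus.
    take', put' :: Step
    take' (World c v, User hs)     = (World c Unit, User (v:hs))
    put'  (World c _, User (v:hs)) = (World c v, User hs)
    put'  s                        = s

    -- A session is just the left-to-right composition of such steps.
    session :: [Step] -> (World, User) -> (World, User)
    session steps st = foldl (flip ($)) st steps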

In addition to 'hands', a rather interesting possibility is to have 'eyes' - programmable lenses that affect how we view, influence, and navigate the world. Through lenses we might introduce overlays, highlight important objects, gain x-ray vision for geometries, introduce a heads-up display, or collapse irrelevant structures. 

(NOTE: This hand-and-eye concept - where the hand is programmable by composable tools, and the eye is programmable by composable lenses, and this programmability is readily accessible to users - is one I've had in mind since about 2003.)

SHARE VALUES NOT ENVIRONMENTS

A user's environment is extremely personal and personalizable. 

Between pattern recognition, code generation, and programming by example, I imagine that the user and environment will tend to 'grow up' together, developing a private language specific to each human. In addition, the environment will acquire a great deal of private information about a user - e.g. relationships, financial information, pictures and messages.

So users won't want to share their personal environments at that granularity. And this is fortunate, because they can't. In general, there is no safe or sensible way to compose independent command streams from multiple users.

But users can share:

* values - numbers, text, and composites that may represent geometries, diagrams, documents, graphs, tables, recorded images, sounds, measurements
* behavior-specifying values - e.g. representing macros, lenses, tools, and authorities
* reactive values - normal or behavioral, time-varying with hidden dependencies

In RDP systems, sharing between agents occurs via an intermediate resource. Agents include other humans, but also sensors, databases, and actuators. The support for reactive time-varying values is a feature provided by RDP, and involves remaining attached to the value source to track updates. 

To share a value, we publish into some space shared with friends or customers, or a more global space (like a wiki). Private spaces can be established by a variety of protocols with trusted intermediaries, though they often must be bootstrapped in physical space. 

Not every user thinks about programming, or makes an effort to create something reusable. But I think most people will fiddle, find interesting ways to arrange lenses, rearrange documents, smash values together to create new value. Mashups will be the norm. And even people who aren't making any effort might be provided useful tools.

Everyone is a programmer some of the time. 


ENVIRONMENT METAPHORS FOR USERS?

I haven't started on the details for a user environment metaphor. 

The environments I've developed so far are still aimed at programmers in a text-based environment. I would probably be focused on a single stack if RDP didn't have declarative concurrency properties. (A single stack is painful for modeling concurrent tasks or workflows that must join or synch at some steps.)

But, based on my interests, I would focus on the following features:

* zoomable user interfaces with live documents
* diagrams, geometries, images, graphs, scene-graphs
* animated non-reactive values (video, GIFs, sound, etc.)
* widgets, variations suitable for use in RDP
* augmented reality systems (visual fingerprints, etc.)

What I can say is this: expressiveness will not be the issue here. We could model hypermedia systems, desktop metaphor, or whatever else we decide. 

The main difference from today's design would be that these are now constructed of fine-grained values, subject to introspection and reorganization and mashup, accessible for macro programming, and coexisting in a common language-based security model.

ENVIRONMENT IS ALSO A LIVE PROGRAM

Macros, tools, and so on are designed for volatile manipulation of state. But that manipulation of state should be meaningful! And to provide meaning to state, we must use an interpreter. But this interpretation should be live: as we continue to maintain the state, the meaning should be propagated automatically. 

Here are a few principles that are guiding my thoughts on this subject.

(1) Users must be able to assign their own, private meanings to state in their personal environments. Each graph, diagram, document, geometry, and so on can have a different meaning. Some of those meanings will be realized by programmatic interpretation. 

(2) ALL long-running behaviors and policies should have corresponding state in the environment. Every relationship, shared value, observation on reactive state, and so on should be accessible in this manner. This is essential for visibility, maintenance, and for revocation.

(3) Failure is ideally very coarse-grained. Dealing with partial-success is painful, complicated, and error-prone; we would greatly benefit from precise atomic success/fail boundaries. 

It's addressing these principles where RDP really shines. RDP is based upon continuous influence and observation, and also has very nice properties for runtime update and revocation. For clean failure, RDP enables time-warp style 'undo' even in an open system. Of course, there are practical latency limits on this (can't always correct the past), but those are partially addressed: RDP also enables speculative evaluation, so we can tentatively feel out 'what would happen if'. 

So, how do we model this separation? 

My current thought is that, since meaning is private to the user, the association between meanings and objects in the environment should be maintained as part of the user-model. I'm currently envisioning a very KISS model:  there is an association list at a standard location in the user-model of a form similar to:

       ("@foo" * [block interpreting foo])

Then, in the environment, users will have ("@foo" * fooStructure) objects scattered around with no particular organization. If the whole foo object is inside some larger structure, like "@bar", then it would be the prerogative of the bar interpreter to either ask for a foo interpreter or provide its own interpretation.

In order to enforce the "ALL long running behaviors are modeled in visible state" principle, the initial program has no authority; it's ultimately just a sequence of pure state->state transforms. Capabilities are introduced only in the second phase. The real argument to the block interpreting foo is a pair: (powerblock * fooStructure). 
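
A small sketch of that two-phase shape (Powerblock, Behavior, and the Obj type below are placeholders of mine; in the real design the interpreters would themselves be blocks living in the environment):

    -- Environment objects, including tagged structures like ("@foo" * fooStructure).
    data Obj = Atom String | Pair Obj Obj | Tagged String Obj

    -- Phase one: pure state -> state rewrites; no authority is involved.
    type PureStep = Obj -> Obj

    -- Phase two: interpretation. The powerblock is the sole source of authority;
    -- each interpreter receives a (powerblock * structure) pair.
    data Powerblock = Powerblock            -- stand-in for granted capabilities
    type Behavior   = IO ()                 -- stand-in for a live RDP behavior
    type Interp     = (Powerblock, Obj) -> Behavior

    -- The user-model keeps, at a standard location, an association list
    -- from tags like "@foo" to their interpreters.
    type Meanings = [(String, Interp)]

    -- Give meaning to every tagged object found in the environment.
    interpret :: Meanings -> Powerblock -> Obj -> [Behavior]
    interpret ms pb (Tagged tag body) =
      case lookup tag ms of
        Just f  -> [f (pb, body)]
        Nothing -> interpret ms pb body     -- no interpreter: look inside
    interpret ms pb (Pair a b) = interpret ms pb a ++ interpret ms pb b
    interpret _  _  (Atom _)   = []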

(I'm not entirely satisfied here. In particular, I'd like to have more precise understanding of source-stable uniqueness for the powerblock.)

Potentially, this entire process might be staged, e.g. if the *output* of interpreting foo contains an ("@baz" * bazStructure) pair. 

In this design, text-based programming can still be supported, but certainly isn't necessary.




SERIALIZATION: ONE CODE TO BIND THEM

I propose that all values be shared by a pure, tacit concatenative bytecode. There are several reasons for this.

(1) a uniform serialization model will avoid a lot of redundant parser code and discontinuity spikes. And in practice, a tacit concatenative bytecode is likely to operate more efficiently than most parsers: it reduces to a simple series of table lookups (or even a switch expression) and a small state machine to deal with text and blocks.

(2) in a reactive model like RDP, we often have large structural values (like an array or scene-graph) where only a few values change. Rather than sending the whole structure to communicate a small change, this is easily modeled in terms of streaming more bytecode to operate on the original value. 

(3) we can gain a lot of efficiency by a very simple trick: instead of just a value, we can operate on a `(value*context)` pair. The context is a 'communication context' that can hold a small library of functions, some memoized computations, and so on. Functions in the context can be compiled by the recipient. 

(4) code can contain useful assertions, self-validation. 

(5) a high level of semantic compression can be achieved without any additional designs or layers. Though, if semantic compression isn't used, then regular streaming compression should work pretty well.
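
A minimal sketch of points (2) and (3) above - the Code, Context, and helper names here are illustrative, not part of ABC:

    -- (2) A large value is kept current by streaming more code against it,
    --     rather than resending the whole structure.
    type Code a   = a -> a            -- one chunk of (pure) bytecode, as a function
    type Stream a = [Code a]          -- an unbounded stream of such chunks

    applyStream :: a -> Stream a -> a
    applyStream = foldl (\v f -> f v)

    -- (3) Operate on a (value * context) pair; the context accumulates a small
    --     library of functions the recipient may compile or memoize.
    type Context a = [(String, Code a)]

    define :: String -> Code a -> (a, Context a) -> (a, Context a)
    define name f (v, ctx) = (v, (name, f) : ctx)

    invoke :: String -> (a, Context a) -> (a, Context a)
    invoke name (v, ctx) =
      case lookup name ctx of
        Just f  -> (f v, ctx)
        Nothing -> (v, ctx)               -- unknown name: leave the value as-is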

I am developing Awelon Bytecode (ABC) for this purpose.

ABC is a typed, tacit, concatenative bytecode for an idealized RDP system. ABC has very little structure; it is a UTF-8 stream with very few parse modes:

       slsls - (START) normal bytecode mode
       {text goes here}  - text mode
       \}, \\, etc. - escapes in text require a mode
       [slsls] - block mode, forming a function

In this case `slsls` means `swap assocl swap assocl swap` - a fairly common operation that I usually give the name `assocr`. 

ABC has a minimal set of primitives and very few types. It's up to a decent compiler or interpreter to simplify data-plumbing like slsls. The goal with ABC is not a most efficient direct-interpretation. I am more interested in keeping things minimal, easy to prove, easy to generate, and easy to optimize. 

ABC has only one syntax-layer value type, which is text. Numbers (rationals) are specified in ABC by first using text then translating it to a number. This isn't the ideal representation for efficiency, but I feel that legibility and simplicity have greater value. 

       {text} :: x -> (text*x)
       # :: (text*x) -> (num*x)

       {Hello, World!}
       {42}#

Structured values can be formed by constructing elements and organizing them in a streaming fashion. There are some simple strategies to achieve this. 

       (42,108) => {108}#se{42}#

Text can also be used as a comment. I can think of a few reasons this might be done - e.g. to provide optimizer suggestions, record profiling information, or potential hints for a theorem prover. 

       % :: (text*x) -> x
       {this is a comment}%

ABC is designed for capability-based languages. I.e. there is no ambient authority (except for 'error'). Developers can't even create 'unique' objects (or local state) without a capability. 

       $ :: (text*x) -> (cap*x)

The interpretation of the text within a serialized capability is entirely up to the provider of the capability. It could be encrypted code. It could be HMAC authenticated code. It could be a random GUID to a stored value. And so on. 

In RDP systems, all capabilities are implicitly revocable: to 'grant' a capability is a continuous action, so to revoke you simply stop granting. No state is required, and this can be implemented by a variety of strategies.

ABC doesn't track any pure/impure type. However, RDP has a concept of location, called 'partition type', which can be used to isolate some subprograms.

ABC can have spaces, tabs, newlines, and carriage returns. Those all have the same meaning: identity function.
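
Putting the above together, a toy reader for this fragment is little more than a table lookup plus a small state machine for text and blocks. The value representation, integer-only numbers, naive escape handling, and ignore-unknown-bytes behavior below are simplifications of mine, not the ABC definition:

    -- Toy values: numbers, text, pairs, and unevaluated blocks.
    data V = N Integer | T String | P V V | Block String
      deriving (Eq, Show)

    -- Single-character operators: a simple table lookup.
    step :: Char -> V -> V
    step c v | c `elem` " \t\r\n" = v              -- whitespace is identity
    step 's' (P a b)              = P b a          -- swap
    step 'l' (P a (P b c))        = P (P a b) c    -- assocl
    step '#' (P (T t) x)          = P (N (read t)) x  -- text to number (integers only here)
    step '%' (P (T _) x)          = x              -- drop a comment
    step _   v                    = v              -- anything else: ignored in this sketch

    -- The state machine: normal mode, text mode, block mode.
    run :: String -> V -> V
    run []       v = v
    run ('{':cs) v = let (txt, rest)  = readText cs  in run rest (P (T txt) v)
    run ('[':cs) v = let (body, rest) = readBlock cs in run rest (P (Block body) v)
    run (c:cs)   v = run cs (step c v)

    -- Text mode: read until the closing '}', with naive '\' escapes.
    readText :: String -> (String, String)
    readText ('\\':c:cs) = let (t, r) = readText cs in (c:t, r)
    readText ('}':cs)    = ("", cs)
    readText (c:cs)      = let (t, r) = readText cs in (c:t, r)
    readText []          = ("", "")

    -- Block mode: read until the matching ']' (nesting omitted here).
    readBlock :: String -> (String, String)
    readBlock (']':cs) = ("", cs)
    readBlock (c:cs)   = let (b, r) = readBlock cs in (c:b, r)
    readBlock []       = ("", "")

    -- e.g. run "{42}# {a comment}%" (T "") == P (N 42) (T "")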


WHAT FAILED BEFORE?

Similar efforts have come and gone, often with some small success that could not be scaled. My hypothesis is that the following have been points of failure: 

1. Did not model user/programmer. User sits awkwardly above model, no semantic-layer ability to manipulate it or program-by-example. Wall of syntax.

2. Second-class extensions. Brushes, tools, lenses, views are not first-class objects that can be carried and composed. Boiler-plate namespace management to reuse tools from one task to another.

3. Did not effectively address value sharing, independent maintenance, security properties. Programmers forced to "ship the IDE" to share behavior.

I believe that all three points must be addressed simultaneously to have any hope for success. If we address 1,2 we have isolated users - a powerful environment but no leverage. If we address 3, we have more effective programmers, but it's all arcane knowledge and hidden APIs.

In my design, points 1,2 are addressed by the tacit concatenative model of programmer manipulating environment. Point 3 is handled by RDP and capability security.

Matt McLelland

Sep 20, 2013, 1:46:48 PM
to reactiv...@googlegroups.com
On Fri, Sep 20, 2013 at 12:35 AM, David Barbour <dmba...@gmail.com> wrote:
Over the last month, I feel like I stumbled into something very simple and profound: a new perspective on an old idea, with consequences deeper and more pervasive than I had imagined.

The language you're using here tells me you're excited about this, and so I would really like to understand this, but I don't think I get it yet.

 
The idea is simply this: every user action is an act of meta-programming.

More precisely:
(1) Each user event appends to a tacit concatenative program.
(2) The output of the tacit concatenative program is another program.
(3) We can understand the former as rewriting parts of the latter.
(4) These rewrites include the user-model - navigation, clipboard, etc.

I will further explain this idea, why it is powerful, how it is different.

To clarify, this isn't another hand-wavy 'shalt' and 'must' proposal with no idea of how to achieve it. Hammering at a huge list of requirements for eight years got me to RDP. At this point, I have concrete ideas on how to accomplish everything I'm about to describe.

Users Are Programmers.

The TUNES vision is revived, and better than ever.


Do you have a link that would tell me what TUNES is?

 
WHY TACIT CONCATENATIVE?

Concatenative programming is perhaps best known through FORTH. Most concatenative languages have followed in Charles Moore's forthsteps, sticking with the basic stack concept but focusing on higher-order programming, types, and other features.
 
A stack would be an extremely impoverished and cramped environment for a user; even many programmers would not tolerate it. Fortunately, we can move beyond the stack environment. And I insist that we do! Concatenative programming can also be based upon such structures as trees, Huet zippers, and graphs. This proposal is based primarily on tree-structured data and zippers, with just a little indirect graph modeling through shared state or explicit labels (details later).

A 'tacit' programming language is one that does not mention names for parameters or local variables. Many concatenative programming languages are also tacit, though the concepts don't fully intersect. 

A weakness of tacit concatenative programming is that, in a traditional text-based programming environment, users must visualize the environment (stack or other structure) in their head, and that they must memorize a bunch of arcane 'stack shuffling' words. By comparison, variable names in text are easy to visualize and review.

My answer: change programming environments!

Powerful advantages of tacit concatenative programming include:
1. the environment has a precisely defined, visualizable value
2. short strings of tacit concatenative code are easy to generate
3. concatenative code is sequential, forming an implicit timeline 
4. code also subject to learning, pattern recognition, and rewrites
5. every step, small and large, is precisely defined and typed

Instead of an impoverished, text-based programming environment, we should offer continuous automatic visualization. Rather than asking users to memorize arcane words, we should offer direct manipulation: e.g. take, put, slide, toss, drag, drop, copy, paste. Appropriate tacit concatenative code is generated at every step, for every gesture. This code is easy to generate because generators can focus on short vectors of learned 'known useful' words without syntactic noise; this is subject to a variety of well-known solutions (logical searches, genetic programming, hill-climbing). 

I have other comments that I could make about some of the other stuff you've written, but I think there's a good chance we'll be talking past each other if I don't really grok this first.

What is it about continuous automatic visualization that you think requires tacit or concatenative programming?  

My main question is:  what's the problem with text?

From my point of view an essential aspect of programming is that we are building up an artifact -- a "program"  -- that can be reasoned about independently of its edit history / method of construction.   Furthermore, I think it's a mistake to couple the ways of constructing / editing that artifact to its semantics as a program.  On the other hand, we don't want to couple these things to the IDE either, as is commonly done.  I wonder if you agree with any or all of those statements.

I never finished up with the LtU thread, but there you made the claim that there isn't a simple way to translate from tacit concatenative programming to named applicative. But isn't that more to do with the difficulty of idiomatically translating from concatenative to applicative than with the difficulty of translating between tacit and named?

Concrete examples you can give would probably be more helpful than anything else.  Is there an example problem we could work through a couple of different ways?

David Barbour

Sep 20, 2013, 3:15:40 PM
to reactiv...@googlegroups.com


On Sep 20, 2013 10:46 AM, "Matt McLelland" <mclella...@gmail.com> wrote:
>>
>> The TUNES vision is revived, and better than ever.
>
>
> Do you have a link that would tell me what TUNES is?

Try tunes.org

>
> What is it about continuous automatic visualization that you think requires tacit or concatenative programming?  

Not "requires". Just orders of magnitude better at it. Some reasons:

1. well defined environment structure at every step
2. well defined, small step movement operators
3. strong, local distinction between move and copy.
4. linear, incremental timeline built right in
5. much weaker coupling to underlying text

>
> My main question is:  what's the problem with text?

Manipulating diagrams, graphs, geometries, images via text is analogous to writing text through a line editor. It's usable for a nerd like me, but is still harder than it could be.

My goal is to lower the barrier for programming so that normal people do it as part of their every day lives.

>
> From my point of view an essential aspect of programming is that we are building up an artifact -- a "program"  -- that can be reasoned about independently of its edit history / method of construction.  

I agree!

That gets back to the "meta" in "user actions are an act of metaprogramming."

Yet there are many advantages to modeling the method of construction, and reasoning about it, too. It enables programming by example, formal macros, staged metaprogramming.

Even better, if execution is fully consistent with method of construction, then the intuitions users gain during normal use will effectively inform them for higher order programming.

>
> Furthermore, I think it's a mistake to couple the ways of constructing / editing that artifact to its semantics as a program.

I halfway agree with this.

The programmatic 'meaning' of a graph, geometry, diagram, text, or other artifact should be controlled by the user of that artifact. Yet, it is ideal that the ways of manipulating the artifact also move it from one consistent meaning to another.

In order to achieve both properties, it is necessary that the tools and macros for operating on a structure also be programmable by the user.

>
> you made the claim that there isn't a simple way to translate from tacit concatenative programming to named applicative.

Other way around!

Tacit concatenative to applicative is trivial (albeit, not idiomatic).

Matt McLelland

Sep 20, 2013, 5:57:24 PM
to reactiv...@googlegroups.com
> Manipulating diagrams, graphs, geometries, images via text is analogous to writing text through a line editor. It's usable for a nerd like me, but is still harder than it could be.

OK, I'm not finding much to disagree with yet. I would say that in my experience text is a much better form for constructing most programs than those other forms, so I would expect text to be the 95% case. I'm including in that more structured forms of text like tables.

What I'm still not understanding is how viewing the editing of an image or graph as a tacit concatenative program is a big win.   I again plead for a concrete example.


> My goal is to lower the barrier for programming so that normal people do it as part of their every day lives.

If you mean that you intend for many application domains to be integrated into the IDE in such a way that the activities of those domains can be viewed as programming lite and seamlessly interoperate with a full and powerful programming language, then we still haven't found anything to disagree on. But I'm skeptical that non-programmers will ever do serious programming.

>I halfway agree with this.  In order to achieve both properties, it is necessary that the tools and macros for operating on a structure also be programmable by the user.

No, I think we're in full agreement here.  Manipulations should be programmable, too, but separately from the core semantics of a construct.

>> you made the claim that there isn't a simple way to translate from tacit concatenative programming to named applicative.

> Other way around!

Oops, you're right.  But my question remains:  isn't the hard difference more the concatenative vs. applicative than named vs. tacit?







David Barbour

Sep 20, 2013, 9:28:17 PM
to reactiv...@googlegroups.com, augmented-...@googlegroups.com, Fundamentals of New Computing
On Fri, Sep 20, 2013 at 2:57 PM, Matt McLelland <mclella...@gmail.com> wrote:

I would say that in my experience text is a much better construct form for most programs than those other forms, so I would expect text to be the 95% case.   I'm including in that more structured forms of text like tables. 

If you look at PL, text has been more effective. Graphical programming is historically very first-order and ineffective at addressing a variety of problems. 

If you look at UI, text input for control has been much less effective, and even text output is often augmented by icons or images. The common case is buttons, sliders, pointing, and so on. Some applications also use concepts of tooled pointers like brushes, or tooled views like layering. 

I've tried to explain this before: I see UI as a form of PL, and vice versa. Thus, to me, the 95% case is certainly not text. Rather, most users today are using a really bad PL (<- the 95% case), and most programmers today are using a really unnatural UI (<- and thus need to be really serious about it), and this gap is not essential.
 

What I'm still not understanding is how viewing the editing of an image or graph as a tacit concatenative program is a big win.  

Have you ever used a professional image editing tool? 

If you haven't, the process actually does involve quite a bit of automation. The artist implicitly constructs a small pipeline of layers and filters based on the actions they perform. This pipeline can often then be separated from the current image and applied to another. Essentially, you have implicit macros and a limited form of programming-by-example. 

But, with the way applications are designed today, this pipeline is trapped within the image editing application. You cannot, for example, casually apply a pipeline of filters to a view of a website. Conversely, you cannot casually incorporate an image search into layers or elements in an image. Either of these efforts would require a lot of file manipulation by hand, but that effort would not be reusable. 

If you are an artist with a hobby of hacking code, you could:
* don your programmer hat
* fire up your favorite IDE and textual PL
* go through the steps of starting a new project
* review the APIs for HTTP loading
* review the APIs for Filesystem operations
* review the APIs for your image-editing app
*    oh bleep! doesn't have one!
* review the APIs for image processing
* export your pipeline 
* begin implementing a parser/interpreter for them
*    who the bleep designed this pipeline language?!
* abort your effort to implement a general interpreter/parser
* re-implement your pipeline in your language
*    ugh... who designed these validation tools? 
*    'Image'? type system ain't worth a bleep here... 
*    unit tests on images? how does that even work?
*    it's a wall of text! I can't see what's going on!!
* review your API for image display
* integrate image display into your tests
* edit test edit test edit test edit test
* woohoo! It works!
*    but the test-case is hard-coded in :(
* contemplate building a configuration language
*    bleep that! I'm a hacker.
* just edit and hard-code in the next use case...
*    and the next, and the next

(years of bitrot later)
* why don't these APIs work anymore?! 
*    (I really don't want to review them again)
* where are those configuration variables scattered?

And that's only if you happen to be in the small intersection of artists who have a hobby of hacking. If you're a 'serious' programmer you might have gone the extra steps to import the pipeline and build a configuration file. But the overall experience wouldn't be that much different. 

If you're anyone else, you might contemplate hiring a programmer. But you think: that's expensive, and I don't off hand know any programmer with skills in image-processing who is also looking for work, and I don't want to pay a programmer to self-educate. So you end up addressing the specific problem by hand. Or you just abandon the effort. 

That experience shouldn't need to happen.

But because UIs are bad PLs today, it is very difficult to integrate the capabilities of different services, toolkits, and APIs. Conversely, because PLs are bad UIs today, we build these thick layers we call 'applications' between users and the underlying capabilities. Of course, most programmers don't think about UI as PL. And even if they do, most are bad PL designers. And even those who aren't, don't have access to or examples of UI toolkits designed to support UI as an effective PL. It's a vicious cycle that has only been broken by a few small niche communities (like REBOL). 



I'm skeptical that non-programmers will ever do serious programming.

I don't expect non-programmers to do "serious programming". I expect to lower barriers so that "serious programming" is very rarely needed, and such that when it is needed it can be handled as a tiny extension to an app, or a small composition, rather than a full new app. The goal: Serious programming is only needed 5% as often. And, when needed, costs only 5% as much.  

Programming should not be a career. 

Programming should be the most basic form of computer literacy - such that people don't even think about it as "programming".

A scientist who knows how to get big-data into one application and process it in another should be able to build a direct pipeline - one that optimizes away the intermediate loading - without ever peeking under the hood, without learning an API. 

A musician who knows how to watch YouTube videos, and who has learned of a cool new machine-learning tool to extract and characterize microsounds, should be able to apply the latter to the sounds from the former without learning about HTTP and video transfer and how to scrape sounds from a video. Further, it's better for both the artist and servers if this processing can automatically be shifted close to the resources and eliminates the irrelevant video rendering. 

Artists, scientists, musicians, anyone should be able to think in terms of capabilities: 

* I can get an X
* I can get a Y with an X
* therefore, I can get a Y

But today, because UIs are bad PLs, they cannot. Instead, we have this modal illogic:

* [app1] I can get X
* [app2] I can get Y with X
* ???
* profit

UI (and programming) is much more difficult today than it should be, or can be. 


isn't the hard difference more the concatenative vs. applicative than named vs. tacit?

(context: challenge of translating traditional PL to tacit concatenative)

No. The primary challenge is due to named vs. tacit, and the dataflows implicitly expressed by use of names. If you have an applicative language that doesn't use names, then there is a much more limited dataflow. It is really, literally, just Applicative.

  class Functor pl where  
    -- language supports pure functions
    fmap :: (a -> b) -> pl a -> pl b

  class Applicative pl where
    -- language supports pointy values
    pure :: a -> pl a

    -- language supports procedural sequencing
    ap :: pl (a -> b) -> pl a -> pl b

  -- (some thought has been given to separating pure and ap).

This much more constrained language is easy to express in a concatenative language. 

* `fmap` is implicit (you can express pure behaviors if you like)
* `pure` is modeled by literals (e.g. `42` puts a number on the stack)
* `ap` is a simple combinator. 
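
A minimal sketch of those three bullets in stack terms (the combinator names and the nested-pair stack are mine, for illustration):

    -- The environment as a stack of nested pairs.
    -- 'pure' is a literal: it just pushes a value.
    lit :: a -> s -> (a, s)
    lit = (,)

    -- 'fmap' is implicit: pure functions act on the top of the stack.
    onTop :: (a -> b) -> (a, s) -> (b, s)
    onTop f (a, s) = (f a, s)

    -- 'ap' is a simple combinator: apply the function beneath the top argument.
    apC :: (a, (a -> b, s)) -> (b, s)
    apC (a, (f, s)) = (f a, s)

    -- e.g. apC (lit 2 (lit (+1) ())) == (3, ())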

But introducing names moves expressiveness from `Applicative` to `Monad`, which supports ad-hoc deep bindings:

      foo >>= \ x -> bar (\y -> x+y)

Concatenative languages are often somewhere in between Applicative and Monad, since they require explicitly modeling the data-plumbing and hence can control the expressiveness of bindings. 

Regards,

Dave

Matt McLelland

Sep 21, 2013, 9:01:34 AM
to reactiv...@googlegroups.com
I agree with much of what you're saying, but let's look closer at the image editor example.  The artifact being constructed in a naive image editor is an image and it's not particularly fruitful to view that image as a program.   As you're saying, you can view the operations of building an image as defining a programming language and more advanced image editors show you a pipeline of such operations and let you edit it in ways that are starting to feel like programming.  But when you look at things that way, the image is not the program -- the pipeline is the program.  And the best way to look at and understand that pipeline is probably structured text.  Think of how the pipeline is usually presented: a stack of text descriptions of each operation (probably combined with a thumbnail of the image after that stage).

In general, I think this is the case.  We can consider an arbitrary UI to be building a program, but you aren't really programming unless you're interested in the whole program at once.  And the best way I know of to visualize a whole program at once is structured text (with annotated visualizations and summaries, sure).   And so, while this is really just terminology we're hammering out, I would prefer to say that 95% of UI users aren't programming.  Programming is when you care about the whole program.

So now consider the special case where the UI we're interacting with is an IDE and the primary artifact we're building is a program.  I see no special relationship between that primary program and the other program we're implicitly building when we look at the IDE's UI as a programming language.   We do want certain language constructs to be able to provide functionality that the IDE can use, but I don't see that as much different other UIs with context sensitive menus.


No. The primary challenge is due to named vs. tacit, and the dataflows implicitly expressed by use of names. If you have an applicative language that doesn't use names, then there is a much more limited dataflow. It is really, literally, just Applicative.

This isn't the setup that captures what I was arguing.  My point is that if you start with a named compositional (values can only be composed) language, it looks like it would be easy to convert to tacit concatenative.  It's easy to replace uses of names with nonsense words that achieve the same data plumbing. 
 

David Barbour

Sep 21, 2013, 1:55:09 PM
to reactiv...@googlegroups.com, augmented-...@googlegroups.com, Fundamentals of New Computing
On Sat, Sep 21, 2013 at 6:01 AM, Matt McLelland <mclella...@gmail.com> wrote:
The artifact being constructed in a naive image editor is an image and it's not particularly fruitful to view that image as a program.

It's very fruitful! 

An image could be interpreted as a high level world-map to support procedural generation with colors indicating terrain types and heights. Or the image could be interpreted as a template for a cellular automaton. And, of course, the image could be a simple icon or texture. Granted, I can't think of many uses for *raster* images. But if we extend this to the broader concept of image artifacts - diagrams, graphs, and so on there are many effective ways to interpret them as programs.

There's a whole bunch of low-hanging fruit here. Don't let it go to waste.

 
As you're saying, you can view the operations of building an image as defining a programming language and more advanced image editors show you a pipeline of such operations and let you edit it in ways that are starting to feel like programming.  But when you look at things that way, the image is not the program -- the pipeline is the program.

The pipeline is ALSO a program, and a much more implicit one. Every user action is an act of meta-programming. Every artifact has behind it a history of actions associated with its construction. If relevant parts of this history can be extracted, perhaps tuned a little for reuse, then developers can build many useful tools without ever explicitly thinking about it as programming. 

 
And the best way to look at and understand that pipeline is probably structured text. Think of how the pipeline is usually presented: a stack of text descriptions of each operation (probably combined with a thumbnail of the image after that stage).

I think structured text - in particular, a tacit concatenative program - is a great way to *formally* understand the pipeline. Having this structure is essential. It enables formal approaches to extracting and delimiting the history associated with an artifact. Irrelevant actions (anything not contributing to the artifact's value or type) can be eliminated. 

I would want the formal descriptions to also be there, kind of omnipresent in the background (like formula in a spreadsheet), if only so that regular users learn a little by osmosis and curious fiddling. As I said, programming should be basic computer literacy.

But I think presenting the pipeline would often be done in other terms of informal text (e.g. output of an "explain" method), in addition to the icons. (cf. Bret Victor's Drawing Dynamic Visualizations - http://vimeo.com/66085662).
 

We can consider an arbitrary UI to be building a program, but you aren't really programming unless you're interested in the whole program at once.

Users should be enabled to:
* easily think in terms of capabilities not apps
* extract and tune action into reusable tools
* organize, maintain, and share sets of tools
* easily compose tools when they wish
* understand artifacts as long-running behavior
* manipulate artifacts as acts of live programming
* treat artifact manipulation as UI widgets

There is no "the whole program", just myriad composable, sharable subprograms. 

What you today call "real programming" should be looked at in hindsight as caveman programming - i.e. those dark ages when programmers huddled into dank caverns and cubicles with a keyboard and monitor and those ancient towers rather than the ubiquitous, augmented reality. Much like we regard assembly language programming today. Related: http://xkcd.com/378/

The environment should help users a great deal:
* pattern recognition, programming-by-example
* automatic example-input generation for testing and tuning
* automatic search, genetic programming
* Markov models to unobtrusively predict user action

Visualization is not the only area where tacit concatenative helps a great deal. 


 
while this is really just terminology we're hammering out, I would prefer to say that 95% of UI users aren't programming.  Programming is when you care about the whole program.

To understand users as programmers - the 95% case - isn't "just terminology". It's a new way of thinking. A new perspective can be extremely valuable. As a concrete example, a recent physics article describes quantum interactions as a higher-dimensional timeless geometry. By doing so, it computes on a napkin interactions that took hundreds of pages with Feynman diagrams. To perceive users as programmers seems, to me, just as valuable for simplifying PL as that quantum jewel is for physics. I ask that you reconsider your current views, and whether what you call 'real programming' has real intrinsic value. 


So now consider the special case where the UI we're interacting with is an IDE

Why should this be a special case? The user environment I described is always an IDE. With a much broader 'I'.

 
I see no special relationship between that primary program and the other program
we're implicitly building when we look at the IDE's UI as a programming language. 

Some useful relationships:

(1) The value models are the same in both cases. 
(2) Subprograms extracted from the latter are first-class values in the former.
(3) Subprograms developed in the former can be used as tools in the latter. 
(4) The set of user capabilities is exactly equal to the set of program capabilities.
(5) Intuitions developed based on usage are directly effective for automation.

These shouldn't be "special" relationships.
 

My point is that if you start with a named compositional (values can only be composed) language, it looks like it would be easy to convert to tacit concatenative.  It's easy to replace uses of names with nonsense words that achieve the same data plumbing.  

That's only true if you *assume* every dataflow you can express with the names can also be expressed with concatenative 'nonsense' words. 

My point is that concatenative allows for degrees of precise, typeful control over dataflow that names do not. I can create tacit concatenative models for which your named applicative (aka monadic) is generally too expressive for translation, yet which are significantly more expressive than pure applicative. Further, it's quite useful to do so for correctness-by-construction involving substructural and modal types (affine, relevant, linear, regional, staging) which are in turn useful for security, resource control, and modeling heterogeneous and distributed systems. 

Concatenative with first-class names has the exact same problems. That's why I argued even against John Purdy's use of local names. Granted, his language doesn't have the greater expressiveness of substructural and modal types. But now he couldn't add them even if he wanted to. My own modeling of names using explicit association lists only avoids this issue due to its second-class nature, a formal indirection between reference and referent (such that capturing a reference does not imply capturing the referent). 

Anyhow, the issue here is certainly named vs. tacit, not applicative vs. concatenative. 

Matt McLelland

Sep 21, 2013, 3:29:31 PM
to reactiv...@googlegroups.com
> An image could be interpreted as a high level world-map to support procedural generation with colors indicating terrain types and heights.

This is common practice in games, but it doesn't IMO make artists into programmers and it doesn't make the image into a program.  

I think there is a useful distinction between user and programmer that should be maintained. 


> Why should this [programming in an IDE] be a special case?

The special case is editing an honest program.   How can you view playing a game of Quake as programming?  Even if you're still within the same OS/IDE, what's to be gained by calling that programming?  Who would care to look at the "program"?

The idea of building powerful tools for artists, scientists, and other users that blur the lines between some domain and programming is a great idea. As you've noted, applications like Photoshop, 3DS Max, AutoCAD, Excel, etc. already do some of these kinds of things, but there's certainly plenty of room for doing it more systematically, with great discoverability and interoperability.


Some useful relationships:

(1) The value models are the same in both cases. 
(2) Subprograms extracted from the latter are first-class values in the former.
(3) Subprograms developed in the former can be used as tools in the latter. 
(4) The set of user capabilities is exactly equal to the set of program capabilities.
(5) Intuitions developed based on usage are directly effective for automation.

These shouldn't be "special" relationships.

I agree with 1-3.  I'm not sure what you mean by 4.  If you only mean that we should be able in an IDE to delegate (explicitly) all of our capabilities to a program we're building, then sure.   I read 5 as saying that UIs should be shallow layers over capabilities, which seems like it should be generally a good idea.

I find myself agreeing with most of your intermediate reasoning and then failing to understand the jump to the conclusion of tacit concatenative programming and the appeal of viewing user interfaces as programs. 


> I can create tacit concatenative models for which your named applicative (aka monadic) is generally too expressive for translation

I assume you mean by that that I will occasionally have to give you an error message "usage of name is illegal in this context", right?   For example, violates substructural types.  I still count that as an easy translation unless it's hard to explain the error, but I don't think it usually would be.

I probably won't respond again over the weekend.  Again, most of your ideas sound pretty good to me, but I think there are a couple of sticking points that I'm still not on board with.  I'm certainly open to the possibility that I just haven't gotten it yet, and either way I wish you the best of luck in getting your system going.

Best,
Matt



David Barbour

Sep 21, 2013, 6:34:19 PM
to reactiv...@googlegroups.com, augmented-...@googlegroups.com, Fundamentals of New Computing
On Sat, Sep 21, 2013 at 12:29 PM, Matt McLelland <mclella...@gmail.com> wrote:
> An image could be interpreted as a high level world-map to support procedural generation with colors indicating terrain types and heights.

This is common practice in games, but it doesn't IMO make artists into programmers and it doesn't make the image into a program.  

Not by itself, I agree. Just like one hair on the chin doesn't make a beard, or one telephone doesn't make a social network. 

But scale it up! One artist will eventually have dozens or hundreds of data-objects representing different activities and interacting. In a carefully designed environment, the relationships between these objects also become accessible for observation, influence, and extension. 

The only practical difference between what you're calling an 'artist' vs. 'programmer' is scale. And, really, it's your vision of an artist's role that's failing to scale, not the artist's vision.  Artists are certainly prepared to act as programmers if it means freedom to do their work (cf. Unreal Kismet, or vvvv, for example). But they have this important requirement that is not well addressed by most languages today: immediate feedback, concreteness.

A team of artists can easily build systems with tens of thousands of interactions, at which point they'll face all the problems a team of programmers do. It is essential that they have better tools to modularize, visualize, understand, and address these problems than do programmers today. 
 

I think there is a useful distinction between user and programmer that should be maintained. 

I think there should be a fuzzy continuum, no clear distinction. Sometimes artists are more involved with concrete direct manipulations, sometimes more involved with reuse or tooling, with smooth transitions between one role and the other. No great gaps or barriers.

Do you have any convincing arguments for maintaining a clear distinction? What precisely is useful about it? 

 
How can you view playing a game of Quake as programming? what's to be gained?

Quake is a game with very simple and immutable mechanics. The act of playing Quake does not alter the Quake world in any interesting ways. Therefore, we would not develop a very interesting artifact-layer program.  There would, however, be an implicit program developed by the act of playing Quake: navigation, aiming, shooting. This implicit program would at least be useful for developing action-scripts and Quake-bots so you can cheat your way to the top. (If you aren't cheating, you aren't trying. :)

If you had a more mutable game world - e.g. Minecraft, Lemmings, Little Big Planet 2, or even Pokemon Yellow - then there is much more to gain by comprehending playing as programming, since you can model interesting systems. The same is true for games involving a lot of micromanagement: tower defense, city simulators, real-time tactics and strategy. You could shift easily from micromanagement to 'programming' higher level strategies. 

Further, I believe there are many, many games we haven't been able to implement effectively: real-time dungeon-mastering for D&D-like games, for example, and the sort of live story-play children tend to perform - changing the rules on-the-fly while swishing and swooping with dolls and dinosaurs. There are whole classes of games we can't easily imagine today because the tools for realizing them are awful and inaccessible to those with the vision.  

To comprehend user interaction as programming opens opportunities even for games.

Of course, if you just want to play, you can do that.
 

I find myself agreeing with most of your intermediate reasoning and then failing to understand the jump to the conclusion of tacit concatenative programming and the appeal of viewing user interfaces as programs. 

Tacit concatenative makes it all work smoothly. 

TC is very effective for:
* automatic visualization and animation
* streaming programs
* pattern detection (simple matching)
* simple rewrite rules
* search-based code generation
* Markov model predictions (user anticipation)
* genetic programming and tuning
* typesafe dataflow for linear or modal

Individually, each of these may look like an incremental improvement that could be achieved without TC. 

You CAN get automatic visualization and animation with names, it's just more difficult (no clear move vs. copy, and values held by names don't have a clear location other than the text). You CAN do pattern recognition and rewriting with names, it's just more difficult (TC can easily use regular expressions). You CAN analyze for linear safety using names, it's just more difficult (need to track names and scopes). You CAN predict actions using names, it's just more difficult (machine-learning, Markov models, etc. are very syntax/structure oriented). You CAN search logically for applicative code or use genetic programming, it's just freakishly more difficult (a lot more invalid or irrelevant syntax to search). You CAN stream applicative code, it's just more difficult (dealing with scopes, namespaces). 

But every little point, every little bit of complexity, adds up, pushing the system beyond viable accessibility and usability thresholds. 

Further, these aren't "little" points, and TC is not just "marginally" more effective. Visualization and animation are extremely important. Predicting and anticipating user actions is highly valuable. Code extraction from history, programming by example, then tuning and optimizing this code from history are essential. Streaming commands is the very foundation. 

Stop cherry-picking your arguments; you've lost sight of the bigger picture, or maybe you haven't glimpsed it yet. Step back. Try to address ALL these points, simultaneously, in one system, while *keeping it simple*. If you can do so with a named applicative model, I'll be impressed and interested. 


I will occasionally have to give you an error message "usage of name is illegal in this context", right?   For example, violates substructural types.  I still count that as an easy translation

Under your proposal, the safety property is no longer compositional, no longer correct-by-construction (i.e. requiring only syntactically local analysis to validate); it now requires a non-local post-hoc analysis (not an easy one, if you do any sort of inference). And while this might not seem important for the concerns you've been tracking so far, I ask you to review how this might affect streaming, local rewrites, and similar.  

You call it an 'easy' translation. I call it a 'lossy' translation. 

Or perhaps a more fitting phrase is: trying to put the toothpaste back in the tube. 
 

most of your ideas sound pretty good to me, but I think there are a couple of sticking points that I'm still not on board with.  I'm certainly open to the possibility that I just haven't gotten it yet, and either way I wish you the best of luck in getting your system going.

Thanks. I imagine most people would be less open, more dismissive, and I appreciate how you've engaged me on this so far.  

Warm Regards,

Dave

David Barbour

unread,
Sep 22, 2013, 1:31:39 PM9/22/13
to Fundamentals of New Computing, augmented-...@googlegroups.com, reactiv...@googlegroups.com
Mark,

You ask some good questions! I've been taking some concrete actions to realize my vision, but I haven't much considered how easily others might get involved. 

As I've written, I think a tacit concatenative (TC) language is the key to making it all work great. A TC language can provide a uniformly safe and simple foundation for understanding and manipulating streaming updates. User actions must be formally translated to TC commands, though I can start at a higher level and work my way down. However, the artifacts constructed and operated upon by this language must be concretely visualizable, composable, and manipulable - e.g. documents, diagrams, graphs, geometries. Homoiconic this is not.

My own plan is to implement a streamable, strongly typed, capability-secure TC bytecode (Awelon Bytecode, ABC) and build up from there, perhaps targeting Unity and/or developing a web-app IDE for visualization. (Unity is a tempting target for me due to my interest in AR and VR environments, and Meta's support for Unity.) 

I would very much favor a lightweight toolkit approach, similar to what the REBOL/Red community has achieved - fitting entire desktops and web services as tiny apps built upon a portable OS/runtime (< 1MB). BTW, if you are a big believer in tools, I strongly recommend you look into what the REBOL community has achieved, and its offshoot Red. These people have already achieved and commercialized a fair portion of the FoNC ideals through their use of dialects. They make emacs look like a bloated, outdated, arcane behemoth.

(If REBOL/Red used capability-based security, pervasive reactivity, live programming, strong types, substructural types, external state, and... well, there are a lot of reasons I don't favor the languages. But what they've accomplished is very impressive!)

I think the toolkit approach is quite feasible. ABC is designed for continuous reactive behaviors, but it turns out that it can be very effectively used for one-off functions and imperative code, depending only on how the capability invocations are interpreted. ABC can also be used for efficient serialization, i.e. as the protocol to maintain values in a reactive model. So it should be feasible to target Unity or build my own visualization/UI toolkit. (ABC will be relatively inefficient until I have a good compiler for it, but getting started should be easy once ABC is fully defined and Agda-sanitized.)

Best,

Dave


On Sep 21, 2013 10:52 PM, "Mark Haniford" <markha...@gmail.com> wrote:
David, 

Great writeup. To get down to more practical terms for lay software engineers such as myself, what can we do in immediate terms to realize your vision?

I'm a big believer in tools (even though I'm installing emacs 24 and live-tool). Is there currently a rich IDE environment core in which we can start exploring visualization tools? 

Here's what I'm getting at. We have rich IDEs (in relative terms) - IntelliJ, ReSharper, VS, Eclipse, whatever - yet I think they are still very archaic in terms of programmer productivity. The problem I see is that we have a dichotomy between scripting environments (Emacs) and "heavy" IDEs; e.g. we can't easily script these IDEs for experimentation.

Thoughts?

 


Matt McLelland

unread,
Sep 23, 2013, 11:48:07 AM9/23/13
to reactiv...@googlegroups.com
On Sat, Sep 21, 2013 at 5:34 PM, David Barbour <dmba...@gmail.com> wrote:
On Sat, Sep 21, 2013 at 12:29 PM, Matt McLelland <mclella...@gmail.com> wrote:
> An image could be interpreted as a high level world-map to support procedural generation with colors indicating terrain types and heights.

This is common practice in games, but it doesn't IMO make artists into programmers and it doesn't make the image into a program.  

Not by itself, I agree. Just like one hair on the chin doesn't make a beard, or one telephone doesn't make a social network. 

But scale it up! One artist will eventually have dozens or hundreds of data-objects representing different activities and interacting. In a carefully designed environment, the relationships between these objects also become accessible for observation, influence, and extension. 

The only practical difference between what you're calling an 'artist' vs. 'programmer' is scale. And, really, it's your vision of an artist's role that's failing to scale, not the artist's vision.  Artists are certainly prepared to act as programmers if it means freedom to do their work (cf. Unreal Kismet, or vvvv, for example). But they have this important requirement that is not well addressed by most languages today: immediate feedback, concreteness.

I don't think the difference is just scale.   I consider Kismet to be programming, even in the small.   If you're worried about cases, flows, pipelines, etc., you're probably programming.  If you're just using a UI to build bitmaps that are supposed to represent heightmaps, I wouldn't call that programming, even at scale.  On the other hand, if you're defining the way that height maps are to be interpreted as a triangulated mesh, then that looks like programming to me.    If you want to say that programming is a spectrum and UI usage is programming lite, that's fine with me, but I certainly don't see that choice of terminology as important.

A team of artists can easily build systems with tens of thousands of interactions, at which point they'll face all the problems a team of programmers do. It is essential that they have better tools to modularize, visualize, understand, and address these problems than do programmers today. 

At which point they look back at their tangled web of Kismet script and wish they'd learned to be comfortable with text :).
 


I think there is a useful distinction between user and programmer that should be maintained. 

I think there should be a fuzzy continuum, no clear distinction. Sometimes artists are more involved with concrete direct manipulations, sometimes more involved with reuse or tooling, with smooth transitions between one role and the other. No great gaps or barriers.

Do you have any convincing arguments for maintaining a clear distinction? What precisely is useful about it? 

No, but I'm not the one claiming that my terminology choice is an important paradigm shift.    I think the important points of your viewpoint don't depend on calling ordinary UI usage programming.   

I find myself agreeing with most of your intermediate reasoning and then failing to understand the jump to the conclusion of tacit concatenative programming and the appeal of viewing user interfaces as programs. 

Tacit concatenative makes it all work smoothly. 

TC is very effective for:
* automatic visualization and animation
* streaming programs
* pattern detection (simple matching)
* simple rewrite rules
* search-based code generation
* Markov model predictions (user anticipation)
* genetic programming and tuning
* typesafe dataflow for linear or modal

Individually, each of these may look like an incremental improvement that could be achieved without TC. 

You CAN get automatic visualization and animation with names, it's just more difficult (no clear move vs. copy, and values held by names don't have a clear location other than the text). You CAN do pattern recognition and rewriting with names, it's just more difficult (TC can easily use regular expressions). You CAN analyze for linear safety using names, it's just more difficult (need to track names and scopes). You CAN predict actions using names, it's just more difficult (machine-learning, Markov models, etc. are very syntax/structure oriented). You CAN search logically for applicative code or use genetic programming, it's just freakishly more difficult (a lot more invalid or irrelevant syntax to search). You CAN stream applicative code, it's just more difficult (dealing with scopes, namespaces). 

I'll go through and say something about these bullets point-by-point, but first let me concede that many of those things are probably easier to implement on top of TC.   One principle I hold is that surface syntax design should be almost exclusively designed for consumption by humans.  A corollary is that there will be alternative syntaxes that are more amenable to automated processing.   So I'm willing to accept that they will be easier for you if you choose a syntax that, to my eye, requires the programmer to meet the machine halfway.  Another corollary is that meta-programming via the same interface to code that humans use is a bad idea.

* automatic visualization and animation -- I have plans for visualization and animation.  Not sure how automatic you'd consider it or how it compares to what you're doing.
* streaming programs -- The fact that UI inputs are best viewed as *streaming* programs is, to me, another good reason not to view UI inputs and programs as isomorphic.  A UI input is a special kind of program.
* simple rewrite rules -- Algebraic manipulation of expressions in the presence of custom constructs (similar to macros but without the things I dislike about macros) has been an interesting source of design problems.  I have no doubt that some of the algebra is simpler with TC, but see my principle above.
* typesafe dataflow for linear or modal -- Yes, this becomes a "non-local" problem in your sense of that term, just like many other type-inference problems.   I'd still call it local, though, because it just requires examining the current context.  Unless you're talking about global linear variables, which I'd call a design error.
* genetic programming and tuning -- I have some vague thoughts on automated tuning, but I'm not approaching it by rewriting at source level. 

The rest of these are things I'm not really interested in doing at all right now:
* search-based code generation
* Markov model predictions (user anticipation)
* pattern detection (simple matching)

I know it's possible to do that kind of thing... e.g. Google, Wolfram alpha.  It's just not my area of expertise.  I will say that if I were going to go down that road, the goal would be making the source code even *less* structured, toward natural language, so that it could be more amenable to non-programmers or programmers not familiar with the APIs they're using.  
 
Stop cherry-picking your arguments; you've lost sight of the bigger picture, or maybe you haven't glimpsed it yet. Step back. Try to address ALL these points, simultaneously, in one system, while *keeping it simple*. If you can do so with a named applicative model, I'll be impressed and interested. 

I guess we'll just have to wait until I can release something to find out whether you think it's impressive and interesting or overly complex :).
 
Best,
Matt


David Barbour

unread,
Sep 23, 2013, 2:46:31 PM9/23/13
to augmented-...@googlegroups.com, Fundamentals of New Computing, reactiv...@googlegroups.com
John, I'll explain a few points that lead me to favor a bytecode:

1) I think distribution in an intermediate language is inevitable. As a concrete example, if we distribute code in JavaScript, there will be people who build CoffeeScript or Elm or Fay or GWT, using JavaScript as a target. More broadly, I favor dialects, DSLs, user-defined syntax and notations for different concepts - i.e. what I described in the first article: the different documents, diagrams, graphs and geometries have different interpreters. In general, I suppose it is feasible to distribute a serialization of Elm/CoffeeScript/DSL/etc. along with the interpreter for it, but I think the overhead might not be acceptable. 

2) I believe code distribution shouldn't be "shallow" like it currently is between client and server. Sometimes we distribute code that distributes code like a Matryoshka doll. Sometimes code is the output of collaboration between services - e.g. a service that models the binding of multiple other services together, or that interprets the result of querying a service. In many cases, code will be specialized just for the recipient's current queries. Optimization and typechecking are pervasive in the envisioned system. In this context, it's unclear with whom we should collaborate.

3) Often, sources are hidden for security reasons. And I'm not just talking about IP protection, but rather that sources may contain capabilities regarding which we wish to control distribution. This is one of the motivations for source code being the result of collaboration, or specialized to a recipient.

4) I am interested in Augmented Reality and (to a lesser degree) VR programming. In such programming environments, the amount of metadata tends to greatly outweigh the semantic content. We have visual fingerprints, meshes, orientations. We have regions, rotation, navigation in both large and small. In many cases, AR or VR itself might serve as a partially shared space (formally, we might set our environment to observe constructs exported by friends, businesses, public spaces). However, the sheer amount of metadata involved suggests that distribution of source shouldn't be the only basis for distribution of its behavior. The ability to 'compile' a massive workspace into a small, reusable tool is quite valuable.

5) I am interested in programming-by-example, extracting safe, reusable code from user actions and toolsets. However, I think users creating this code won't often have names even for their own actions, and automatically creating names seems like nonsense. It seems wiser that code is 'documented' by use of examples and animations, and named post-hoc for patterns that come up again and again. 

6) The vast majority of people, even programmers, will never look under-the-hood at the source code for the applications they use. Looking at a massive hairball of code that forms an application is perhaps not the best basis for collaboration. As Sean McDirmid observes, use of a shared space seems a better basis for collaborative efforts.

...

But the idea of having a high-level distribution language is still tempting.

The 'Awelon' language is a simple expansion of the Awelon Bytecode. The main difference is: Awelon has a simple module system, and Awelon has defined words. A module for simple block-free data plumbing in Awelon's standard environment might look like:

    import abc
    assocr = swap assocl swap assocl swap  % ((a*b)*c) -> (a*(b*c))
    rot2 = intro1 rot3 intro1 rot3 elim1 elim1  % (a*(b*c))->(b*(a*c))
    zip2 = assocr rot3 rot2 assocl  %  ((a*b)*(c*d)) -> ((a*c)*(b*d))
    roll2  = swap rot3 rot2 swap  % rot2 on stack
    take = zip2 rot2 % from stack to hand
    put = rot2 zip2  % from hand to stack
    jugl2 = rot2 roll2 rot2  % rot2 in hand
    
One line of comma-separated imports is allowed. Any word not prefixed with `_` is exported. Any word prefixed with `test` or `_test` is executed as an application in a confined testing environment. In the context of a wiki-like environment, I support export of `this = ` so that modules themselves can be treated as software components with a value. 

Awelon's standard environment is essentially a structure designed for capability-secure text-based programming:

    (stack * (hand * (powerblock * (stackName * namedStacks))))

All authority, even access to exclusive state,  flows from the powerblock. Controlling which powerblock reaches a subprogram provides a relatively simple mechanism to extend, control, and audit the 'deep' behavior of (potentially distrusted) subprograms. However, since the powerblock has a standard location in the environment, programmers get all the syntactic conveniences associated with ambient authority. 

The named stacks are actually modeled as an association list, so lookup is with text. But one can goto a stack by name, swapping it with the current one (keeping everything in the current hand). Or one may store/load from a named stack (which is basically a goto take goback put). Named stacks thus operate like a mix of locations, keyword inventory, and potential basis for environment extensions. With an assumption that the content on a stack is often a document-like structure (document, diagram, graph, geometry, etc.), I can also build a library for zipper navigation of this document. In this sense, each stack becomes a 'workspace' for artifact manipulation. 
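As a rough illustration of the data shapes described above, here is a small Python sketch of my own (not ABC or Awelon itself; the pairing order, operations, and names are assumptions for illustration only):

    # Sketch of the standard environment as nested pairs:
    #   (stack, (hand, (powerblock, (stackName, namedStacks))))
    # Stacks are Python lists (top at the end); namedStacks is an association list.

    def make_env():
        return ([], ([], ("powerblock", ("main", []))))

    def take(env):
        """Move the top item of the current stack into the hand."""
        stack, (hand, rest) = env
        return (stack[:-1], (hand + [stack[-1]], rest))

    def put(env):
        """Move the top item of the hand back onto the current stack."""
        stack, (hand, rest) = env
        return (stack + [hand[-1]], (hand[:-1], rest))

    def goto(env, name):
        """Swap the current stack with the named stack, keeping the hand."""
        stack, (hand, (power, (cur_name, named))) = env
        others = [(n, s) for (n, s) in named if n != name]
        target = next((s for (n, s) in named if n == name), [])
        return (target, (hand, (power, (name, others + [(cur_name, stack)]))))

    # Storing to another workspace is roughly the take/goto/put idiom above.
    env = make_env()
    env = (env[0] + ["draft.txt"], env[1])   # push an artifact on the main stack
    env = take(env)                          # pick it up
    env = goto(env, "documents")             # walk to another workspace
    env = put(env)                           # drop it there
    print(env[0])                            # ['draft.txt']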

Awelon is still designed under the paradigm of text programming in a larger IDE, albeit potentially augmented with greater visualization of the environment. But alternative environments could be developed, e.g., for augmented reality. Software components built for one environment would generally be compatible with others. 

To get this back on subject: Why not just distribute Awelon a lot like JavaScript?

It seems the higher level distribution language doesn't really gain me much. It just becomes relatively complex, expensive bytecode. Every intermediate processor would need explicit code to deal with names. Rewrites would result in code that doesn't have a clear name. Pattern matching and machine-learning would need a bunch of extra logic to eliminate boundaries between human names and build their own models of meaningful words.

I've spent a lot of time pondering that question. Making Awelon the new JavaScript appeals to me. But ultimately it seems a streaming bytecode has better characteristics for my goals. 

In the programming environment I describe above, Awelon would be fully subsumed, just one more artifact to be interpreted. Maybe that's how it should be.


On Mon, Sep 23, 2013 at 1:54 AM, John Nilsson <jo...@milsson.nu> wrote:
A thought about bytecode: One problem with distributing things in a compiled version is that it doesn't really afford collaboration. If the primary means of distribution instead would be the source as such, it's much easier to debug and fix issues discovered in imported modules. If something like a github fork was the standard way of importing it would be even better, then you are basically just a pull request away from contributing tweaks too.
Things like minification as in the javascript world would have to be discouraged though, so compilation and caching should be designed so as not to be a performance problem.
But, like git, this should be easy if things are based on immutable content-addressable fragments. See f.ex. Datomic and/or camlistore as an approach to this (and docker for that matter).


BR
John

David Barbour

unread,
Sep 23, 2013, 4:12:26 PM9/23/13
to reactiv...@googlegroups.com
On Mon, Sep 23, 2013 at 8:48 AM, Matt McLelland <mclella...@gmail.com> wrote:

If you want to say that programming is a spectrum and UI usage is programming lite, that's fine with me, but I certainly don't see that choice of terminology as important.

It's important that the PL and UI designers think "a good UI should have characteristics of a good PL, and vice versa". Currently, they don't. You seem to believe people will look past the distinction in terminology. Historically, they won't. 
 

look back at their tangled web of Kismet script and wish they'd learned to be comfortable with text :).

I'm not so sure about that conclusion. Isn't it better that a tangled web actually LOOK like a tangled web, rather than look like 'text' but require hopping from one textual process to another without any visible connection between them?

Of course, it would be ideal that the environment can help them by fading out parts of the web irrelevant to a particular query or concern.

 
I think the important points of your viewpoint don't depend on calling ordinary UI usage programming.

I think that, if I still viewed ordinary UI usage and programming as distinct activities, I would have been unable to develop and refine my points. The value of a viewpoint is how it shapes the viewer. You also have a viewpoint, one that shapes your behavior. As is, you clearly plan to maintain the wall of syntax and arcane types between users and programmers.
 


One principle I hold is that surface syntax design should be almost exclusively designed for consumption by humans.

Isn't "surface syntax design" just another phrase for "UI"?

I certainly agree there should be a visible surface for human interaction. In the environment I described, this visible program is the artifacts humans produce - images, widgets, documents, diagrams, graphs, geometries. These broad categories may have a variety of structures and notations, DSLs and dialects. Ultimately, these objects are interpreted by code in the user-model (formally, consumption by the human).  

I seem to be holding this principle, even if through a different philosophy.

 
meta-programming via the same interface to code that humans use is a bad idea

It's a great idea. 

Every *user action* is an act of meta-programming. That doesn't mean the surface humans see is user-actions, but rather that every action they take is explained by a metaprogram and that this metaprogram can be accessed formally - e.g. for undo, review, history rewriting, and extracting reusable meta-programs. Those extracted meta-programs might then be applied to directly manipulate structure, or may be preserved as a form of latent manipulation. Regardless, I think they will have a more intuitive meaning to their users, since they correspond to concrete manipulations of meaningful artifacts.
 

The fact that UI inputs are best viewed as *streaming* programs is, to me, another good reason not to view UI inputs and programs as isomorphic.  A UI input is a special kind of program.

The full stream might seem a little special (it's unbounded, and the history is GC'd). But ultimately the UI input for any given period of time is a safe, pure, `state->state` function. It isn't clear what is special about that.
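A tiny Python sketch of that point (illustrative only; the event vocabulary is made up): each slice of UI input denotes a pure state-to-state function, and a session is simply their composition.

    from functools import reduce

    # Each user event denotes a pure state -> state function.
    def insert(text):
        return lambda state: dict(state, buffer=state["buffer"] + text)

    def clear():
        return lambda state: dict(state, buffer="")

    # A period of UI input is a finite list of such functions...
    session = [insert("hello"), insert(", world"), clear(), insert("goodbye")]

    # ...and its meaning is their composition: one state -> state function.
    def compose(events):
        return lambda state: reduce(lambda s, f: f(s), events, state)

    print(compose(session)({"buffer": ""}))   # {'buffer': 'goodbye'}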


I'd still call it local, though, because it just requires examining the current context.  Unless you're talking about global linear variables, which I'd call a design error.

The 'current context' can be quite large once you account for closures and nested scopes.

And there's nothing wrong with linear globals. We usually call them 'singletons' or similar, though.


John Nilsson

unread,
Sep 23, 2013, 4:54:38 AM9/23/13
to augmented-...@googlegroups.com, Fundamentals of New Computing, reactiv...@googlegroups.com
A thought about bytecode: One problem with distributing things in a compiled version is that it doesn't really afford collaboration. If the primary means of distribution instead would be the source as such, it's much easier to debug and fix issues discovered in imported modules. If something like a github fork was the standard way of importing it would be even better, then you are basically just a pull request away from contributing tweaks too.
Things like minification as in the javascript world would have to be discouraged though, so compilation and caching should be designed so as not to be a performance problem.
But, like git, this should be easy if things are based on immutable content-addressable fragments. See f.ex. Datomic and/or camlistore as an approach to this (and docker for that matter).


BR
John


Den söndagen den 22:e september 2013 skrev David Barbour:

Roly Perera

unread,
Sep 21, 2013, 4:07:17 AM9/21/13
to reactiv...@googlegroups.com
You couldn't put it better ;)

David Barbour

unread,
Sep 23, 2013, 4:47:36 PM9/23/13
to Fundamentals of New Computing, reactiv...@googlegroups.com, augmented-...@googlegroups.com
Chris,

You offer a lot of good advice. I agree that dog-fooding early would be ideal. 

Though for UI, I currently favor one of two directions:
* web apps
* OpenGL (perhaps just a subset, the WebGL API)

I also want to address these in a manner more compatible with reactive programming. Fortunately, UI is a relatively good fit for both pipelining and reactive programming. I think I can make this work, but I might be using GPipe or LambdaCube as bases for the GL API.

Best,

Dave

On Mon, Sep 23, 2013 at 2:59 AM, Chris Warburton <chris...@googlemail.com> wrote:
David Barbour <dmba...@gmail.com> writes:

> My own plan is to implement a streamable, strongly typed, capability-secure
> TC bytecode (Awelon Bytecode, ABC) and build up from there, perhaps
> targeting Unity and/or developing a web-app IDE for visualization. (Unity
> is a tempting target for me due to my interest in AR and VR environments,
> and Meta's support for Unity.)

When bootstrapping pervasive systems like this I think it's important to
'dog food' them as early as possible, since that makes it easier to work
out which underlying feature should be added next (what would help the
most common irritation?), and allows for large libraries of 'scratch an
itch' scripts to build up.

I would find out what worked (and what didn't) for other projects which
required bootstrapping. Minimalist and low-level systems are probably
good examples, since it's harder for them to fall back on existing
software. I suppose I have to mention self-hosting languages like
Smalltalk, Self and Factor. I'd also look at operating systems
(MenuetOS, ReactOS, Haiku, etc.), desktop 'ecosystems' (suckless, ROX,
GNUStep, etc.), as well as Not-Invented-Here systems like Unhosted. What
was essential for those systems to be usable? Which areas were
implemented prematurely and subsequently replaced?

If it were me, I would probably bootstrap via a macro system (on Linux):
 * Log all X events, eg. with xbindkeys (togglable, for password entry)
 * Write these logs as concatenative programs, which just call out to
   xte over and over again
 * Write commands for quickly finding, editing and replaying these
   programs

With this in place, I'd have full control of my machine, but in a very
fragile, low-level way. However, this would be enough to start
scratching itches.
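For flavour, a replay step along those lines could be as small as the following Python sketch (illustrative only; it assumes the xte tool from the xautomation package and a made-up one-command-per-line log format):

    import subprocess
    import sys

    # Replay a logged "program": one xte command per line, e.g.
    #   str hello
    #   key Return
    # Lines starting with '#' are treated as comments.
    def replay(path):
        with open(path) as log:
            for line in log:
                cmd = line.strip()
                if cmd and not cmd.startswith("#"):
                    subprocess.run(["xte", cmd], check=True)

    if __name__ == "__main__":
        replay(sys.argv[1])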

When controlling Ratpoison via simulated keystrokes becomes too tedious,
I might write a few Awelon words to wrap Ratpoison's script API. I might
hook into Selenium to make Web automation easier. As each layer starts
to flake, I can go down a level and hook into GTK widgets, Imagemagick,
etc. until some tasks can be achieved by composing purely 'native'
Awelon components.

It would be very hacky and non-ideological to begin with, but would be
ever-present and useful enough to get some real usage.

Cheers,
Chris

David Barbour

unread,
Sep 23, 2013, 5:27:05 PM9/23/13
to Fundamentals of New Computing, reactiv...@googlegroups.com, augmented-...@googlegroups.com
Pavel, 

I'm interested in collaborators. But the very first help I'd need is administrative - figuring out how to effectively use collaborators. ;)

Regarding names: I think it best if names have an explicit lookup mechanism. I.e. names aren't documentation, they're more like an index in a map. If we don't automate the use of names, they won't do us very much good. But by making the automation explicit, I think their fragility and the difficulties surrounding the names (e.g. with respect to closures, messaging, drag-and-drop, etc.) also becomes more obvious and easier to analyze.

In Awelon at the moment, I use 'named stacks' that enable load/store/goto. But these are formally modeled within Awelon - i.e. as an association list. True external names and capabilities require more explicit lookups using capabilities or a powerblock.

I agree with your point that many programmers probably aren't very motivated to eliminate the boundary. Fortunately, we don't need the aid of every programmer, just enough to get the project moving and past critical mass. :)

Regards,

Dave


On Mon, Sep 23, 2013 at 3:21 AM, Pavel Bažant <pba...@gmail.com> wrote:
Dear David,

I am seriously interested in collaborating with you!

I especially like the following points:
1) Programming by text manipulation is not the only way to do programming
I actually tend to have the more "iconoclastic" view that text-based programming is "harmful" -- see my previous rant on FONC, but you mentioned what should be done, whereas I only managed to point out what should not be done.
2) I like the tacit idea. I always considered the omnipresent reliance on names as means of binding things together as extremely fragile. Do you think one could treat the names as annotations with documentation purpose, without them being the binding mechanism?
3) Last but not least: There is no fundamental difference between programmers and users. Both groups are just using computers to create some digital content. Any sharp boundary between the way the two groups work is maybe unnatural. I think psychology is an important factor here. I actually do think that many programmers actually like the existence of such boundary and are not motivated to make it disappear, but this is really just an opinion.




David Barbour

unread,
Sep 23, 2013, 9:10:45 PM9/23/13
to augmented-...@googlegroups.com, reactiv...@googlegroups.com, Fundamentals of New Computing
Okay, so if I understand correctly you want everyone to see the same thing, and just deal with the collisions when they occur. 

You also plan to mitigate this by using some visual indicators when "that word doesn't mean what you think it means".  This would require search before rendering, but perhaps it could be a search of the user's personal dictionary - i.e. ambiguity only within a learned set. I wonder if we could use colors or icons to help disambiguate.

A concern I have about this design is when words have meanings that are subtly but significantly different. Selecting among these distinctions takes extra labor compared to using different words or parameterizing the distinctions. But perhaps this also could be mitigated, through automatic refactoring of the personal dictionary (such that future exposure to a given word will automatically translate it). 

I titled this "Personal Programming Environment as Extension of Self" because I think it should reflect our own metaphors, our own thoughts, while still being formally precise when we share values. Allowing me to use your words, your meanings, your macros is one thing - a learning experience. Asking me to stick with it, when I have different subtle distinctions I favor, is something else.  

Personally, I think making the community "see" the same things is less important so long as they can share and discover by *meaning* of content rather than by the words used to describe it. Translator packages could be partially automated and further maintained implicitly with permission from the people who explore different projects and small communities. 

Can we create systems that enable people to use the same words and metaphors with subtly different meanings, but still interact efficiently, precisely, and unambiguously?

Best,

Dave


On Mon, Sep 23, 2013 at 5:26 PM, Sean McDirmid <smc...@microsoft.com> wrote:

The names are for people, and should favor readability over uniqueness in the namespace; like ambiguous English words context should go a long way in helping the reader understand on their own (if not, they can do some mouse over). We can even do fancy things with the names when they are being rendered, like, if they are ambiguous, underlay them with a dis-ambiguating qualifier. The world is wide open once you’ve mastered how to build a code editor! Other possibilities include custom names, or multi-lingual names, but I’m worried about different developers “seeing” different things…we’d like to develop a community that sees the same things.

 

The trick is mastering search and coming up with an interface so that it becomes as natural as identifier input.

 

From: augmented-...@googlegroups.com [mailto:augmented-...@googlegroups.com] On Behalf Of David Barbour
Sent: Tuesday, September 24, 2013 5:10 AM
To: augmented-...@googlegroups.com


Subject: Re: Personal Programming Environment as Extension of Self

 

It isn't clear to me what you're suggesting. That module names be subject to... edit-time lookups? Hyperlinks within the Wiki are effectively full URLs? That could work pretty well, I think, though it definitely favors the editor over the reader. 

 

Maybe what we need is a way for each user to have a personal set of PetNames.

 

 

This way the reader sees xrefs in terms of her personal petname list, and the writer writes xrefs in terms of his.

 

I was actually contemplating this design at a more content-based layer:

 

* a sequence of bytecode may be given a 'pet-name' by a user, i.e. as a consequence of documenting or explaining their actions. 
* when an equivalent sequence of bytecode is seen, we name it by the user's pet-name.
* rewriting can help search for equivalencies.
* unknown bytecode can be classified by ML, animated, etc. to help highlight how it is different.  
* we can potentially search in terms of code that 'does' X, Y, and Z at various locations. 
* similarly, we can potentially search in terms of code that 'affords' operations X, Y, and Z.

 

I think both ideas could work pretty well together, especially since '{xref goes here}{lookup}$' itself could be given a pet name.
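A rough Python sketch of the content-keyed pet-name idea above (the opcodes, the normalization rules, and the names are all placeholders of mine, not actual ABC):

    # Pet names are keyed by (normalized) bytecode content, not the other way around.
    CANCELLING_PAIRS = {("w", "w"), ("l", "r"), ("r", "l")}   # hypothetical rewrites

    def normalize(code):
        """Apply trivial cancellation rewrites so equivalent sequences share a key."""
        out = []
        for op in code:
            if out and (out[-1], op) in CANCELLING_PAIRS:
                out.pop()
            else:
                out.append(op)
        return tuple(out)

    class PetNames:
        def __init__(self):
            self.by_code = {}

        def name(self, code, petname):
            self.by_code[normalize(code)] = petname

        def describe(self, code):
            return self.by_code.get(normalize(code), "<unknown bytecode>")

    pets = PetNames()
    pets.name(["l", "w", "r"], "zip2-ish")
    print(pets.describe(["w", "w", "l", "w", "r"]))  # the leading 'w w' cancels -> 'zip2-ish'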

 

 

On Mon, Sep 23, 2013 at 1:41 PM, Sean McDirmid <smc...@microsoft.com> wrote:

Maybe think of it as a module rather than a namespace. I'm still quite against namespaces or name based resolution in the language semantics; names are for people, not compilers (subtext). Rather, search should be a fundamental part of the IDE, which is responsible for resolving strings into guids. 

 

It will just be like google mixed in with Wikipedia, not much to be afraid of. 


On Sep 24, 2013, at 4:32, "David Barbour" <dmba...@gmail.com> wrote:

Sean,

 

I'm still interested in developing a code wiki! Had that idea in mind since 2007-ish. 

 

But I might favor a more DVCS-style approach, where edits are cherry-picked into each user's/group's private view of the wiki, and where shared code is simply published to spaces where other people can find it easily. (I'd really like some sort of content-based search, i.e. find me functions relevant to this input that will produce outputs with a given property.)

 

I think forcing people to use a global Wikipedia repo will (reasonably) scare too many people off. But I also think there should be one of them, as a central collaboration point to help flatten the namespaces, and perhaps another one for each large business, and another for each project, and another for each user, with different groups finding niches for themselves.

 

The main thing is to avoid deep namespaces like Java. There are enough words for everyone.

 

(Hmm. I wonder if genetic programming with TC code might be an interesting way to have little wiki-babies. ;)

 

Best,

 

Dave

 

 

On Mon, Sep 23, 2013 at 2:04 AM, Sean McDirmid <smc...@microsoft.com> wrote:

Imagine a language that comes with one shared namespace that all language users can import from and export into, let’s call it the “code wiki.”  Search is built into the IDE so programmers can find things from the code wiki easily. Only one branch of versioning is supported, and like Wikipedia, vandalism is handled quickly via editors who care. At any rate, programmers are expected to vet code that they are interested in reusing, and ensure that changes to the code are reasonable (edit wars might result in explicit forking), aided by very good diff tooling.

 

 


David Barbour

unread,
Sep 23, 2013, 10:24:52 PM9/23/13
to augmented-...@googlegroups.com, reactiv...@googlegroups.com, Fundamentals of New Computing
Ambiguity in English is often a problem. The Artist vs. Cowboy example shows that ambiguity is in some cases not a problem. I think it reasonable to argue that: when the context for two meanings of a word is obviously different, you can easily disambiguate using context. The same is true for types. But review my concern below: "when words have meanings that are subtly but significantly different". In many cases the difference is subtle but important. It is these cases where ambiguity can be troublesome. 

Person.draw(object) <-- What do I mean by this? Am I drawing a picture? a gun? a curtain?

Regarding conversations with coworkers:

I think in the traditional KVM programming environment, the common view does seem important - e.g. for design discussions or over-the-shoulder debugging. At the moment, there is no easy way to use visual aids and demonstrations when communicating structure or meaning. 

In an AR or VR environment, I hypothesize this pressure would be alleviated a great deal, since the code could be shown to each participant in his or her own form and allow various meaning-by-demonstration/exploration forms of communication. I'm curious whether having different 'ways' of seeing the code might even help for debugging. Multiple views could also be juxtaposed if there are just a few people involved, enabling them to more quickly understand the other person's point of view.

Best,

Dave

On Mon, Sep 23, 2013 at 6:21 PM, Sean McDirmid <smc...@microsoft.com> wrote:

Ambiguity is common in English and it’s not a big problem: words have many different definitions, but when read in context we can usually tell what they mean. For “Cowboy.Draw(Gun)” and “Artist.Draw(Picture)”, we can get a clue about what Draw means; ambiguity is natural! For my language, choosing what Draw means drives type inference, so I can’t rely on types driving name lookup. But really, the displayed annotation goes in the type of the variables surrounding the Draw call (Cowboy, Gun) rather than the Draw Call itself.

 

Language is an important part of society. Though I can use translation to talk to my Chinese-speaking colleagues, that we all speak in English at work and share the names for things is very important for collaboration (and suffers when we don’t). For code, we might be talking about it even when we are not reading it, so standardizing the universe of names is still very important.

David Barbour

unread,
Sep 24, 2013, 12:14:52 AM9/24/13
to Fundamentals of New Computing, reactiv...@googlegroups.com, augmented-...@googlegroups.com

I think it's fine if people model names, text, documents, association lists, wikis, etc. -- and processing thereof.

And I do envision use of graphics as a common artifact structure, and just as easily leveraged for any explanation as text (though I imagine most such graphics will also have text associated).

Can you explain your concern?

On Sep 23, 2013 8:16 PM, "John Carlson" <yott...@gmail.com> wrote:

Don't forget that words can be images, vector graphics or 3D graphics.  If you have an open system, then people will incorporate names/symbols.  I'm not sure you want to avoid symbolic processing, but that's your choice.

I'm reminded of the omgcraft ad for cachefly.
John


David Barbour

unread,
Sep 24, 2013, 1:24:20 AM9/24/13
to Fundamentals of New Computing, augmented-...@googlegroups.com, reactiv...@googlegroups.com
Oh, I see. As I mentioned in the first message, I plan on UTF-8 text being one of the three basic types in ABC. There is text, rational numbers, and blocks. Even if I'm not using names, I think text is very useful for tagged values and such.

      {Hello, World!}

Text is also one of the problems I've been banging my head against since Friday. Thing is, I really hate escapes. They have this nasty geometric progression when dealing with deeply quoted code:

     {} -> {{\}} -> {{{\\\}\}} -> {{{{\\\\\\\}\\\}\}} -> 
        {{{{{\\\\\\\\\\\\\\\}\\\\\\\}\\\}\}}

I feel escapes are too easy to handle incorrectly, and too difficult to inspect for correctness. I'm currently contemplating a potential solution: require all literal text to use balanced `{` and `}` characters, and use post-processing in ABC to introduce any imbalance. This could be performed in a streaming manner. Inductively, all quoted code would be balanced.
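To make the problem and the proposed constraint concrete, here is a small Python sketch of my own (purely illustrative; it is not the ABC quoting rule): naive backslash escaping grows geometrically under repeated quoting, whereas the balanced-brace rule is a single linear scan.

    # Naive quoting: wrap in braces and backslash-escape every brace and backslash.
    def quote(text):
        escaped = text.replace("\\", "\\\\").replace("{", "\\{").replace("}", "\\}")
        return "{" + escaped + "}"

    s = "{}"
    for _ in range(4):
        s = quote(s)
        print(len(s), s)       # length roughly doubles at every quoting level

    # The balanced-brace rule: a literal is acceptable only if its braces balance,
    # so deeply quoted code never needs escapes at all.
    def balanced(text):
        depth = 0
        for ch in text:
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    print(balanced("{hello {nested} world}"))   # True
    print(balanced("}{"))                       # False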

Best,

Dave





On Mon, Sep 23, 2013 at 9:28 PM, John Carlson <yott...@gmail.com> wrote:

I don't really have a big concern.  If you just support numbers, people will find clever, but potentially incompatible ways of doing strings.  I recall in the pre-STL days supporting 6 different string classes.  I understand that a name is different than a string, but I come from a perl background.  People don't reinvent strings in perl to my knowledge.

Matt M

unread,
Sep 24, 2013, 10:16:50 AM9/24/13
to augmented-...@googlegroups.com, reactiv...@googlegroups.com, Fundamentals of New Computing
>  Person.draw(object) <-- What do I mean by this? Am I drawing a picture? a gun? a curtain?

And the way this works in conversation is that your partner stops you and says "wait, what do you mean by 'draw'"?  Similarly an IDE can underline the ambiguity and leave it to the user to resolve, either explicitly or implicitly by continuing to write more (often ambiguity is removed with further context).

I completely agree with Sean's quote (of Johnathan Edwards?) that names are for people, and that name resolution should almost never be part of the dynamics of a language.  Names should be resolved statically (preferably at edit time).

Matt

David Barbour

unread,
Sep 24, 2013, 10:43:18 AM9/24/13
to Fundamentals of New Computing, augmented-...@googlegroups.com, reactiv...@googlegroups.com
Thanks for the ref, Chris. I'll take some time to absorb it.

On Tue, Sep 24, 2013 at 1:46 AM, Chris Warburton <chris...@googlemail.com> wrote:
David Barbour <dmba...@gmail.com> writes:

> Text is also one of the problems I've been banging my head against since
> Friday. Thing is, I really hate escapes. They have this nasty geometric
> progression when dealing with deeply quoted code:
>
>      {} -> {{\}} -> {{{\\\}\}} -> {{{{\\\\\\\}\\\}\}} ->
>         {{{{{\\\\\\\\\\\\\\\}\\\\\\\}\\\}\}}
>
> I feel escapes are too easy to handle incorrectly, and too difficult to
> inspect for correctness. I'm currently contemplating a potential solution:
> require all literal text to use balanced `{` and `}` characters, and use
> post-processing in ABC to introduce any imbalance. This could be performed
> in a streaming manner. Inductively, all quoted code would be balanced.

The geometric explosion comes from the unary nature of escaping. It
wouldn't be too difficult to add a 'level', for example:

{} -> {{\0}} -> {{{\1}\0}} -> {{{{\2}\1}\0}} ->
         {{{{{\3}\2}\1}\0}}

The main problem with escaping is that it is homomorphic: ie. it is
usually "String -> String". This is basically the source of all code
injection attacks. It wouldn't be too bad if escaping were idempotent,
since we could add extra escapes just in case, but it's not, so we end up
keeping track manually, and failing.

There's a good post on this at
http://blog.moertel.com/posts/2006-10-18-a-type-based-solution-to-the-strings-problem.html

It would be tricky to implement a solution to this in a way that's open
and extensible; if we're be passing around first-class functions anyway,
we could do Haskell's dictionary-passing manually.

Cheers,
Chris

David Barbour

unread,
Sep 24, 2013, 10:58:14 AM9/24/13
to augmented-...@googlegroups.com, reactiv...@googlegroups.com, Fundamentals of New Computing

I have nothing against name resolution at edit-time. My concern is that giving the user a list of 108 subtly different definitions of 'OOP' and saying "I can't resolve this in context. Which one do you mean here?" every single time would be insufferable, even if the benefit is that everyone 'sees' the same code.


On Sep 24, 2013 7:15 AM, "Matt M" <mclella...@gmail.com> wrote:

>  Person.draw(object) <-- What do I mean by this? Am I drawing a picture? a gun? a curtain?

And the way this works in conversation is that your partner stops you and says "wait, what do you mean by 'draw'"?  Similarly an IDE can underline the ambiguity and leave it to the user to resolve, either explicitly or implicitly by continuing to write more (often ambiguity is removed with further context).

I completely agree with Sean's quote (of Johnathan Edwards?) that names are for people, and that name resolution should almost never be part of the dynamics of a language.  Names should be resolved statically (preferably at edit time).

Matt


Matt McLelland

unread,
Sep 24, 2013, 2:44:43 PM9/24/13
to reactiv...@googlegroups.com, augmented-...@googlegroups.com
I agree that if you got to the point of having 108 identically named and subtly different definitions in scope, that would be troublesome.   My advice would be: don't do that ;).   But I think you intend this as a potential problem with Sean's idea of a global namespace.   I don't know what he intends, but my thinking is that a flat global namespace is mostly a good idea, but we'll still want tools for scope management, bringing in groups of related symbols at once.






David Barbour

unread,
Sep 24, 2013, 7:55:26 PM9/24/13
to Fundamentals of New Computing, augmented-...@googlegroups.com, reactiv...@googlegroups.com
Hmm. Indentation - i.e. newline as a default escape, then using spacing after newline as a sort of counter-escape - is a possibility I hadn't considered. It seems a little awkward in context of a bytecode, but I won't dismiss it out of hand. I'd need to change my open-quote character, of course. I'll give this some thought. Thanks.
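A minimal Python sketch of that indentation-as-quoting idea (my own illustration; the four-space convention is an arbitrary choice, not a decision about ABC):

    # Quote a block of code by indenting every line; unquote by removing one level.
    INDENT = "    "

    def quote(text):
        return "\n".join(INDENT + line for line in text.split("\n"))

    def unquote(text):
        return "\n".join(line[len(INDENT):] for line in text.split("\n"))

    program = "swap assocl swap"
    once = quote(program)
    twice = quote(once)

    # Nesting adds one indent level per quote - linear growth, no escape characters.
    assert unquote(unquote(twice)) == program
    print(twice)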

On Tue, Sep 24, 2013 at 4:19 PM, Loup Vaillant-David <l...@loup-vaillant.fr> wrote:
One way of escaping is indentation, like Markdown.

    This is arbitrary code
        This is arbitrary code *in* arbitrary code.
            and so on.

No more escape sequences in the quotation.  You just have the
inconvenience of prefixing each line with a tab or something.

Loup.


On Mon, Sep 23, 2013 at 10:24:20PM -0700, David Barbour wrote:
> Text is also one of the problems I've been banging my head against since
> Friday. Thing is, I really hate escapes. They have this nasty geometric
> progression when dealing with deeply quoted code:
>
>      {} -> {{\}} -> {{{\\\}\}} -> {{{{\\\\\\\}\\\}\}} ->
>         {{{{{\\\\\\\\\\\\\\\}\\\\\\\}\\\}\}}
>
> I feel escapes are too easy to handle incorrectly, and too difficult to
> inspect for correctness. I'm currently contemplating a potential solution:
> require all literal text to use balanced `{` and `}` characters, and use
> post-processing in ABC to introduce any imbalance. This could be performed
> in a streaming manner. Inductively, all quoted code would be balanced.

David Barbour

unread,
Sep 25, 2013, 7:23:07 PM9/25/13
to Fundamentals of New Computing, augmented-...@googlegroups.com, reactiv...@googlegroups.com
If we're just naming values, I'd like to avoid the complexity and just share the value directly. Rather than having "foo" function vs. "bar" function, we'll just have a block of anonymous code. If we have a large sound file that gets a lot of references, perhaps in that case explicitly using a content-distribution and caching model would be appropriate, though it might be better to borrow from Tahoe-LAFS for security reasons. 

For identity, I prefer to formally treat uniqueness as a semantic feature, not a syntactic one. Uniqueness can be formalized using substructural types, i.e. we need an uncopyable (affine typed) source of unique values. I envision a uniqueness source being used for:

1) creating unique sealer/unsealer pairs.
2) creating initially 'exclusive' bindings to external state. 
3) creating GUID-like values that afford equality testing. 

In a sense, this is three different responsibilities for identity. Each involves different types. It seems what you're calling 'identity' corresponds to item 2.

If I assume those responsibilities are handled, and also elimination of local variable or parameter names because of tacit programming, the remaining uses of 'names' I'm likely to encounter are:

* names for dynamic scope, config, or implicit params
* names for associative lookup in shared spaces
* names as human short-hand for values or actions

It is this last item that I think most directly corresponds to what Sean and Matt call names, though there might also be a bit of 'independent maintenance' (external state via the programming environment) mixed in. Regarding shorthand, I'm quite interested in alternative designs, such as binding human names to values based on pattern-matching (so when you write 'foo' I might read 'bar'), but Sean's against this due to out-of-band communication concerns. To address those concerns, use of an extended dictionary that tracks different origins for words seems reasonable. 

Regarding your 'foo' vs. 'bar' equivalence argument, I believe hashing is not associative. Ultimately, `foo bar baz` might have the same expansion-to-bytecode as `nitwit blubber oddment tweak` due to different factorings, but I think it will have a different hash, unless you completely expand and rebuild the 'deep' hashes each time. Of course, we might want to do that anyway, i.e. for optimization across words. 
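A small Python sketch of that point (illustrative; the word names and their expansions are made up): hashing a concatenation of per-word hashes is not the same as hashing the fully expanded bytecode, so detecting equivalence of differently factored code requires expanding (or normalizing) first.

    import hashlib

    def h(data):
        return hashlib.sha256(data.encode()).hexdigest()

    # Two different factorings of the same underlying bytecode (made-up expansions).
    DICTIONARY = {
        "foo": "lrw", "bar": "zvc", "baz": "lc",
        "nitwit": "lr", "blubber": "wzv", "oddment": "cl", "tweak": "c",
    }

    def expand(words):
        return "".join(DICTIONARY[w] for w in words)

    a = ["foo", "bar", "baz"]
    b = ["nitwit", "blubber", "oddment", "tweak"]

    print(expand(a) == expand(b))              # True  - same bytecode expansion
    print(h(expand(a)) == h(expand(b)))        # True  - hashes of full expansions match
    print(h("".join(h(DICTIONARY[w]) for w in a)) ==
          h("".join(h(DICTIONARY[w]) for w in b)))   # False - per-word hashes differ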


If I were to enter 3 characters a second into a computer for 40 years, assuming a byte per character, I'd have generated ~3.8 GiB of information, which would fit in memory on my laptop. I'd say that user input at least is well worth saving.

Huh, I think you underestimate how much data you generate, and how much that will grow with different input devices. Entering characters in a keyboard is minor compared to the info-dump caused by a LEAP motion. The mouse is cheap when it's sitting still, but can model spatial-temporal patterns. If you add information from your cell-phone - you've got GPS, accelerometers, temperatures, touch, voice. If you get some AR setup, you'll have six-axis motion for your head, GPS, voice, and gestures. It adds up. But it's still small compared to what devices can input if we kept a stream of microphone input or camera visual data.

I think any history will inevitably be lossy. But I agree that it would be convenient to keep high-fidelity data available for a while, and preferably extract the most interesting operations.




On Wed, Sep 25, 2013 at 2:45 PM, Sam Putman <atman...@gmail.com> wrote:
Well, since we're talking about a concatenative bytecode, I'll try to speak Forthfully.

Normally when we define a word in a stack language we make up an ASCII symbol and say "this symbol refers to all these other symbols, in this definite order". Well and good, with two potential problems: we have to make up a symbol, and that symbol might conflict with someone else's symbol. 

Name clashes are an obvious problem. The fact that we must make up a symbol is less obviously a problem, except that the vast majority of our referents should be generated by a computer. A computer generated symbol may as well be a hash function, at which point, a user-generated symbol may as well be a hash also, in a special case where the data hashed includes an ASCII handle for user convenience.

This is fine for immutable values, but for identities (referents to a series of immutable values, essentially), we need slightly more than this: a master hash, taken from the first value the identity refers to, the time of creation, and perhaps other useful information. This master hash then points to the various values the identity refers to, as they change. 
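As a sketch of that scheme in Python (my own illustration; the fields and hashing choices are assumptions, not a proposal):

    import hashlib
    import time

    def value_hash(data: bytes) -> str:
        # Immutable values are named by the hash of their content.
        return hashlib.sha256(data).hexdigest()

    class Identity:
        """A referent to a series of immutable values, named by a master hash."""
        def __init__(self, first_value: bytes, handle: str = ""):
            created = str(time.time())
            # Master hash: first value + creation time (+ optional human handle).
            seed = value_hash(first_value) + created + handle
            self.master = hashlib.sha256(seed.encode()).hexdigest()
            self.history = [value_hash(first_value)]

        def update(self, new_value: bytes):
            self.history.append(value_hash(new_value))

        def current(self) -> str:
            return self.history[-1]

    doc = Identity(b"hello", handle="greeting")
    doc.update(b"hello, world")
    print(doc.master[:12], "->", doc.current()[:12])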

There are a few things that are nice about this approach, all of which derive from the fact that identical values have identical names and that relatively complex relationships between identities and values may be established and modified programmatically. 

As an example, if I define a "foo" function which is identical to someone else's "bar" function, they should have the same "name" (hash) despite having different handles. With a little work, we should be able to retrieve all the contexts where a value appears, as well as all the handles and other metadata associated with that value in those contexts. 

[continued to second e-mail]
 
What we gain relative to URLs is that a hash is not arbitrary. If two programs are examining the same piece of data, say a sound file, it would be nice if they came to the same, independent conclusion as to what to call it.

Saving total state at all times is not necessary, but there are times when it may be convenient. If I were to enter 3 characters a second into a computer for 40 years, assuming a byte per character, I'd have generated ~3.8 GiB of information, which would fit in memory on my laptop. I'd say that user input at least is well worth saving.

Eran Meir

unread,
Sep 26, 2013, 8:21:08 AM9/26/13
to reactiv...@googlegroups.com, Fundamentals of New Computing, augmented-...@googlegroups.com
"This is my personal programming environment. There are many like it, but this one is mine."

With regard to naming (that's a lot of naming discussion for a tacit programming environment - don't you think?), I like the idea of personal sets of PetNames. After all, we're discussing a personal programming environment as an extension of self. It should assist the person and extend their personal capabilities. I believe most users of such an enhancing system will appreciate communicating with their personal assistant in their own language, even if it's just a slightly modified dialect of some common language.

And when I re-read the original post, I wonder if debates about ambiguity are not going the wrong way. So I'd like to offer my own incomplete metaphor: Recall that "every user action is an act of meta-programming". And user actions are inherently unambiguous - at least in the personal frame of reference. Thus, the problem is actually a problem of change in coordinate systems. As an example, consider how one's notion of "naming" is another's shifted notion of "identity".

This "relativity of semantics" can perhaps be practically reconciled using some rewriting protocols (transformations), helping communicating parties find common ground. On the other hand, a foundational problem with name reconciliation is that it's basically a unification problem -and this problem is undecidable for some logic theories/type systems.

I'm not sure I understand enough of David's idea (or substructural logic) to tell whether this is a real problem or not, but I wanted to chime in, since I find the thread fascinating.

Best regards,
Eran.


On Thu, Sep 26, 2013 at 2:23 AM, David Barbour <dmba...@gmail.com> wrote:
 
... 

David Barbour

unread,
Sep 26, 2013, 12:03:54 PM9/26/13
to augmented-...@googlegroups.com, reactiv...@googlegroups.com, Fundamentals of New Computing
On Thu, Sep 26, 2013 at 5:21 AM, Eran Meir <eran...@gmail.com> wrote:
"This is my personal programming environment. There are many like it, but this one is mine."

Indeed. That's the same way I feel about my smart phone, and my Ubuntu desktop. :)

Except those aren't nearly as casually personalizable as I want, due to the coarse granularity for code distribution and maintenance. :(

Regarding the deep discussion of names seeming out of place for a tacit model: yeah, I thought so too.  My own vision involves programming-by-example extraction or workspace compilation into an inventory of reusable AR/VR/GUI tools (mattock, wand, menus, etc.) or macro assignments that will often be just as tacit and nameless as the objects upon which they operate. Sharing values, even behaviors, should rarely involve use of names.

But Sean and Matt are envisioning a very text-based programming environment, due to their own experiences and their own development efforts. I'm not going to take that away from them (it would be futile to try). Also, text-based programming is undoubtedly more convenient for a subset of domains. I'm still interested in supporting it (perhaps via pen-and-paper and AR), and text-based artifacts (documents, diagrams) are easily represented in the model I propose. At least for these cases, I can usefully discuss written names.

I agree with your position on pet names. But I can also understand Sean's position; technology hasn't quite reached the point where we can easily discuss code while pointing at it in a shared environment supporting multiple views. I keep looking forward to Dennou Coil and other visions of ubiquitous computing and an AR future. The technology is getting there very quickly.

There will always be some common ground for people to meet, e.g. due to formal structure, initial visualizers, and the sharing of values. But I'd love to see different communities evolve, diverging and merging at different points. I'd love to see children picking up metaphors, tools, and macros from their parents. The formal structure can still support a lot of integration and translation.

Warm Regards,

Dave

David Barbour

unread,
Sep 26, 2013, 2:12:23 PM9/26/13
to Fundamentals of New Computing, augmented-...@googlegroups.com, reactiv...@googlegroups.com

On Thu, Sep 26, 2013 at 10:03 AM, Sam Putman <atman...@gmail.com> wrote:
The notion is to have a consistent way to map between "a" large sound file and "the" large sound file. From one perspective it's just a large number, and it's nice if two copies of that number are never treated as different things. 

If we're considering the sound value, I think you cannot avoid having multiple representations for the same meaning. There are different lossless encodings (like FLAC vs. WAV vs. 7zip'd WAV vs. self-extracting JavaScript) and lossy encodings (Opus vs. MP3). There will be encodings more or less suitable for streaming or security concerns. If we 'chunkify' a large sound for streaming, there is some arbitrary aliasing regarding the size of each chunk.

So when you discuss a sound file, you are not discussing the value or meaning but rather a specific, syntactic representation of that meaning. 
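
A toy Python illustration of that gap, with zlib standing in for a real lossless codec like FLAC (the data is a stand-in, not real audio):

import hashlib, zlib

samples = bytes(range(256)) * 4              # stand-in for decoded audio samples
raw        = samples                         # "WAV-like": raw bytes
compressed = zlib.compress(samples)          # "FLAC-like": lossless compression

assert zlib.decompress(compressed) == raw    # same meaning once decoded...
assert hashlib.sha256(raw).digest() != hashlib.sha256(compressed).digest()
# ...but different representations, hence different hashes / names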

(A little philosophy.) 

In my understanding, the difference between information (or data) and pure mathematical values is that the information has origin, history, context, inertia, physical spatial-temporal representation, and even physical mass (related to Boltzmann's constant and Landauer's principle). Information is something mechanical, and much of computer science might be more accurately described as information mechanics. From this perspective (which is the usual one I hold), copies of a number really are different. They have different locations, different futures. Further, they can only be considered 'copies' if there was an act of copying (at a specific spatial-temporal location). A large number constructed by two independent computations isn't a copy and may have unique meaning.



For identity, I prefer to formally treat uniqueness as a semantic feature, not a syntactic one.

I entirely agree! Hence the proposal of a function hash(foo) that produces a unique value for any given foo, where foo is an integer of arbitrary size (aka data). We may then compare the hashes as though they are the values, while saving time.

How often do we compare very large integers for equality?

I agree that keeping some summary information about a number, perhaps even a hash, would be useful for quick comparisons of very large integers (large enough that the extra memory for the hash is negligible). But I imagine this would be a rather specialized use case.
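
A sketch of that specialized case (the class and names are illustrative, not a proposal): cache a digest next to the big integer, let unequal digests answer immediately, and fall back to the real comparison when digests match, since hashes can collide.

import hashlib

class HashedInt:
    def __init__(self, n):
        self.n = n
        nbytes = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
        self.digest = hashlib.sha256(nbytes).digest()
    def __eq__(self, other):
        if self.digest != other.digest:
            return False              # cheap negative answer
        return self.n == other.n      # confirm; hashes can collide

a = HashedInt(10**100000 + 7)
b = HashedInt(10**100000 + 9)
print(a == b)                         # False, settled by the 32-byte digests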


Hashing is not associative per se but it may be made to behave associatively through various tweaks:



Even a Merkle tree or a tiger tree hash has the same problems with aliasing and associativity of the underlying data.
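
For instance, a minimal Merkle-style sketch in Python (illustrative only): the same underlying bytes get different root hashes depending purely on how they happened to be chunked.

import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(chunks):
    level = [h(c) for c in chunks]
    while len(level) > 1:
        level = [h(b"".join(level[i:i+2])) for i in range(0, len(level), 2)]
    return level[0]

data = b"the same sound file, byte for byte"
four_byte_chunks  = [data[i:i+4] for i in range(0, len(data), 4)]
eight_byte_chunks = [data[i:i+8] for i in range(0, len(data), 8)]

print(merkle_root(four_byte_chunks) == merkle_root(eight_byte_chunks))  # False: chunking leaks into the name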

Best,

Dave



