Recommended video


Kent Tenney

Feb 20, 2012, 3:29:38 PM
to leo-editor
http://flowingdata.com/2012/02/20/live-coding-and-inventing-on-principle/

I found this quite amazing, a glimpse at the future of programming environments,
and a lovely description of a good way to live.

Bret Victor

The guy has some cred: a _tiny_ fraction of his CV:
"Apple. I designed the initial user interface concepts for iPad, iPod Nano, and half a dozen experimental hardware platforms."

http://worrydream.com

I can't recommend it highly enough, though it seems I just tried.

Matt Wilkie

Feb 20, 2012, 6:10:18 PM
to leo-e...@googlegroups.com
On Mon, Feb 20, 2012 at 12:29 PM, Kent Tenney <kte...@gmail.com> wrote:
> http://flowingdata.com/2012/02/20/live-coding-and-inventing-on-principle/

That is truly awesome! I'm so happy to see something I've long desired
in real proof of concept form. (Here's a snippet from something I
wrote in 2000 or so, web page now gone to the great bit-bin in the
sky: "It is time for the programming paradigm of code > compile >
results to disappear. Wherever possible we should be able to directly
interact with our data. The symbolic image is a potter throwing away
cumbersome utensils and placing their hands directly on the clay. ")

Thanks for sharing!

--
-matt

Edward K. Ream

Feb 21, 2012, 7:31:30 AM
to leo-editor
On Feb 20, 2:29 pm, Kent Tenney <kten...@gmail.com> wrote:
> http://flowingdata.com/2012/02/20/live-coding-and-inventing-on-princi...
>
> I found this quite amazing, a glimpse at the future of programming environments,
> and a lovely description of a good way to live.

Many thanks for this link. I enjoyed the video immensely. It's quite
a "whack on the side of the head".

Here are my reactions.

First, the critical bits, just to show that I've slept on the video
and haven't fallen completely under his spell :-) Obviously,
motivation and drive are important, but adding a moral (or moralistic)
dimension to the motivation is debatable. Perhaps pain motivates him
more than pleasure, but I doubt it. Adding Richard Stallman to the
list of "crusaders" could be called truth in advertising, I'll give
him that.

But enough of the critical bits. This is a great demo of a truly
important idea. I would put it this way--we have just begun to
explore what computers can do for us. Personal computers today are
supercomputers, and for the most part these supercomputers are simply
waiting for our keystrokes. The demo challenges us (designers) to do
more, much more, with the available computing power.

The principle of instant feedback for actions is truly important.
That's all there is to it.

For many years I have been saying that I am not looking for anything
better than Python. Now I can't say that! Instantly, I am completely
dissatisfied with my programming tools!

I loved his throw-away comment about unit tests. It shows how feeble
they really are.

As I think about the implications of Bret Victor's work, I am struck
by how easy it would be to make excuses for why our tools are not as
spectacular as his. You can probably think of several, right off the
top of your head.

But that would be a huge mistake. Instead, we must follow the
evidence, and admit that our present tools really suck :-) For
example, I have spent a week cleaning Leo, and at no time did I begin
to get any kind of instant feedback that the cleanup was making a
difference. Yes, eventually the outline became simpler, and so did
some of the code within it, but at all times I was "playing designer"
to paraphrase Bret.

The danger of making excuses is that it allows us to ignore the
opportunities. That would be truly stupid. And that is what is
likely to happen for most people, myself included. We'll get excited,
and then (quickly or not) go right back to our old ways of doing
things. If you disagree, think of how many excuses you have already
made for our present tools, including Leo.

I loved the programming demo. In fact, the right side of the demo is
an "instant debugger".

Could we do that in Leo? I think we could. The "instant debugger"
could be a rendering of the body text! It wouldn't be that hard: do
nothing unless the body text is syntactically correct, which is most
of the time while we are typing. But if it is correct, the debugger
will evaluate the body pane and show the results.
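(A minimal sketch of the idea in plain Python, with no Leo internals: `instant_eval` is a hypothetical helper that parses the body text, stays silent while the text fails to parse — i.e. while we are mid-keystroke — and otherwise executes it and reports the value of the trailing expression.)

```python
import ast

def instant_eval(body_text, env=None):
    """Evaluate body text only when it is syntactically valid Python.

    Returns (ok, result).  ok is False while the text is mid-edit
    (fails to parse), so the "instant debugger" simply waits.
    """
    env = {} if env is None else env
    try:
        tree = ast.parse(body_text)
    except SyntaxError:
        return False, None  # still typing; do nothing
    # If the body ends with a bare expression, show its value,
    # the way an interactive debugger pane would.
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        last = ast.Expression(tree.body.pop(-1).value)
        exec(compile(tree, "<body>", "exec"), env)
        return True, eval(compile(last, "<body>", "eval"), env)
    exec(compile(tree, "<body>", "exec"), env)
    return True, None
```

A real rendering pane would call this on every keystroke and repaint only when `ok` is True, which is exactly the "do nothing unless syntactically correct" rule above.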

But this is just the tip of the opportunity iceberg. We could make
excuses, saying that Bret's principle is compelling only for toy
demos, but instead, let us consider how we could scale up his ideas to
larger and larger domains.

It seems to me that he is calling forth the creation of what I will
call "deep structure". Leo was born because I don't consider computer
programs to be merely a collection of random text. My mentor, Bob
Fitzwater, said that design happens in a different space, different
from the world of bits (or words). But our tools don't exist in that
world, they treat programs *and* their design as nothing but text.

This deep structure would, in some sense, *be* the real program.
Leo's outlines are *not*, by themselves, this deep structure--they are
only an explicit structure imposed on text. In the present
programming world (excepting Bret's), there is simply no way to
manipulate anything other than program text. This leads to the edit/
test cycle, which is hugely different from the world of immediate
feedback.

Questions immediately arise: What would immediate feedback in other
domains look like? On what kind of deep structures would it operate?
How can we use computers to validate our programs and designs
automatically and instantly?

Edward

P.S. If I would choose a "moral" imperative that speaks to me it
would be this: we humans have got to do a better job at honoring
evidence. We are simply not at liberty to ignore inconvenient facts.
And yes, there truly are facts in this world. This has revolutionary
implications, and the world is in full backlash mode at present, with
tragic consequences.

To see if you understand the power of evidence, answer the question,
"What is the most revolutionary science?" Imo, there is one, and only
one, correct answer. This is not a trick question. You either know
the answer or you don't. If you don't know the answer, you are
ignorant, in a deep sense, of the modern world. In this sense, you
might as well be living 300 years ago.

EKR

Edward K. Ream

Feb 21, 2012, 7:45:28 AM
to leo-editor
On Feb 21, 6:31 am, "Edward K. Ream" <edream...@gmail.com> wrote:

> what is likely to happen for most people, myself included.  We'll get excited, and then (quickly or not) go right back to our old ways of doing things.  If you disagree, think of how many excuses you have already made for our present tools, including Leo.

It will take work to hold open the space of possibilities. A lot of
work.

Leo needs to be much more interactive. Here is another of Bret's
projects:

http://worrydream.com/MagicInk/bart_widget_demo.html

Notice how the title of the map changes.

On a programming level, the challenge is creating this level of
interactivity. Is JavaScript fundamentally more interactive than Qt?
Hard to believe...

Edward

Kent Tenney

Feb 21, 2012, 9:47:08 AM
to leo-e...@googlegroups.com
It's hard to watch and not be consumed by envy of his tools.
So the challenge is scaling that desire down to my capabilities.

So, can I create a task list which feels aligned with, (at my scale)
the advantages he demonstrated.

For me it comes back to hooking events.

The stuff he demo'd looked like frameworks of event hooks, the keyboard and mouse watched carefully informing a layer of interpretation which drove graphics, and fed back to the text.

We have some of that in the rendering family of tools.

There is room for improvement in Leo in the event tracking scaffold.

As far as the pure Wow factor, and the danger of my world looking dreary after
a glimpse of such beauty, I'm reminded of a friend's opinion on the similarity
of pornography and the Cosby show: each is a fantasy tending to promote
dissatisfaction with reality.


Edward K. Ream

Feb 21, 2012, 10:19:21 AM
to leo-editor
On Feb 21, 8:47 am, Kent Tenney <kten...@gmail.com> wrote:
> It's hard to watch and not be consumed by envy of his tools.
> So the challenge is scaling that desire down to my capabilities.

Haha.

> So, can I create a task list which feels aligned with, (at my scale)
> the advantages he demonstrated.

Yes. We have to start somewhere. Probably with a prototype, rather
than with Leo itself!

> For me it comes back to hooking events.

An interesting point of view. What I like about the first demo is
that the events happen in both directions: from the code to pictures,
and from pictures to code. Neither direction seems at all easy to me.

> The stuff he demo'd looked like frameworks of event hooks, the keyboard and mouse watched carefully informing a layer of interpretation which drove graphics, and fed back to the text.

This is indeed deep structure.

> We have some of that in the rendering family of tools.

Are you talking about Leo's rendering pane?

> There is room for improvement in Leo in the event tracking scaffold.

Well, that's an understatement.

I know from experience just how difficult it is to have bi-directional
interaction with Leo's outline. The demo is simply mind-blowing in
this regard.

Edward

zpcspm

Feb 21, 2012, 10:31:42 AM
to leo-editor
On Feb 21, 2:31 pm, "Edward K. Ream" <edream...@gmail.com> wrote:
> Adding Richard Stallman to the
> list of "crusaders" could be called truth in advertising, I'll give
> him that.

RMS is rather a prophet to me.

Ironically, I failed to watch the video because my OS is Flash-free and Firefox doesn't seem to support H.264 by default. After wasting about 30 minutes googling, I gave up.

Kent Tenney

Feb 21, 2012, 10:40:38 AM
to leo-e...@googlegroups.com
On Tue, Feb 21, 2012 at 9:19 AM, Edward K. Ream <edre...@gmail.com> wrote:
> On Feb 21, 8:47 am, Kent Tenney <kten...@gmail.com> wrote:
>> It's hard to watch and not be consumed by envy of his tools.
>> So the challenge is scaling that desire down to my capabilities.
>
> Haha.
>
>> So, can I create a task list which feels aligned with, (at my scale)
>> the advantages he demonstrated.
>
> Yes.  We have to start somewhere.  Probably with a prototype, rather
> than with Leo itself!
>
>> For me it comes back to hooking events.
>
> An interesting point of view.  What I like about the first demo is
> that the events happen in both directions: from the code to pictures,
> and from pictures to code.  Neither direction seems at all easy to me.
>
>> The stuff he demo'd looked like frameworks of event hooks, the keyboard and mouse watched carefully informing a layer of interpretation which drove graphics, and fed back to the text.
>
> This is indeed deep structure.
>
>> We have some of that in the rendering family of tools.
>
> Are you talking about Leo's rendering pane?

Right. I input text, a layer of code watches what I do, translates
it, writes it to the rendering pane.

If a single arrow is a 'watcher' and a double arrow is a 'changer',
a triple arrow is a 'watcher' plus a 'changer':

render watches me, writes to rendered

me <- render ->> rendered

in the demo, render watches me, writes to rendered,
also watches rendered, uses what it sees to change me

me <<<- render ->>> rendered
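(Sketching the triple-arrow idea in Python: one mediator watches and changes both sides. The upper-/lower-casing "renderer" is a trivial stand-in invented for illustration; a real one would be rST or markdown rendering, and the inverse mapping is the genuinely hard part.)

```python
class Render:
    """The triple-arrow mediator: editing either representation
    updates the other through this single layer."""

    def __init__(self, source=""):
        self.source = source                 # "me": the plain text
        self.rendered = self._render(source)

    def _render(self, text):
        return text.upper()                  # stand-in for a real renderer

    def _unrender(self, shown):
        return shown.lower()                 # inverse mapping back to source

    def edit_source(self, text):
        # me -> render ->> rendered
        self.source = text
        self.rendered = self._render(text)

    def edit_rendered(self, shown):
        # rendered -> render ->> me
        self.rendered = shown
        self.source = self._unrender(shown)
```

In the one-way case only `edit_source` exists; the demo's magic is that `edit_rendered` exists too.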

>
>> There is room for improvement in Leo in the event tracking scaffold.
>
> Well, that's an understatement.
>
> I know from experience just how difficult it is to have bi-directional
> interaction with Leo's outline.  The demo is simply mind-blowing in
> this regard.
>
> Edward
>

HansBKK

Feb 21, 2012, 11:09:03 PM
to leo-e...@googlegroups.com
I bookmarked this a week ago and am embarrassed to admit I had no idea it was that revolutionary.

From my end-user POV I related it to the rendering of pretty-text from source markup (as in reST, markdown et al). It's a pretty tame example domain, but perhaps that makes it a realistic one for you programming gurus to tinker with, whether in Leo or separately to start with.

Rich-text-via-markup editors currently have a range of interactivity models; I'll list a couple I've been working with in my toolchain quest:

Old-school is of course edit plaintext in one window then "press a button" (run a script etc.) to check the result - "good enough" for most but can be a challenge to set up for end-users.

One step up from this is within the same app, but still switching between edit-plaintext and view-rendered.

RedNotebook is an interesting in-between: its edit mode shows the markup in light grey and semi-renders bold, headlines, etc.

Zim-wiki is a good example of WYSIWYG live rendering while editing, but requires jumping to an external editor to work directly in the plaintext markup.

I'd really love to see one that offered the ability to choose either mode, or even show two panes side by side: edit the plaintext in one with live rendering in the other, **and** edit directly in the WYSIWYG pane with live display of the changes to the plaintext.

And while I'm dreaming, have a plugin architecture to allow per-document choice of syntax - reST, LaTeX, the various extended markdowns (Pandoc first), maybe txt2tags, obviously raw html.
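(A sketch of what such a per-document syntax plugin registry might look like; the syntax names are from the wish-list above, but the decorator, function names, and stub renderers are all invented for illustration, not any real plugin API.)

```python
# Registry mapping a syntax name to its render function,
# so each document can declare which markup it uses.
RENDERERS = {}

def renderer(name):
    """Decorator registering a render function under a syntax name."""
    def wrap(fn):
        RENDERERS[name] = fn
        return fn
    return wrap

@renderer("rest")
def render_rest(text):
    return "<rest>" + text + "</rest>"   # stand-in for docutils

@renderer("markdown")
def render_markdown(text):
    return "<md>" + text + "</md>"       # stand-in for a markdown library

def render(text, syntax="markdown"):
    """Render with the document's declared syntax; fall back to raw text."""
    try:
        return RENDERERS[syntax](text)
    except KeyError:
        return text                      # unknown syntax: show it raw
```

Adding LaTeX, txt2tags, or raw HTML would then be one more decorated function each, which is the whole point of the plugin architecture.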

Ah if I were rich I'd actually make this happen. . .