Early sharing of some ideas


CJJ

Jul 27, 2011, 1:47:16 AM
to TUNES Project
Hi Group,

As a fresh new member who has browsed (and, I hope, read attentively
enough) most if not all of the TUNES project's content several times,
first intrigued and then seduced by its unusual ambition, I thought I
should just go ahead with what follows, instead of trying hard to make
up an unobjective but mostly not-very-interesting intro about myself
(that is, what I did or do for a living, largely irrelevant to TUNES,
though strongly coding-related, of course).

So, I'm just sharing here some of my ideas, still very intuitive
(maybe too much so?) but which, I think, relate quite a bit to the
aforementioned TUNES ambition (again, one that I find very respectable,
AFAIC/if one asks me).

As a background/high-level introductory note: I'm especially
interested in the TUNES topics surrounding what it contemplates for
PL(s), and the ever-related, more or less well-known PLT results
(Gödel's, Turing's, etc.). I have much, much less to say, if anything,
wrt TUNES' envisioning of the design of the "lower level services",
the TCB, etc., in the OS per se.

Without further ado here it is.

As I see it, much has happened in the PL communities over the last
couple of decades: even the ever-profit-driven software industry has
eventually been impacted by the "rediscovery" of long-known clean
programming practices -- the functional programming paradigm now
inserted into "trendy" mainstream PLs, the realization that OOP is
certainly not the "one to rule them all" paradigm, the benefits of
valuing proofs, correctness, verifiability, etc.

Still, the same industry, interested in shorter- rather than longer-term
progress (and to some large extent, research too), is basically stuck
facing the same high-order issue as the software/hardware markets: in
the latter, it's called "technology/platform fragmentation" (e.g. the
too-much-choice mess of mobile apps and frameworks and OSes and ...
devices), and in the former one could maybe call it "PLT fragmentation".

Did we once think that Turing had said it all, once and for all? Maybe
we did, but useful contributing communities such as LtU see their
forum threads constantly populated with the same good old debates:
multi-paradigm vs. mono-paradigm PLs? Which one to prefer/privilege?
On what rationale? Functional vs. reactive? Can one embed the other,
in part or in whole? Etc., etc.

I had the following intuition:

since (1) abstraction is one of our most powerful levers to fight back
against accidental complexity in design and implementation, and since
(2) with that dear REST-flavored core of a WWW+DNS (yes, from day one
actually -- read Tim BL back in the early '90s, even before Fielding's
thesis/assessment 10 years later) we can now, in reasonably delimited
knowledge use cases, at least for a good number of applications, make
the [CWA], why not go one step farther and make *languages* themselves
first class citizens *within* the platform/infrastructure?

Hence the links below, which haven't attracted much attention/feedback
so far; maybe they will make sense to the broader vision-oriented
folks at TUNES?

Cheers,
CJJ

[CWA] Closed World Assumption
http://en.wikipedia.org/wiki/Closed_world_assumption

[LtU post] Towards a T-based formalism to look at and think of
language transforms/rewriting artefacts/computation as first class
citizens to identify/name/compose/infer about
http://lambda-the-ultimate.org/node/4136#comment-63098

[Imagining a regard URI scheme] Dropping our exclusive focus on type
systems and looking at language systems instead
http://www.ysharp.net/the.language/design/?p=93

[PEG-derived experiment on code google] Starting with the syntax;
evolving PEGs, at input parse time
http://code.google.com/p/ysharp/

Thomas de Grivel

Jul 27, 2011, 9:18:39 AM
to tunes-...@googlegroups.com
On 27/07/11 07:47, CJJ wrote:

Thanks for taking the time to write up your intuitions; I enjoy your
writing style and others surely will too. But what are you trying to say?

For instance, UNIX platforms have compilers for almost every useful
language out there. Are they first class citizens enough for your
definition?

The so-called short-term industry is still using UNIX and a wide
variety of languages covering most use cases and paradigms; the only
problem is the high maintenance cost, from the fragmentation you
described. How would "first class citizen languages" solve this
problem rather than make it worse?

And if you want my counter-intuitive point of view on the software
industry:

I believe the fact that most of the industry is high on Java and crappy
designs is mostly a cultural problem, not a technical one: companies do
not have the knowledge to make good design decisions and have a hard
time hiring good developers and replacing them. I think a good rationale
(if rationale is any good) for that is in this essay:
http://www.winestockwebdesign.com/Essays/Lisp_Curse.html

I guess the solution lies in open source, and despite its low activity
TUNES is a remarkable project. Lisp communities are slowly gaining mass,
and the number of people publicly scratching their itch is growing
thanks to GitHub and others. Crazy, unifying designs might become
something we can implement some day, but much hacking is needed. There
is a lot of crazy work to do, and we're all short on time.

Tom Novelli

Jul 27, 2011, 1:41:52 PM
to tunes-...@googlegroups.com
I'm just going to reply to a few points below...

On 27/07/11 07:47, CJJ wrote:

> I had the following intuition:
>
> since (1) abstraction is one of our most powerful lever to fight back
> against accidental complexity in design, implementation, and since (2)
> with that dear RESTflavored core of a WWW+DNS (yes, from day one
> actually, read Tim BL back in early 90s even before Fielding's thesis/
> assessment 10 years later) we can now make, in reasonably delimited
> knowledge use cases, at least for a good number of applications, make
> the [CWA], why not go one step farther and make *languages* themselves
> first class citizens *within* the platform/infrastructure ?

I'm not quite sure what you're suggesting (treat all languages as DSLs?), but I'm beginning to think that the WWW and especially JavaScript are the best hope for reducing fragmentation in the foreseeable future, as millions of new programmers continue to marginalize the experienced ones.  This could go on for the rest of our lives.  It feels like defeat (have you seen "Tunes vs. the WWW"?) but if JS becomes dominant, it could at least vindicate the old Tunes vision.

On Wed, Jul 27, 2011 at 9:18 AM, Thomas de Grivel <bill...@gmail.com> wrote:

> And if you want my counter-intuitive point of view on the software industry :
>
> I believe the fact that most of the industry is high on java and crappy designs is mostly a cultural problem, not a technical one : companies do not have the knowledge to make good design decisions and have a hard time hiring good developers and replace them. I think a good rationale (if rationale is any good) for that is in this paper :
> http://www.winestockwebdesign.com/Essays/Lisp_Curse.html

Thanks, I hadn't seen that essay yet.  (For the record, it's by Rudolf Winestock, 15 Apr 2011, and his point is that the expressive power of Lisp has fragmented it, as a community and as a technology.)

JavaScript might have the same internal fragmentation problem as Lisp, even if the core JS language is set in stone (unlike the many Lisp dialects).  That's probably unavoidable.  Even Python, whose "batteries included" and "there's only one way to do it" philosophies were meant to prevent it, suffers from fragmentation among 'frameworks' etc.  The higher the level of abstraction, the harder it is for people to agree on standards... so, JS stands a much better chance than 'Real Lisp'.

This idea (cultural problems) reminds me of something else I've noticed: The software industry/world is still rooted in the era of large central mainframes and minicomputers.  Some 90% of the software work I've done (for hire) has been driven by government bureaucracy -- tax codes, social welfare (e.g. university financial aid), and corporate welfare (e.g. military pork projects).  I hate it.  I exaggerate only slightly when I say that if our society stopped funding useless software projects, 99% of the jobs would disappear and software would be considered an art form.  So, today's software systems hardly seem relevant in this era, when cheap internetworked personal computing devices are everywhere, and there's a growing social movement to harness technology as an antidote to the age-old plague of centralized authority.  (Yes, I've been reading "Guns, Germs, and Steel" :-)

> I guess the solution lies in open source and despite its low activity TUNES is a remarkable project. Lisp communities are slowly gaining mass, and the number of people publicly scratching their itch is growing thanks to github and others. Crazy, unifying designs might become something we can implement some day, but much hacking is needed. There is a lot of crazy work to do, and we're all short on time.

Yep.  Don't hold your breath, folks :)

-Tom

Cyril Jandia

Jul 27, 2011, 7:39:51 PM
to TUNES Project

In turn, thanks for your time and feedback, Thomas.

(I'm going to reply to yours and Tom's separately, here.)

On Jul 27, 6:18 am, Thomas de Grivel <billi...@gmail.com> wrote:
> Thanks for taking time to write your intuitions, I enjoy your writing
> style and others will surely do too. But what are you trying to say ?
>
> For instance UNIX platforms have compilers for almost every useful
> language out there. Are they first class citizens enough for your
> definition ?

Short answer: no, they aren't, per the specific meaning I give to
"first class citizens" wrt languages (i.e., in the context of this
intuition).

Longer answer: they aren't because, from my pov, it's worth trying
something new and investigating how to step up to a higher level of
abstraction wrt approaching "competing" languages/paradigms (I'd
rather say, more positively: "collaborating" ones).

There, specifically, I'm talking about something much like what you
encounter in PLs offering closures (or method pointers, or delegates,
whatever you call them) to make functions "first class" (as opposed to
PLs which don't), thus able to be passed as just another kind of value
between the activation environments of other functions, etc., but with
the (somewhat bizarre/adventurous/unusual, yes, I suppose) "twist" of
actually being willing to speak, there, of formal languages/notations.

But yes, I also anticipate this comes with the cost of providing some
actual, concrete support to this programme/agenda: if one wants to
speak of/look at languages (in their syntactic and semantic
definitions) as first class values, one also has to provide enough of
an infrastructure (WWW-like, in my contemplation) to be able,
likewise, to identify/name/compose/reuse these (computing- or
modeling-related) languages' definitions. In the case of first class
function values, such infrastructure is conveniently, practically
provided by the underlying type systems (oft VM-based, as with Java,
the CTS/CLR, JS engines, etc.), and that's not so surprising, as we
are then talking about first class citizen values of this specific
abstraction: computation.
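To make the analogy concrete: the "first class" treatment of functions
described here -- a function stored, passed between other functions'
activation environments, and returned, like any other value -- can be
sketched in a few lines (illustrative Python, not tied to any
particular VM or type system):

```python
def make_adder(n):
    # The closure captures n from the enclosing activation environment.
    def add(x):
        return x + n
    return add

def apply_twice(f, x):
    # f arrives as just another kind of value, like any other argument.
    return f(f(x))

add3 = make_adder(3)
print(apply_twice(add3, 10))  # 16
```

The intuition above asks what infrastructure would play the same role
for language *definitions* that the type system plays here for
function values.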

Thus, I'm just trying to figure out how to step up a little(?) further,
I hope not unreasonably:

since my core belief (and here, take it literally: I can't provide any
more rationale than that; take it just as the "axiom" bet I need to
make if the topic I'm concerned with is a long-term one, as it seems
to be for TUNES too) is that we won't any time soon have "one PL/
paradigm to rule them all", I prefer to "humbly" accept the idea that
imaginative/innovative new PLs and paradigms will keep breeding no
matter what we try to do to unify them, and therefore my concern
becomes:

"OK then, languages will keep being invented on top of Turing's
theorems, concretized in everyday computers; how can we at least make
sure we do better (than so far) at having them -- their designs,
implementations, processing tools, and the applications built with
them -- scale better, compose better, be reused better, be pipelined
better, etc.?"

That's how I figure you then "just need" to look at it (that ever-growing
population of languages and their usages) from a higher point
of view, not just at the grounded level of the computation they embed.

Hope this makes better sense to you, but I confess I myself have a
hard time conveying these ideas without the support of many
predecessor attempts (obviously).

> The so-called short-termed industry is still using UNIX and a wide
> variety of languages covering most use cases and paradigms, only problem
> is high maintainance cost, from the fragmentation you described. How
> would "first class citizen languages" solve this problem and not make it
> worse ?

I think I've actually replied to this above already; but just in case,
in short: not by trying to unify anything, but by no longer letting
things go mostly unmanaged, as is still the case for now. A simple
summary of the agenda I have, hopefully related/useful to TUNES, is:
devise a way to manipulate language definitions (repositories,
registries thereof, etc.) and their derived artifacts after
transforming their phrases into others' via computations, and give the
tools for a language L1 an optional, simple, standardized way of
learning about "meta-linguistic" properties of a language L2: are all
L2 implementations conformant to its definition? Is its evaluation
strict or lazy? Can I embed L1 in this or that syntactic construct of
L2, with the benefit of reusing this or that part of L2's semantics
(as found in its reference implementation, for instance)? Etc., etc.
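To sketch what such a queryable infrastructure might look like
(everything below -- the `LanguageInfo` record, its fields, the toy
registry entries -- is hypothetical illustration, not an existing API):

```python
from dataclasses import dataclass, field

@dataclass
class LanguageInfo:
    # Hypothetical record of "meta-linguistic" properties a tool for one
    # language could query about another before embedding or reusing it.
    name: str
    evaluation: str                 # "strict" or "lazy"
    conformant_impls: bool          # known implementations match the definition?
    embeddable_in: list = field(default_factory=list)

# A toy in-memory registry; the proposal imagines a distributed,
# WWW-like space of such registries instead.
registry = {
    "L1": LanguageInfo("L1", "strict", True, embeddable_in=["L2"]),
    "L2": LanguageInfo("L2", "lazy", True),
}

def can_embed(host, guest):
    # Can language `guest` be embedded in a construct of `host`?
    info = registry.get(guest)
    return info is not None and host in info.embeddable_in

print(can_embed("L2", "L1"))  # True
print(can_embed("L1", "L2"))  # False
```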

'Hth

Brian Rice

Jul 27, 2011, 7:44:19 PM
to tunes-...@googlegroups.com
I am at Emerging Languages at OSCON so will have to get back to you on this thread in detail later. :-)





--
-Brian T. Rice

Hendrik Boom

Jul 27, 2011, 7:51:38 PM
to tunes-...@googlegroups.com
On Wed, Jul 27, 2011 at 04:39:51PM -0700, Cyril Jandia wrote:
>
> In turn, thanks for your time and feedback, Thomas.
>
> (I'm going for replying to yours and Tom separately, here)
>
> On Jul 27, 6:18 am, Thomas de Grivel <billi...@gmail.com> wrote:
> > Thanks for taking time to write your intuitions, I enjoy your writing
> > style and others will surely do too. But what are you trying to say ?
> >
> > For instance UNIX platforms have compilers for almost every useful
> > language out there. Are they first class citizens enough for your
> > definition ?
>
> Short answer: no, they aren't, per this specific view I give to "first
> class citizens" wrt languages (i.e., in the context of this
> intuition).
>
> Longer answer: they aren't because, from my pov, it's worth to try
> something new and investigate on how to step up to a higher level of
> abstraction wrt approaching "competing" languages/paradigms (I'd
> rather say, more positively: "collaborating" ones).

Is the #lang keyword in the Racket implementation of Scheme a bit
closer to what you have in mind?

-- hendrik

Cyril Jandia

Jul 27, 2011, 8:27:43 PM
to TUNES Project


On Jul 27, 10:41 am, Tom Novelli <tnove...@gmail.com> wrote:
> I'm just going to reply to a few points below...

Thanks Tom.

> On 27/07/11 07:47, CJJ wrote:
>
> I'm not quite sure what you're suggesting (treat all languages as DSLs?),

Sorry if I wasn't clear. Indeed, I reckon I wasn't, actually. I have
tried to give a more down-to-earth answer about my rationale in my
reply to Thomas.

"To treat all languages as DSLs" is maybe a bit too strong, but it's
not so distant from what I'm thinking of either, provided we both
agree on the strong relativity of labelling this or that language a
"DSL" in some sense vs. a claimed general-purpose PL.

I'm just not very fond of labels in general, especially when the
ontologies you're dealing with for your problem domain have
non-trivial relationships, and I believe that's the case with PLs and
PLT in general, though some concepts, happily, also have strong, clear
and unambiguous definitions.

I reckon, though: I first had these ideas in a much more restricted/
specific context, where I was contemplating improving on a specific
Microsoft technology that I found sort of "incomplete" (well, to my
taste anyway) in that respect. Btw, that technology wasn't vaporware,
but it seems to have been put back into standby by MS and given less
priority anyway (aka "Oslo").

It occurred to me only "recently" (6 or so months ago) that most of
the ideas I had to improve on Oslo's own ideas (which I found
interesting, but not yet equipped with enough of the scalability
qualities I was personally wishing for in order to use it) were likely
applicable to a broader scope than just the modeling notations used
for "dry" artifacts most often encountered as input or output of
transformation tools: that they were possibly relevant to most PLs,
actually.

> but I'm beginning to think that the WWW and especially JavaScript are the
> best hope for reducing fragmentation in the foreseeable future, as millions
> of new programmers continue to marginalize the experienced ones.  This could
> go on for the rest of our lives.  It feels like defeat (ahve you seen "Tunes
> vs. the WWW"?) but if JS becomes dominant, it could at least vindicate the
> old Tunes vision.

You probably guessed already that I disagree with this. Or, anyway,
that I'm not willing to bet on JavaScript (and associated
technologies) alone for the long term, even though I do enjoy using it
every day and it's certainly not on the list of languages I'd complain
the most about...

But I won't try to convince you otherwise as to whether or not
JavaScript can still be a good horse to bet on for the next X years to
come. I'm just saying: my intuition/belief tells me "no", as appealing
as the current research/development work on ES.next from Brendan Eich
(and others) can be. The fundamental cause of software fragmentation
will still be with us, I think, whether JavaScript or a more modern
descendant of it is around or not. Note this remark applies to any
language, per the assumption I make; precisely, my point is that it's
not so useful to try hard to unify PLs into one (or a small subset) of
today's offering as it is (again, in a broader vision) to build on the
scalability lessons we got from the making of a human-targeted WWW --
pretty successful so far, for human-targeted content at least, while
the need for a "semantic web" appeared quite quickly afterwards as
something overlooked -- and to integrate some of its architectural
ideas, where:

we stop focusing on *locally-made* computations (type systems on the N
processors we have on our machines) producing *distributed artifacts*
(C/S payloads over HTTP, via JSON and whatnot)

and instead:

start glancing at/giving a try to a new point of view on what we mean
by interoperability, composability, reuse: no longer at the "ground
level" of the type system of language X or Y alone, what it can
compute and at what cost, but one step higher, i.e., at a level where
languages' definitions and identities are higher-order values, finally
sharing the same space, on the infrastructure they rely upon, as their
processing tools' artifacts: a global space, with chains of
authorities; registries, private or public, with promotion from
private to public if needed; chains of verifiers; conformance
checkers; etc.

Could TUNES (or the 'OS' part of it) be a fundamental part of this
agenda? I have no idea yet, apart from its shared ambition to look at
really new ways to solve (some) real-world problems with the help of
computer languages.

> [...]
> -Tom

Cyril Jandia

Jul 27, 2011, 8:50:58 PM
to TUNES Project

I don't know Scheme, nor Racket, sorry. That #lang keyword seems like
a nice feature in that language, though.

I'd say it's vaguely related to my idea, yes: at the syntactic form/
sugar surface level, adaptable grammars (or rather, adaptable parsers,
which is a better term) are among the things I'm contemplating. But
again, I suspect I'm wishing to address a bigger picture. [Starting
with syntax]

In this specific instance, for example: how could we have *the
infrastructure/platform* provide us support for the same kind of thing
in, let's say, F# vs. C# as well, without having to go through most of
the hassle of reimplementing the former to embed it in the latter?

(Thus, not just as a specific/ad hoc asset of some Scheme/Racket
implementations alone, but something a tool or application implementor
could "instruct" the respective legacy compilers/implementations to do
on his/her behalf if, via the same infrastructure, those
implementations report that they support this a priori unexpected
usage.)

'Hth,
[Starting with syntax]
http://lambda-the-ultimate.org/node/4082

Thomas de Grivel

Jul 28, 2011, 6:54:49 AM
to tunes-...@googlegroups.com
Sorry, I'll keep trying to match the discussion to the real world:

Maybe that's a dream I dream too: "let's formalize the semantics of
all languages and compile any of them into any other". Maybe we should
determine and dump the semantics of all first-class languages into
Prolog. Is that a hypothetical implementation of your idea? That's a
heap of work though, and heavyweight, as you have a very large dataset
to reason upon.

Or "let's have all the compilers compile to a good internal
representation", like Lisp, or Shen when it's ready. Actually,
languages as first-class citizens do remind me of something which
gives Lisp its power: macros, and reader macros. In Lisp we could
define a reader macro "C++" which would read and parse C++, converting
it into Lisp code. And this is not a special construct of Lisp; macros
are the core of Lisp.

If we can also translate back to the source language then we can do
pretty funny things, but that almost doubles the work. Actually, you
can almost do this with JavaScript in Common Lisp with Parenscript.
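The reader-macro idea -- reading a foreign syntax into the host's own
representation, then treating it as host code -- can be caricatured
outside Lisp too. A toy Python sketch, with nested tuples standing in
for S-expressions and a deliberately naive left-to-right reader (no
operator precedence):

```python
import operator
import re

OPS = {"+": operator.add, "*": operator.mul}

def read_infix(src):
    # "Reader" for a foreign infix syntax: folds tokens left to right
    # into nested tuples, the host representation.
    tokens = re.findall(r"\d+|[+*]", src)
    expr = int(tokens[0])
    for op, num in zip(tokens[1::2], tokens[2::2]):
        expr = (op, expr, int(num))
    return expr

def eval_sexpr(e):
    # Evaluate the host representation as ordinary host code.
    if isinstance(e, int):
        return e
    op, a, b = e
    return OPS[op](eval_sexpr(a), eval_sexpr(b))

tree = read_infix("1 + 2 * 3")
print(tree)              # ('*', ('+', 1, 2), 3) -- left-assoc, no precedence
print(eval_sexpr(tree))  # 9
```

A real reader macro for C++ would of course be a vastly larger
undertaking; this only illustrates the read-then-embed shape of the
mechanism.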

This is first-class enough for me in the context of an abstract idea.
But it means all compilers for complex languages would have to be
ported, and would not be optimized.

The problem is that when you try to compile most languages to
something other than x86, you realize they're completely ill-defined
and produce broken abstractions, etc. Or the produced code is
impossible to read or optimize.


On 28/07/11 02:50, Cyril Jandia wrote:

Cyril Jandia

Jul 28, 2011, 3:10:10 PM
to TUNES Project
Hi Thomas,

On Jul 28, 3:54 am, Thomas de Grivel <billi...@gmail.com> wrote:
> Sorry, i'll keep trying to match the discussion to the real world :

Sure. That's what I aim at, too.

> [...] "let's formalize the semantics of all
> languages and compile any of them into any other". Maybe we should
> determine and dump the semantics of all first-class languages into
> Prolog. Is that an hypothetical implementation of your idea ?

No, it isn't. I do not intend, at all, to formalize the semantics of
all languages (nor of a significant subset thereof) to enable, later
on, some hypothetical rewriting of their phrases into another
representation/language (or a few others). That would basically mean
doing what I said I'm not interested in/don't believe is worth it:
trying to unify them.

> That's a heap of work though, and a heavy weight as you have a very large dataset
> to reason upon.

Agreed, if that were my intent there, and if it were even possible,
which I seriously doubt. The current state of affairs in proof
assistants, theorem provers, verifiers, etc. is barely starting to
grasp the scope of the problem domain wrt provability of correct code,
for a limited set of "shapes" of the current type systems implemented
here and there in mainstream VMs and/or native compilers and
interpreters, and AFAIK mostly for non-concurrent programming language
usage scenarios. It would indeed be a formidable amount of work to try
to unify the semantics of whatever set of PLs you had chosen to keep
into one, and to verify that such a unification is eventually faithful
to every single language, one after another. And that wouldn't yet
take into account what's also going on when those languages are used
in concurrent programming situations (where supported).

I have discarded such an approach a long time ago.

> Or "let's have all the compilers compile to a good internal
> representation", like Lisp or Shen when it's ready.

I'm sorry if I confused you, but this is NOT a strategy I'm thinking
of EITHER, really.

If I understand your point correctly, that's roughly equivalent to
saying I would be trying, this time, to unify the various languages'
virtual execution environments into one VM / intermediate language
only; let's say, e.g., dropping the JVM for good in favor of the
CLR/CIL, or the other way round.

That's definitely not something I would consider reasonable either,
for all the obvious reasons related to the portability and/or
efficiency of this or that VM implementation's source code on this or
that target processor architecture. Plus, imho, it's very much a
matter of taste/personal preference, more than of objective argument,
whether this VM or intermediate language addresses the portability
to/optimal use of a given processor better than that other VM or
intermediate language.

> Actually languages as first-class citizens do remind me of something which gives its power
> to Lisp : macros, and reader macros. In Lisp we could define a reader
> macro "C++" which would read and parse C++ converting it into Lisp code.
> Now this is not a special construct of Lisp, macros are the core of Lisp.

So it seems I need to rephrase the core idea, maybe differently, one
more time to clarify. I'll give it a try later/soon.

> [...]

Thomas de Grivel

Jul 29, 2011, 10:25:48 AM
to tunes-...@googlegroups.com
On 07/28/11 21:10, Cyril Jandia wrote:

I think we can imagine these datasets coming to life partially in the
semantic web. I guess what's lacking is the motivation to encode
formalized semantics into RDF. We usually don't doubt that we can
compile one PL into another; only, we usually do it in an informal
way: it's a huge enough work already, without taking the extra time
needed to expose the underlying semantics in a formalized
representation.


>> Or "let's have all the compilers compile to a good internal
>> representation", like Lisp or Shen when it's ready.
>
> I'm sorry if I confused you, but this is NOT a strategy I'm thinking
> of EITHER, really.

Sorry for my naive interpretation, I'm just trying to understand.

> If I understand correctly your point that's roughly equivalent to say
> I would be trying, this time, to unify the various languages' virtual
> execution environments into one VM / intermediary language only, let's
> say, e.g., dropping the JVM once in favor of the CLR/CIL or the other
> way round.

No, that's not what I meant. I should explain a bit more: I was not
really talking about execution but about the *internal representation*
used by a compiler, providing an ad hoc formal representation for all
languages. The advantage of using Lisp is that we already have a good
compiler for this representation, and that it is meta-programmable,
meaning we can reuse the imported languages to support further ones.


> That's definitely not something I would consider reasonable either,
> for all the obvious reasons related to the portability and/or
> efficiency of this or that VM implementation's source code over this
> or that target processor architecture. Plus, imho, it's very much a
> matter of tastes/personal preferences more than of objective argument
> that this VM or intermediary language addresses better the portability
> to/the optimal use of a given processor than that other VM or
> intermediary language.

Lisp can be compiled to any VM. We can think of the internal
representation as in LLVM: multiple languages, multiple targets.
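The appeal of a shared internal representation, as in LLVM, is that M
source languages and N targets need only M + N translators instead of
M × N. A toy sketch (everything below is illustrative; nothing
resembles LLVM's or Lisp's real representations):

```python
# Shared IR: nested tuples like ("add", 1, 2).

def frontend_lispish(src):
    # "(add 1 2)" -> ("add", 1, 2); naive parser for one flat form.
    inner = src.strip()[1:-1].split()
    return (inner[0],) + tuple(int(t) for t in inner[1:])

def frontend_infix(src):
    # "1 + 2" -> ("add", 1, 2)
    a, _, b = src.split()
    return ("add", int(a), int(b))

def backend_python(ir):
    # Emit Python-style infix source from the IR.
    op, a, b = ir
    return f"({a} + {b})" if op == "add" else ""

def backend_stack(ir):
    # Emit instructions for a toy stack machine from the same IR.
    op, a, b = ir
    return f"PUSH {a}; PUSH {b}; ADD"

# Any frontend composes with any backend through the IR.
ir = frontend_infix("1 + 2")
print(backend_python(ir))  # (1 + 2)
print(backend_stack(ir))   # PUSH 1; PUSH 2; ADD
```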


>> Actually languages as first-class citizens do remind me of something which gives its power
>> to Lisp : macros, and reader macros. In Lisp we could define a reader
>> macro "C++" which would read and parse C++ converting it into Lisp code.
>> Now this is not a special construct of Lisp, macros are the core of Lisp.
>
> So, it seems I need to rephrase the core idea maybe differently, one
> more time to clarify. I'll give it a try a later/soon.

Your idea sounds interesting, I'm trying to imagine what you mean to say
and I definitely hope to read more about it.

Cheers,

--
Thomas de Grivel
http://b.lowh.net/billitch

"I must plunge into the water of doubt again and again."

Hendrik Boom

Jul 29, 2011, 3:58:29 PM
to tunes-...@googlegroups.com
On Thu, Jul 28, 2011 at 12:54:49PM +0200, Thomas de Grivel wrote:
>
> Or "let's have all the compilers compile to a good internal
> representation", like Lisp or Shen when it's ready. Actually
> languages as first-class citizens do remind me of something which
> gives its power to Lisp : macros, and reader macros. In Lisp we
> could define a reader macro "C++" which would read and parse C++
> converting it into Lisp code. Now this is not a special construct of
> Lisp, macros are the core of Lisp.

This is the kind of thing the #lang line in Racket is good for. (Racket
is the new name for PLT Scheme; I don't know if this feature is part of
Scheme now, or just part of Racket.) You put a line like

#lang fooo

at the start of a file of Racket code, and it tells Racket that the file
is written in the language 'fooo'. It fetches the language definition
from wherever it keeps them, and uses it for the rest of the file.
Multi-module programs can easily be written in multiple languages.

Though I doubt the mechanism is up to the rather insecure semantics of
C++ -- probably a good thing.
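The dispatch mechanism described above can be caricatured in a few
lines; the two "languages" below are toy stand-ins for language
definitions, not Racket's actual readers or module system:

```python
# Toy #lang dispatch: the first line of a source file names the
# language, and the host looks up the matching reader for the rest.

def read_shouty(body):
    # A stand-in "language" whose semantics is upper-casing.
    return body.upper()

def read_reversed(body):
    # Another stand-in "language": reverse the text.
    return body[::-1]

LANGUAGES = {"shouty": read_shouty, "reversed": read_reversed}

def load(source):
    header, _, body = source.partition("\n")
    assert header.startswith("#lang ")
    lang = header[len("#lang "):].strip()
    # Fetch the language definition "from wherever it keeps them".
    return LANGUAGES[lang](body)

print(load("#lang shouty\nhello"))   # HELLO
print(load("#lang reversed\nabc"))   # cba
```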

-- hendrik

Cyril Jandia

Jul 31, 2011, 2:20:00 AM
to TUNES Project
Hi,

As a sign of gratitude to the TUNES group for showing some interest in
my ideas, which I think relate to TUNES and can benefit from/with it,
I'm posting here an elaboration of them that I haven't even posted yet
on my own pages.

It's also because I have made some progress lately on a sort of, say,
"abstract algebra", and the early rudiments of an (experimental!)
syntax, which I have needed in order to inform the design of a
language to implement them (those ideas); again, hopefully thanks to/
for the benefit of the TUNES project, as a novel OS.

On Jul 29, 7:25 am, Thomas de Grivel <billi...@gmail.com> wrote:
> [...]
> Your idea sounds interesting, I'm trying to imagine what you mean to say
> and I definitely hope to read more about it.

Thanks Thomas. Read on below. (I likely won't post messages as lengthy
as this one afterwards, no worries.)

> Cheers,
>
> --
> Thomas de Grivel
> http://b.lowh.net/billitch
>
> "I must plunge into the water of doubt again and again."

Please note these ideas don't come out of the blue. Like I said
before, I had the first of them two years ago already, and I've been
ruminating on them every other day since, but for lack of more time to
dedicate to them, I have constantly postponed expressing them online.

Also, I am not from academia, though I believe I have kept myself
informed enough about the most important results known today wrt
computability and formal systems: Turing's, Church's, Rosser's, etc.

As a reminder, I am just your "average" 15+-year programming
practitioner who has had to deal, like most (if not all) of you, with
programming language implementation fragmentation, tooling support
fragmentation, platform fragmentation, etc., almost every day, in
order to code more or less interesting applications for his clients/
employers.

I have the weakness to believe that what follows is neither
"revolutionary" nor "the most stupid idea ever". Instead, I believe it
is rather pragmatic, will be simple enough once concretized, and worth
investigating further, though the high-level rationale may seem very
abstract at first.

Anyhow, I will lay out a rough sketch of the formalism I am back to
working on, prior to reusing it in the design of the language meant to
help support/implement the scheme I contemplate. Right after this:

**The (computer) language design traditional state of affairs**

I had the "startling" idea to look at the problem the other way round:
the ease of invention, composability, reuse, and scalability of a
computer language's design and implementation (or of an improvement
thereof) should be all that matters FIRST, thus before, second, what
one aims to replace or improve upon among the legacy ways of computing
things via this new language (or improvement).

AFAIK, this is far from the situation we can observe today.

As I see it, computer language designers have traditionally neglected
(or downright ignored) this point of view for decades, consciously or
not.

That said, I believe it's not so surprising, being a natural flaw of
human psychology. If/when we get interested in some computer language
design issue or goal, so as to integrate our personal ideas about it,
we tend to approach our effort like everything else we care about in
life that competes for our time with our intellectual investment:

while we don't completely ignore the work of our predecessors (unless
we're arrogant enough for that), we end up caring, at most, about
having the future implementations of our design integrate well enough
with theirs, but we care much less about having our design itself,
precisely, integrate as well with theirs.

Or, sort of: "my ideas are better and shall replace the old ones" (as
one may think secretly).

Thus, to rephrase my point: computer languages should be allowed to
keep breeding and composing with each other, and by doing so, to
shape the future useful type systems, instead of being shaped by the
legacy ones (or by the variants thereof we can imagine "as an excuse"
for our new language's inception).

Early, fundamental type system design choices lie at the heart of what
language designers contemplate for their language implementations,
before everything else.

Even one who thinks he/she is only playing around with a new syntax or
new syntactic sugar at first is actually making conscious or
unconscious assumptions about what kind of legacy type system his/her
notational innovation is building upon.

Rephrased once more. As far as computer language design is concerned,
my intuition is that the effectiveness of computation performed via a
language in the process of being designed should benefit from asking
ourselves first:

WHAT will be the pre-existing, legacy forms of the computation input/
output that we'll want the computer to handle for us.

That is, instead of first asking ourselves HOW we will have it (the
computer) do so wrt these input/output forms, through some
unreasonably strong abstraction, made a priori, about the expected
purposes of that computation; i.e., the language's design intent, from
the point of view of the language's designer.

I believe it is actually, mostly, common sense for a language designer
to commit first to a focus on the expected input/output forms:

even though a language designer can easily enough make the needed
provisions for the legacy forms of computation input/output he/she
wants supported in the future via his/her language, he/she cannot as
easily ALSO predict what the future larger/largest computational
contexts will be within which his/her language will be used (more or
less relevantly).

Hence the extreme difficulty (if not the downright de facto
impossibility), IMHO, for the language designer to make "the right
bet" on the computation model(s), or programming paradigm(s), that
he/she wants the language's design to capture as a priority in its
implementations.

Sure, there are already several well-known, useful type systems and
programming paradigms out there, with their respective implementations
and tooling support, which, in one way or another, depending on the
class of computation problems they aim at, almost always end up
competing with each other.

Then, through the more or less constructive debates these bring around
the table, one can easily observe how they can leave your daily
programming practitioner either usefully better informed about their
best use, which is nice, or wasting his/her precious time, which is
less nice.

Now, a recurring issue computer language designers keep facing is the
accidental dilemma they encounter in having to choose between this or
that language feature set that best supports some oft-needed
combination of these type systems and/or programming
paradigms/schemes.

This ambient language design issue of prioritizing what we wish to see
supported first and foremost in computer languages at run-time (i.e.,
the type system's semantic details) is, I believe, slowly but surely
leading us to "forget" what probably matters most, in the end, to the
humans we are:

all we really care about in getting computers to do this or that for
us, or on our behalf, is the shape of the transient or persistent,
intermediary or definitive artifacts that result from computations,
more than the knowledge of the full rewriting story that has happened
in our computers' volatile memory or persistent storage media.

Of course, I am NOT saying that the full, definitive type system
semantics doesn't need to be known or accurately defined somewhere. Of
course it will have to be known and under (ideally, verified) control
in implementations in the end; even better: provably correct wrt its
higher order specification.

What I am only trying to say, which I reckon is likely very new, very
"out of the box" relative to the usual thinking, is: the overall
language design should not be shaped out of computational semantics
considerations first, only, or foremost.

That said, even though we still lack stronger evidence (at best,
sometimes, only suspecting we have some recent "clues" about this[*]),
some have thus already expressed their (more or less strong) doubts or
concerns about a type-system-centric language design approach.

So, this is here only the humble contribution of a programming
practitioner towards a hopefully possible improvement in this state of
affairs in computer language design and implementation:

1. This is about valuing the diversity of computer language design and
implementation invention first, only to ease, as a side benefit
afterwards, the inception of new useful type systems and computation
schemes (thus, from a notation-centric point of view first).
2. This is about valuing the shaping of a computer language's type
system after the form of the expected input/output artifacts and the
sort of transformations that the language's implementations intend to
support.
3. This is about valuing abstraction composability in computer
language design, thanks to (2), instead of engaging in a would-be
(believed unrealistic) abstraction unification effort.
4. This is about valuing a loosely-coupled liaison between computer
language tools, by looking at, and referring to, their languages as
first-class citizens, upfront, instead of gluing the tools together in
the current, tightly-coupled approach (careless of/unaware of
first-class citizen languages).
5. This is about valuing the reuse and scalability of computer
language implementations, thanks to (4).

Y# - Ontology

A computer language (programming language or modeling notation) has an
ambient, implicit or explicit "design intent", as devised by its
designer(s).

This is what the language's designer has been thinking his/her
language would be best used for.

We aim to provide a scheme where the platform (the underlying OS,
and/or VM, and/or chains of authorities found on the WWW) can make the
expression of a language's design intent, in whole or in part,
explicit and available to tools/programs (as they process a phrase of
this language as primary input and/or along with others).

The language design intent is a context-free generalization of the
language's (phrases') "purposes".

Or, equivalently: the language design intent is the time-insensitive,
space-sensitive abstract set of phrase purposes for a language's
realization, as processed by one or more tools/programs;
i.e., the purposes of the concrete phrase instances (transient or
persistent) of this language, as they are being processed.

Thus, a time-and-space-sensitive language phrase purpose is the
projection of a part of the language's design intent, expressed in
that phrase of the language, onto the run-time execution context of
the tool/program that processes it.

Y# - Supporting this ontology / Abstract algebra (rough sketch only,
at this point) of computation from a first-class citizen languages PoV

Tombstone (aka T-diagram) notation, flattened; example: implementing a
C# compiler (where "<T>" is the abstract algebra operator symbol).

Assuming someone has written a C# compiler in C, and we have a C
compiler available on the x86 architecture, as well as an x86-based
machine:

(C_# C CIL) <T> (C x86 x86) = (C_# x86 CIL)

Notes:
i. here, C, CIL, and x86 are considered "legacy"; we could for
instance assume they are known/registered at some authority (the
so-called "[REGARD]") that our platform/OS has access to (either as a
local cache or online)
ii. as a language identity, C_# may (or may not, yet) be known to an
authority
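As a minimal sketch only (the `T` and `compose` names are illustrative here, not any proposed Y# syntax), the flattened T-diagram composition above can be modeled as a partial operation on (source, implementation, target) triples:

```python
from typing import NamedTuple

class T(NamedTuple):
    """A flattened tombstone: a translator from src to tgt, written in impl."""
    src: str
    impl: str
    tgt: str

def compose(a, b):
    """(S A T') <T> (A M M) = (S M T'): feed translator a, written in A,
    through translator b, which consumes A and produces runnable M."""
    if a.impl != b.src:
        raise ValueError("cannot apply %s to %s" % (b, a))
    return T(a.src, b.tgt, a.tgt)

# The example above: a C# compiler written in C, run through a C
# compiler native on x86, yields a C# compiler running on x86.
csharp_on_x86 = compose(T("C#", "C", "CIL"), T("C", "x86", "x86"))
```

Note how the operation is partial: `compose` is only defined when the second tombstone consumes exactly the implementation language of the first.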

What we need to represent further:

(C_# C CIL) and (C x86 x86) can actually, most likely, be resources
serialized somewhere: we can associate a URI with each of them
(possibly even a URL).

URI assignment feasibility is less clear in the case of (C_# x86 CIL),
which is actually the expected result of our compilation here, maybe
still ongoing.

Q: who are we? From which point of view are we looking at this
reification of a compilation process?

Foundation idea 1, for a given "source" language S:

Design intent(S) = DI(S) = { (I S O) with (I, O) in REGARD x REGARD }

(i.e., the design intent of S is the abstract set of all (I S O) where
S is the free variable/constant, and I and O are independently picked
from the REGARD, thus relative to S)

Shorthand defining syntax for DI(S):

DI(S) = (* S *)

Then, we come up with foundation idea 2:

(I S O) can be interpreted as denoting:

"the purpose of this phrase (I S O) of S", or further:

"(I S O) is a phrase of S useful to transform phrases of I into
phrases of O, under some context-bound conditions in the program
processing it (this phrase of S)"
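Foundation ideas 1 and 2 can be sketched in a few lines, with REGARD reduced to a plain local set of language identities (a deliberate simplification of the authority lookup described above); `design_intent` and `purpose` are illustrative names only:

```python
# REGARD reduced to a local set of registered language identities;
# the real scheme would query an authority (local cache or online).
REGARD = {"C", "CIL", "x86", "C#"}

def design_intent(s):
    """Foundation idea 1: DI(S) = (* S *), the abstract set of all
    phrases (I S O) with I and O picked independently from REGARD."""
    return {(i, s, o) for i in REGARD for o in REGARD}

def purpose(phrase):
    """Foundation idea 2: read (I S O) as the purpose of a phrase of S."""
    i, s, o = phrase
    return ("a phrase of %s useful to transform phrases of %s "
            "into phrases of %s" % (s, i, o))
```

So with the four identities registered above, DI(C#) would contain 16 phrases, among them (C C# CIL), the C-to-CIL compiler written in C# from the earlier example.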

More to come soon, with an imaginary syntactic application sample in
the context of a C# fragment interoperating with the platform to query
about a foreign-language phrase embedded in it...

Thanks for reading.

CJJ


[T-diagram notation]
http://en.wikipedia.org/wiki/Tombstone_diagram

[*]
Types Considered Harmful, Benjamin C. Pierce
http://www.cis.upenn.edu/~bcpierce/papers/harmful-mfps.pdf

[Design Intent/purpose]
http://www.ysharp.net/the.language/rationale/#idea

[REGARD]
http://www.ysharp.net/the.language/design/?p=93

Alex Gryzlov

unread,
Aug 1, 2011, 6:44:41 AM8/1/11
to TUNES Project
Hi Cyril,

Would you say this paper on higher-order contracts:

http://www.cs.uchicago.edu/files/tr_authentic/TR-2006-10.fdf

is related to your idea?

Alex

Cyril Jandia

unread,
Aug 1, 2011, 2:05:51 PM8/1/11
to TUNES Project
Hi Alex,

On Aug 1, 3:44 am, Alex Gryzlov <skyrod...@gmail.com> wrote:
> Hi Cyril,
>
> Would you say if this paper on higher-order contracts:
>
> http://www.cs.uchicago.edu/files/tr_authentic/TR-2006-10.fdf

Thanks for the link: yes, I think I remember reading this paper, among
others.

> is related to your idea?

Yes, this is somewhat close to my idea, I would say from a
higher-order algebraic point of view, but it's also quite different in
at least two ways:

1. They provide a precise denotational semantics for their scheme of
describing language embeddings in a type-sound, type-safe manner,
while I don't.
I find their work very interesting per se, and as one can see, they
thus intend to provide a precise semantics of what will occur at
run-time once these embeddings have been handled through their
framework.
But the first-class citizen being manipulated there is still the same
"as usual": the environment terms of the denotational semantics thus
described come from a typed lambda calculus over a common type system,
somewhat the "lingua franca" for both languages participating in the
embedding.

2. Whereas I'd like to be one step up: ideally, I would like NOT to
have to provide any denotational semantics, because I don't intend to
describe computationally precise language embeddings at the typed
lambda calculus level. Instead, in my scheme, I want to leave most of
the semantic decision power over what a phrase of a language embedded
into another means at run-time to these three actors:
* the language designer, via the design intent he makes formally
available somewhere
* the underlying platform and the REGARD part of my scheme for
accessing the preceding
* the language embedding user, whom I want to remain free, during the
expression of the embedding, to query the latter or not

Of course, I expect to bump at some point into a type-safety decision
for a REGARD/Y# user to make in full awareness, when some of the forms
of language embedding I plan to support involve type expressions or
type names. But my aim is also to delay/defer those as much as
possible and to enable the REGARD/Y# (and embedding) user to see these
typing issues, in context, as just a special case of querying the
platform about the reified purposes of the syntactic forms involved:
in that specific case, these happen to be reifications from either
language's type system, for the two languages that interop in the
embedding.

Actually, the previous work that is seemingly unrelated but actually
quite similar to the scheme I'm thinking of (though with very
different end-use objectives otherwise), and which I think is the
closest to the degree of liberty I intend to provide on both the
language design intent side and the language phrase purpose side, is
NVDL:
http://nvdl.org/

> Alex

CJJ

Cyril Jandia

unread,
Aug 1, 2011, 9:53:18 PM8/1/11
to TUNES Project
(Update on this)

On Jul 29, 12:58 pm, Hendrik Boom <hend...@topoi.pooq.com> wrote:
> [...]
>
> This is the kind of thing the #lang line in Racket is good for.  (Racket
> is the new name for PLT Scheme;  I don't know if this feature is part of
> Scheme now, or just part of Racket).  You put a line like
>
> #lang fooo
>
> at the start of a file of Racket code, and it tells Racket that the file
> is written in the language 'fooo'  It fetches the langauge definition
> from wherever it keeps them, and uses it for the rest of the file.  
> Multimodule programs can easily be written in multiple languages.
> [...]

(Found earlier today.) I knew I couldn't possibly have had this line
of thinking alone. I'm very glad to see that others, from respected
academia, have recently (2010, it seems) already achieved great
progress in the same direction I've been contemplating (though, still,
from a slightly different angle, as I hope to post more about as soon
as I can).

So, after getting back (once more) to applying my google-fu to dig
further and try to find previous work similar to my ideas, I've
finally stumbled upon this impressive research work I had been
overlooking so far:

http://scg.unibe.ch/archive/phd/renggli-phd.pdf

Quoting (p. 127):

"9.1 Contributions of the Dissertation
We set out to address the shortcomings of existing approaches to
language embedding. We argued that **an explicit first-class model for
language extensions is needed** to support context-dependent embedded
languages that do not break existing tools[...]"

(**emphasis mine)

Also, a nice, quick-to-browse presentation that makes their point
right away can be found here:

http://www.slideshare.net/renggli/dynamic-language-embedding-with-homogeneous-tool-support

I find their work impressive, especially after reading the bunch of
examples they give in Appendix B, implementing different sorts of
language embeddings via their Helvetia framework, thanks to the
powerful concept of "language boxes" they had to introduce first-off
as a modeling construct to represent the host language <-> embedded
language interlinguistic relation. Unsurprisingly, they seem to have
come up with an implementation that takes advantage of the most recent
(and powerful) mechanisms known to date for manipulating grammars;
notably, scannerless parsers, PEGs, and parser combinators.
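To illustrate the flavor of the idea only (a toy Python sketch, nothing like Helvetia's actual Smalltalk API), a "language box" can be thought of as a parser that hands a delimited region of the host text over to the embedded language's own grammar:

```python
# Toy "language box": a parser that delegates a delimited region of the
# host text to an embedded language's parser. The [| |] delimiters are
# made up for the example.
def language_box(open_tok, close_tok, embedded_parse):
    def parse(text, pos):
        if not text.startswith(open_tok, pos):
            return None  # not a box here; host grammar keeps going
        start = pos + len(open_tok)
        end = text.find(close_tok, start)
        if end < 0:
            return None  # unterminated box
        # hand the enclosed region to the embedded language's grammar
        return embedded_parse(text[start:end]), end + len(close_tok)
    return parse

# Embedded "language": comma-separated integers.
csv_box = language_box("[|", "|]", lambda s: [int(x) for x in s.split(",")])
value, resume_at = csv_box("host code [|1,2,3|] more host code", 10)
```

The essential property is that the host grammar never interprets the boxed text itself; it only knows where the box begins, where it ends, and which grammar to delegate to.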

> -- hendrik
CJJ

Cyril Jandia

unread,
Aug 3, 2011, 2:17:47 PM8/3/11
to TUNES Project
Hi,

On Jul 29, 7:25 am, Thomas de Grivel <billi...@gmail.com> wrote:
> On 07/28/11 at 21:10, Cyril Jandia wrote:
> [...]
> > So, it seems I need to rephrase the core idea maybe differently, one
> > more time to clarify. I'll give it a try a later/soon.
>
> Your idea sounds interesting, I'm trying to imagine what you mean to say
> and I definitely hope to read more about it.

Finally I could put this together for a first-shot attempt at it
(after 2+ years of more foundational reflections, made in almost total
abstraction from what the actual concrete syntax might eventually look
like... but one cannot stay stuck forever within the domains of
abstract syntax and models only, for sure):

http://www.reified.info/~/

Warning: still pretty "rough", though. Nothing definitive, of course,
but it should hopefully provide hints about the kind of semantic and
syntactic interoperations I envision as useful for having more
structurally manageable first-class citizen languages, allowed to
breed and compose beyond the strict original design intent of their
inventors. "Manageable", that is, via cooperation between
implementations, the underlying platform and, to some arbitrary extent
also, the related web-based infrastructure. Again, all this assuming
one is willing to look at languages from this "new" perspective.

'Hth,

> Cheers,
>
> --
> Thomas de Grivel
> http://b.lowh.net/billitch
>
> "I must plunge into the water of doubt again and again."

CJ

Hendrik Boom

unread,
Jan 31, 2019, 8:31:03 AM1/31/19
to TUNES Project
You may find the #lang feature of Racket to be a step in the direction of first-class languages.

-- hendrik

Hendrik Boom

unread,
Jan 31, 2019, 10:22:23 AM1/31/19
to TUNES Project
On Thu, Jan 31, 2019 at 05:31:03AM -0800, Hendrik Boom wrote:
> You may find the #lang feature of Racket to be a step in the direction of first-class languages.
>
> -- hendrik
>

Sorry. I was replying on my phone and didn't notice I had already
pointed this out years ago in the same thread!

-- hendrik
