ANNOUNCE: Lexicons 2.0 (beta) released

Ron Garret

Dec 5, 2008, 11:21:00 AM
I am happy to announce a new version of my lexicons code. This version
fixes a rather serious bug in the previous version, and adds new
features, most notably semi-hygienic lexical macros and lexical classes
and methods. The implementation is also significantly cleaned up (it
consists of a single file of portable ANSI Common Lisp) and is more
seamlessly integrated into Common Lisp (lexicons are now implemented
using packages).

Code is here:

http://www.flownet.com/ron/lisp/lexicons.lisp

Documentation is here:

http://www.flownet.com/ron/lisp/lexicons.pdf

Lexicons are first-class lexical environments for Common Lisp, what in
other languages would be called modules or namespaces. They are
designed to be an adjunct, or potentially a replacement (depending on
your needs) for packages. Packages map strings onto symbols at
read-time. Lexicons map symbols onto bindings at compile-time. For
writing modular code, lexicons can be significantly more convenient than
packages. In particular, lexicons are unaffected by read-time
side-effects, and there is no need to manually maintain lists of
exported symbols.
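
A minimal usage sketch (the names here are just illustrative; see the
documentation for the full API):

    (make-lexicon :my-lib)
    (in-lexicon :my-lib)

    (ldefun greet (name)              ; lexical counterpart of DEFUN;
      (format nil "Hello, ~A" name))  ; nothing to export by hand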

This version has been tested only on Clozure Common Lisp, but it is
written in portable ANSI Common Lisp so it should run under any
conforming implementation.

Comments and feedback are welcome.

rg

Kaz Kylheku

Dec 5, 2008, 2:03:58 PM
On 2008-12-05, Ron Garret <rNOS...@flownet.com> wrote:
> http://www.flownet.com/ron/lisp/lexicons.lisp

Not a proper DOS or Unix text file. Lines separated by carriage returns only,
no linefeeds.

Ron Garret

Dec 5, 2008, 2:35:49 PM
In article <200812201...@gmail.com>,
Kaz Kylheku <kkyl...@gmail.com> wrote:

Sorry, that's the (old) Mac end-of-line convention. (I was editing the
file using MCL.) I've uploaded a new version that uses linefeeds
instead.

rg

budden

Dec 5, 2008, 3:31:09 PM
Hi!
I cannot pay much attention to lexicons now. I think they
introduce too many new constructs to Lisp, and that's why I think I
won't use them. I too tried to solve package problems. I developed a
rather minimalistic approach. This approach produced some dispute
here. It is time to give a link here: http://norvig.com/python-lisp.html
Peter Norvig says that the package system is hard to use and that it is a
disadvantage of CL. I agree completely with this.
My solution is to add a macro
(in-packages package &rest more-packages) which acts as follows:
- when this macro is in effect, the reader's behaviour changes: an
unqualified symbol is looked up in all the packages listed. It can be
found as an internal symbol in the first package, and as an external
symbol only in the rest of the packages. If it is found exactly once, it
is used. If it is found more than once, an error is signalled. If it is
not found, it is interned into the first package. Maybe (I think) one
more rule is needed: if the symbol is on the shadowing-symbols list of
the first package, it can be used without a qualifier even if it is
external in some of the other packages. (A rough CL sketch of this
lookup rule follows below.)
- you may use qualified symbols to specify exactly what you mean.
- optionally, package locks are imposed on some packages (syntax not
shown).
Additionally, a read macro of the form
#with-packages (package &rest packages) form
is added so that the next form is read in the scope of the
(with-packages) form.
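
A rough sketch of the lookup rule in plain CL (the function name and the
exact error behaviour here are mine, just for illustration):

(defun resolve-symbol (name packages)
  ;; NAME is a string; PACKAGES is the list given to IN-PACKAGES.
  (let* ((first-pkg (first packages))
         (candidates
           (remove-duplicates
            (remove nil
                    (cons (find-symbol name first-pkg)  ; internal or external here
                          (mapcar (lambda (pkg)         ; external only elsewhere
                                    (multiple-value-bind (sym status)
                                        (find-symbol name pkg)
                                      (when (eq status :external) sym)))
                                  (rest packages)))))))
    (cond ((null candidates) (intern name first-pkg))
          ((null (rest candidates)) (first candidates))
          (t (error "~A is ambiguous: found in more than one package." name)))))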

This approach allows minimizing both import and use-package operations,
and it is relatively safe. It does not remove the need to export some
symbols manually when you want, say, to have a custom + or - in your
package. The idea is borrowed from SQL syntax:

select t1.id, t2.id, t1.unique_field_of_t1 from t1, t2 where
t1.id=t2.t1_id

which has proved in my practice to be fairly reliable and convenient.
I mention it just in case you didn't read the topic with the keywords
"symbol clashes - how to avoid them".

For this to work we need either to change the implementation's
"find-package" function (which is not portable), or to use a portable
Lisp reader written in CL (it can be done rather easily based on some of
the open-source implementations; there is such a reader already, written
by Pascal J. Bourguignon, but it is under the GPL and it has bugs). I
managed to substitute another Lisp reader for the implementation's
reader completely, for all activity including the compiler. So, for now
all that is left to do is to take some open-source reader and replace
the implementation-specific reading primitives with CL ones. I don't
know when I'll get time for this.

Ron Garret

Dec 5, 2008, 3:53:23 PM
In article
<54b8cf3e-3bf9-47e4...@j35g2000yqh.googlegroups.com>,
budden <budd...@gmail.com> wrote:

> Hi!
> I cannot pay much attention to lexicons now. I think they
> introduce too many new constructs to Lisp, and that's why I think I
> won't use them.

I don't know what you consider a "new" construct, but lexicons only
introduce lexical equivalents of existing CL defining forms (defun,
defvar, defmacro, defclass and defmethod). The lexical equivalent of
each of these has the same name but preceded by an L (ldefun, ldefvar,
ldefmacro, ldefclass and ldefmethod). One could easily shadow the
existing defining forms so that programming with lexicons is no
different from programming in regular CL, but I wasn't going to do this
until I had time to do some really thorough testing.
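
For example, a rough sketch (with a lexicon made current via MAKE-LEXICON
and IN-LEXICON; the names here are illustrative):

(ldefvar *limit* 100)             ; same syntax as DEFVAR
(ldefclass point () (x y))        ; same syntax as DEFCLASS
(ldefun clamp (n) (min n 100))    ; same syntax as DEFUN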

> I too tried to solve package problems. I developed a
> rather minimalistic approach. This approach produced some dispute
> here. It is time to give a link here: http://norvig.com/python-lisp.html
> Peter Norvig says that the package system is hard to use and that it is a
> disadvantage of CL. I agree completely with this.

Which is in fact the primary motivation for lexicons.

> My solution is to add macro
> (in-packages package &rest more-packages) which acts as following:

[snip]

IMHO, the central problem with packages is that they try to resolve
namespaces at read time, when this ought to be done at compile-time.
The appropriate information for resolving namespaces is simply not
available at read-time. So the problems with packages cannot be solved
by changing the reader.

Note that this issue is in some sense the converse of the one
investigated by Kent Pitman a long time ago:

http://www.nhplace.com/kent/PS/Ambitious.html

rg

budden

Dec 5, 2008, 4:05:16 PM
Well, maybe I'll take a look at it later, but it is too much.
Some packages overwrite defun too. One of them is ap5, which is very
cool. The other is Screamer, which is cool too. I'm afraid there will be
problems...
And, say, I want to make a wrapper for defclass with standard names of
accessors, initargs, etc.
If I accept lexicons, I need to redefine ldefclass too.

Anyway, I think I'll take a closer look at them in a short while, just to
find out what ideas are there.

Ron Garret

Dec 5, 2008, 4:19:05 PM
In article
<cbe188e9-a2d4-4db3...@l42g2000yqe.googlegroups.com>,
budden <budd...@gmail.com> wrote:

> Well, maybe I'll take a look at it later, but it is too much.
> Some packages overwrite defun too. One of them is ap5, which is very
> cool. The other is Screamer, which is cool too. I'm afraid there will be
> problems...

It depends on what the shadowing forms in these other packages expand
into. If, for example, AP5:DEFUN expands into CL:DEFUN then you can
just muck with the package system to have it expand into LEXICONS:DEFUN
instead. It should Just Work. But this is the sort of thing I want to
actually try before unleashing it on the world.

> And, say, I want to make a wrapper for defclass with standard names of
> accessors, initargs, etc.
> If I accept lexicons, I need to redefine ldefclass too.

Nope, you'll just need to change your wrapper to expand into ldefclass
instead of defclass. All the LDEF* forms have the exact same syntax as
their corresponding non-lexical forms. Alternatively you can shadow
DEF* and just run your (unchanged) wrapper definition in the package
with the shadowed symbols.
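
E.g., something along these lines (the wrapper itself and its conventions
are hypothetical):

(defmacro my-defclass (name supers slot-names &rest options)
  `(ldefclass ,name ,supers
              ,(mapcar (lambda (s)
                         `(,s :accessor ,s
                              :initarg ,(intern (symbol-name s) :keyword)))
                       slot-names)
              ,@options))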

The ultimate goal is to be able to take existing code, remove (nearly)
all the package-related stuff, move everything into the (not yet
implemented) LEXICONS package, and have it all Just Work. This may be a
pipe dream, but the fact that there is now a very close correspondence
between lexicons and packages makes me optimistic.

rg

David Golden

Dec 5, 2008, 4:30:41 PM
Ron Garret wrote:


> The implementation is also significantly cleaned up (it
> consists of a single file of portable ANSI Common Lisp) and is more
> seamlessly integrated into Common Lisp (lexicons are now implemented
> using packages).
>

I'm not sure I like this, at least not the way a lexicon named "l1" now
seems to "use up" a package named l1. Haven't really played with it
enough yet to judge if it's all that much of an issue, but maybe
packages used for the implementation should be called %lex-l1 or
something.

> They are
> designed to be an adjunct, or potentially a replacement (depending on
> your needs) for packages. Packages map strings onto symbols at
> read-time. Lexicons map symbols onto bindings at compile-time. For
> writing modular code, lexicons can be significantly more convenient
> than packages.

Yeah, I'm one of the people who uses packages for stuff other than
writing modular code - they're independently useful for sorting
symbols, so it'd be nice if both lexicons and packages would work with
minimal interference with each other...

Kenny

Dec 5, 2008, 4:33:12 PM
budden wrote:
> Hi!
> I cannot pay much attention to lexicons now. I think they
> introduce too many new constructs to Lisp, and that's why I think I
> won't use them. I too tried to solve package problems. I developed a
> rather minimalistic approach. This approach produced some dispute
> here. It is time to give a link here: http://norvig.com/python-lisp.html
> Peter Norvig says that the package system is hard to use and that it is a
> disadvantage of CL. I agree completely with this.

Well then I do believe it is time to go shopping.

Only there is nothing hard about the package system unless your cup is
full and you want it to be like some other thing but everything is like
that.

Good lord, don't you have any application programming to do?

kxo

Ron Garret

Dec 5, 2008, 6:07:35 PM
In article <ghc6i2$v5d$1...@aioe.org>,
David Golden <david....@oceanfree.net> wrote:

> Ron Garret wrote:
>
>
> > The implementation is also significantly cleaned up (it
> > consists of a single file of portable ANSI Common Lisp) and is more
> > seamlessly integrated into Common Lisp (lexicons are now implemented
> > using packages).
> >
>
> I'm not sure I like this, at least not the way a lexicon named "l1" now
> seems to "use up" a package named l1. Haven't really played with it
> enough yet to judge if it's all that much of an issue, but maybe
> packages used for the implementation should be called %lex-l1 or
> something.

That would be very easy to change.

> > They are
> > designed to be an adjunct, or potentially a replacement (depending on
> > your needs) for packages. Packages map strings onto symbols at
> > read-time. Lexicons map symbols onto bindings at compile-time. For
> > writing modular code, lexicons can be significantly more convenient
> > than packages.
>
> Yeah, I'm one of the people who uses packages for stuff other than
> writing modular code - they're independently useful for sorting
> symbols, so it'd be nice if both lexicons and packages would work with
> minimal interference with each other...

Lexicons have always been orthogonal to (and therefore compatible with)
packages from the beginning. It's just that now packages are used in
the implementation whereas before they weren't.

Out of curiosity, what kind of "symbol sorting" are you using packages
for?

rg

David Golden

Dec 5, 2008, 8:57:07 PM
Ron Garret wrote:


> That would be very easy to change.
>

I assumed as much, but you did ask for feedback... :-)



> Out of curiosity, what kind of "symbol sorting" are you using packages
> for?
>

If you're actually programming with symbols as pseudo-objects
(hey, they are first-class in lisp), then it's just handy to be able to
stash them into separate packages so that ones with the same name have
separate identities but are easy to refer to.

You have symbols with the same symbol-name. You want them to be
different objects still, and you don't want the emacs lisp "solution"
(um, make them have different names...). I mean, not much more to it.
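
For example (a trivial sketch):

(defpackage :a) (defpackage :b)
(eq (intern "X" :a) (intern "X" :b))   ; => NIL: same name, two distinct symbols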

Personally, I've used it for (unreleased and half-assed) glsl and c
translators usable in one image, in combination, that used e.g.
glsl:main and c:main. Probably more stuff.

Mostly convenience for humans writing/reading code, not for the
computer - nothing you _couldn't_ do with a different implementation
strategy, without packages (or even first-class symbols), just a naming
convention or a bunch of hashtables you look up different things in, in
different contexts. But you wind up implementing half a package system
anyway, shrug. OTOH, people wouldn't go around calling your
(non-performance-critical, simple, short, works) code "old
fashioned"...

Leslie P. Polzer

Dec 6, 2008, 12:49:33 PM
I have a minor gripe with the function MAKE-LEXICON:

It modifies some global object (i.e. the list of lexicons).

This is not consistent with the usual semantics of MAKE-* functions
and should probably be called DEFLEXICON instead.

Leslie

Ron Garret

Dec 6, 2008, 5:04:31 PM
In article
<c92308a2-7b39-4f65...@y1g2000pra.googlegroups.com>,

I agree with you that this is a bit of a wart in the design, but there's
a reason for it. The design of the lexicons API was intended to mirror
the API for packages as much as possible while still achieving the
design goals. Accordingly, MAKE-LEXICON is intended to be the
counterpart to MAKE-PACKAGE, which also modifies a global. DEFLEXICON,
if it were to exist, would be the counterpart to DEFPACKAGE. But the
whole point of lexicons is to get rid of all the complexity that makes
DEFPACKAGE necessary, so DEFLEXICON would in some sense be an oxymoron.

In an ideal world, MAKE-LEXICON would not modify a global. It would
also allow you to make anonymous lexicons, and multiple lexicons with
the same name. But it is not possible to make anonymous packages, nor
is it possible to make multiple packages with the same name, since both
of these would undermine the whole point of having packages. (Note that
these features would not necessarily undermine the utility of lexicons,
which is IMHO another indication that lexicons are the Right Way to
achieve modularity.)

I did it this way because I thought that it would make package fans feel
warmer and fuzzier about lexicons to know that the binding of a symbol S
in lexicon L was invariably the symbol L::S. It would be easy enough to
abandon this invariant and just use uninterned symbols for the bindings
instead. But personally, although I am no fan of packages, the current
design has kind of grown on me. It's far from clear that enforcing the
global uniqueness of lexicon names is a bad idea, even if it may not be
logically necessary.

rg

David Golden

Dec 6, 2008, 11:15:48 PM
Ron Garret wrote:


> I did it this way because I thought that it would make package fans
> feel warmer and fuzzier about lexicons to know that the binding of a
> symbol S
> in lexicon L was invariably the symbol L::S.

Hmm. I guess that does sound handy. Perhaps my clash complaint is
adequately addressed by the user just adopting some explicit naming
convention for lexicons, rather than lexicon code doing any
name-munging. i.e. a lexicon named =lyekka= with an implementation
package named =lyekka= isn't going to clash with a package named
lyekka.

Slobodan Blazeski

Dec 7, 2008, 7:30:05 AM
Can somebody please explain to me in simple words why CL packages are
hard to use, 'cause I really don't get it?
The best would probably be to hear from somebody who just learned Lisp or
somebody who teaches Lisp (do such people exist anymore?) whose
students have problems with packages.

bobi

Pascal J. Bourguignon

Dec 7, 2008, 8:34:15 AM
Slobodan Blazeski <slobodan...@gmail.com> writes:

> Can somebody please explain to me in simple words why CL packages are
> hard to use, 'cause I really don't get it?

Well, I'd say you're the best placed to explain with simple words why CL
packages are hard to use, because for us they are not hard to use.

But if you cannot explain it with simple words, you may also use
complicated words; we've got good dictionaries.


> The best is probably to hear from somebody who just learned lisp or
> somebody who teaches lisp (do such people exist anymore?) and his/her
> students has problems with packages.

I don't find CL packages hard to use. On the contrary, they're a very
simple concept, and at the same time they're very powerful.


Let's start without packages, and try to solve some simple practical
problems.

Assume we create two symbols with the same name, "ABC", and "bind" them
to different values:

[11]> (list (list (make-symbol "ABC") 1) (list (make-symbol "ABC") 2))
((#:ABC 1) (#:ABC 2))


Then we can find a symbol from a name; for example, when we read the
string "ABC" from the user, we want to find which symbol it corresponds
to, and retrieve its value:

[12]> (let ((syms (list (list (make-symbol "ABC") 1) (list (make-symbol "ABC") 2))))
(second (find "ABC" syms :key (function first) :test (function string=))))
1

But how could we find instead the other symbol named "ABC", with value
2? The solution proposed is to put the symbols in different
lists, named "packages", having different names. So now, we can find a
symbol named "ABC" in a current 'package' refered to by *pack*, or a
symbol named "ABC" qualified by a package name such as "YOUR::ABC".

[14]> (let ((packs (list (list "MINE" (list (make-symbol "ABC") 1))
                         (list "YOUR" (list (make-symbol "ABC") 2))))
            (*pack* "MINE"))
        (list (second (find "ABC" (rest (find *pack* packs
                                              :key (function first) :test (function string=)))
                            :key (function first) :test (function string=)))
              (second (find "ABC" (rest (find "YOUR" packs
                                              :key (function first) :test (function string=)))
                            :key (function first) :test (function string=)))))
(1 2)


CL:IMPORT puts a symbol on the list of symbols of a package.
CL:FIND-SYMBOL finds a symbol in the list of symbols of a package.
CL:FIND-PACKAGE finds a package in the list of packages.

CL:INTERN does both CL:FIND-SYMBOL, and if not found, CL:MAKE-SYMBOL and CL:IMPORT.
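
Roughly (a sketch, ignoring INTERN's second return value and symbols
inherited via USE-PACKAGE):

(defun my-intern (name package)
  (or (find-symbol name package)
      (let ((symbol (make-symbol name)))
        (import symbol package)
        symbol)))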


With CL:USE-PACKAGE, there's a little technicality, in that the symbols
from the used packages are not really "interned" or "imported" into the
using package, but it works most of the time exactly the same.

Each package also maintains a list of "exported" symbols, and this is
used only to signal an error when you try to read a symbol, qualified
with only one colon, that is not exported.

             SYMB is exported from PACK   SYMB is not exported from PACK
PACK:SYMB    ok                           error
PACK::SYMB   ok                           ok

--
__Pascal Bourguignon__

Kenny

Dec 7, 2008, 12:42:34 PM

Ron Garret

Dec 7, 2008, 12:50:07 PM
In article
<2c967dc5-a2f7-482d...@j11g2000yqg.googlegroups.com>,
Slobodan Blazeski <slobodan...@gmail.com> wrote:

Packages aren't "hard to use", they just do the Wrong Thing ;-)
Packages operate at read-time, but most programmers' intuitions about
modularity relate to global bindings, which is a compile-time concept.

The biggest practical problem with packages, and the central motivation
for lexicons, is that exported symbol lists have to be maintained
manually. This violates the DRY principle, and makes maintenance more
difficult than it needs to be, rather like the burden imposed by C on
the programmer to maintain the .c file and the .h files in parallel.
Lexicons let you write modular code without manually maintaining an
export list.
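
For example, with packages the same name has to be written down twice,
once where it is defined and once in the export list (the names here are
just placeholders):

(defpackage :my-lib
  (:use :cl)
  (:export #:greet))        ; has to be kept in sync by hand

(in-package :my-lib)

(defun greet (name)
  (format nil "Hello, ~A" name))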

A secondary problem with packages is that because they do what they do
at read time, merely parsing the program has potential side-effects
(interning symbols). If you attempt to parse a program in an
environment where your packages have not been set up properly, these
side effects can end up messing up your environment to the point where
recovery is very difficult because you suddenly have a zillion symbols
interned in a package where they should not be interned. Lexicons avoid
this problem because they operate at compile time, not read time. They
also have a deferred binding mechanism that allows recovery from
unresolved bindings caused by missing libraries to be done at run-time,
so if you don't have your libraries set up properly you can recover by
simply importing the right library. You don't have to recompile the
client code.

rg

Brian Adkins

Dec 7, 2008, 12:57:47 PM

Ron Garret

Dec 7, 2008, 1:08:31 PM
In article <8763lwr...@informatimago.com>,

p...@informatimago.com (Pascal J. Bourguignon) wrote:

> I don't find CL packages hard to use. On the contrary, they're a very
> simple concept, and at the same time they're very powerful.

That's true, but...

> Let's start without packages, and try to solve some simple practical
> problems.
>
> Assume we create two symbols with the same name, "ABC", and "bind" them
> to different values:
>
> [11]> (list (list (make-symbol "ABC") 1) (list (make-symbol "ABC") 2))
> ((#:ABC 1) (#:ABC 2))

With all due respect, this is not the kind of "practical problem" that
most programmers deal with on a day-to-day basis, particularly not when
they are starting out. Using MAKE-SYMBOL is a pretty advanced concept,
and using it to make multiple symbols with the same name is a pretty
esoteric technique. In fact, the whole concept of an uninterned symbol
is pretty advanced. Peter Seibel's book, for example, doesn't even
mention them until chapter 21, and then only in passing. MAKE-SYMBOL is
not mentioned at all.

The vast majority of symbols that beginning and work-a-day Lispers deal
with are interned by the reader, and most non-advanced Lisp programmers
think about modular programming in terms of exporting functions,
classes, macros, and methods, not symbols. If you doubt this, do a
Google search in c.l.l. for "How do I export a function?" and see how
many hits you get.

rg

Ron Garret

Dec 7, 2008, 1:10:57 PM
In article <m2d4g3d...@gmail.com>,
Brian Adkins <lojic...@gmail.com> wrote:

Ah what the hell.

http://www.flownet.com/ron/packages.pdf

rg

Pascal Costanza

Dec 7, 2008, 1:28:23 PM
Ron Garret wrote:
> In article
> <2c967dc5-a2f7-482d...@j11g2000yqg.googlegroups.com>,
> Slobodan Blazeski <slobodan...@gmail.com> wrote:
>
>> Can somebody please explain to me in simple words why CL packages are
>> hard to use, 'cause I really don't get it?
>> The best would probably be to hear from somebody who just learned Lisp or
>> somebody who teaches Lisp (do such people exist anymore?) whose
>> students have problems with packages.
>>
>> bobi
>
> Packages aren't "hard to use", they just do the Wrong Thing ;-)
> Packages operate at read-time, but most programmers' intuitions about
> modularity relate to global bindings, which is a compile-time concept.

And therein lies the answer: Only people who stick to their intuition
and try to use packages as if they were about bindings will find it hard
to use packages. You have to adjust your expectations, and then packages
become very easy. (Some people don't need this kind of adjustment,
because the view packages provide comes naturally to them.)

IMHO, packages, i.e. managing the mapping from strings to symbols, solve
the problem better than modules, i.e. managing the mapping from
identifiers to concepts. So I wouldn't agree that they do the wrong
thing, to the contrary, I think they do the right thing. But it's good
that you put a smiley there... ;)


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Ron Garret

Dec 7, 2008, 1:54:08 PM
In article <6q2iu8F...@mid.individual.net>,
Pascal Costanza <p...@p-cos.net> wrote:

> IMHO, packages, i.e. managing the mapping from strings to symbols, solve
> the problem better than modules, i.e. managing the mapping from
> identifiers to concepts.

Why?

rg

Pascal Costanza

Dec 7, 2008, 2:23:11 PM

Because they can make distinctions that you cannot make with module
systems. For example, if you had only a module system, a class would not
be able to inherit from two different classes with slots of the same
name and keep them as two different slots (unless you come up with funky
renaming mechanisms, like in the R6RS module system, for example).
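
For example (a sketch):

;; Two separately developed superclasses each define a slot named X.
;; Because A::X and B::X are different symbols, the subclass keeps them
;; as two distinct slots.
(defpackage :a (:use :cl))
(defpackage :b (:use :cl))

(defclass a::base () ((a::x :initform 1)))
(defclass b::base () ((b::x :initform 2)))
(defclass both (a::base b::base) ())

(list (slot-value (make-instance 'both) 'a::x)
      (slot-value (make-instance 'both) 'b::x))   ; => (1 2)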

With a CL-style package system, the choice is always unambiguous: If you
use the same symbol as somebody else, you definitely mean the same
concept. If you want to denote a different concept, you have to use a
different symbol. It's much harder to make such distinctions with module
systems.

[Maybe having both packages and modules, like your lexicons, in the same
system is beneficial in that it provides the functionalities of both
worlds - but I cannot tell because I haven't tried this.]

David Golden

Dec 7, 2008, 3:05:17 PM
Ron Garret wrote:


> I did it this way because I thought that it would make package fans
> feel warmer and fuzzier about lexicons to know that the binding of a
> symbol S
> in lexicon L was invariably the symbol L::S.


No, wait. This isn't good.

CL-USER> (defpackage :a)
#<PACKAGE "A">
CL-USER> (defpackage :b)
#<PACKAGE "B">
CL-USER> (make-lexicon :=l=)
#<Lexicon =L=>
CL-USER> (in-lexicon :=l=)
#<Lexicon =L=>
CL-USER> (ldefun a::x () "one")
=L=::X
CL-USER> (ldefun b::x () "two")
STYLE-WARNING: redefining =L=::X in DEFUN
=L=::X
CL-USER> (a::x)
"two"

a::x and b::x are different symbols supporting different bindings in the
absence of lexicons; IMO lexicons should just allow them to be lexical,
not merge them, if lexicons are truly on an axis orthogonal to packages.


Kaz Kylheku

Dec 7, 2008, 3:24:33 PM
On 2008-12-07, Pascal J. Bourguignon <p...@informatimago.com> wrote:
> Slobodan Blazeski <slobodan...@gmail.com> writes:
>
>> Can somebody please explain to me in simple words why CL packages are
>> hard to use, 'cause I really don't get it?
>
> Well, I'd say you're the best placed to explain with simple words why CL
> packages are hard to use, because for us they are not hard to use.
>
> But if you cannot explain it with simple words, you may also use
> complicated words; we've got good dictionaries.

I.e. we have lexicons to deal with hard-to-use packages of verbiage. :)

Kaz Kylheku

Dec 7, 2008, 3:54:41 PM

Because we have symbols as a first-class data type. So we need packages to work
everywhere, including in data:

(setf foo::y 42)

(quote foo::x)

Lisp packages have to work the way they do because, in general, you do not
know which symbols in a form are evaluated normally, which are evaluated
according to special rules, and which ones not at all.

Packaging symbols provides the right separation of concerns, for instance in OO
programming.

We can define a class in which the slots belong to different packages. We can
write methods that are named by symbols in different packages, yet specialize
on the same object.

This is very nice; given some existing class, I can extend that class in my own
module, using identifiers from my namespace.

(in-package :my-namespace)

(defclass my-class (other-namespace:base)
  ((my-slot :accessor my-slot ...)))

All of the slots of other-namespace:base are named using other-namespace
symbols, and so are the existing generic functions.

I can make my own slots and generic functions in my own namespace.

This does not cause a problem when we are referencing slots:

(slot-value instance 'my-slot)
(slot-value instance 'other-namespace:foo)

The alternative solution would be to make namespacing a part of slot lookup
(i.e. the mapping of the symbol to the concept). If symbols don't have
namespaces, you have to invent qualified identifiers to deal with this:

Suppose that you inherit from two base classes A and B that both define a slot
FOO. Compound identifiers would have to be introduced to solve the
disambiguation problem:

(slot-value instance '(a foo)) ;; the foo from base A

(slot-value instance '(b foo)) ;; the foo from base B

Also keep in mind that slot-value is a function, not a special operator, so it
cannot pull a namespace context from lexical scope. Suppose that in some scope
you want (slot-value instance 'foo) to behave like (slot-value
instance '(a foo)). You would then need a dynamically scoped mapping of
unqualified identifiers to qualified ones. It smells like big-time suckage!

Kaz Kylheku

Dec 7, 2008, 4:23:26 PM
On 2008-12-07, Ron Garret <rNOS...@flownet.com> wrote:
> In article
><2c967dc5-a2f7-482d...@j11g2000yqg.googlegroups.com>,
> Slobodan Blazeski <slobodan...@gmail.com> wrote:
>
>> Can somebody please explain to me in simple words why CL packages are
>> hard to use, 'cause I really don't get it?
>> The best would probably be to hear from somebody who just learned Lisp or
>> somebody who teaches Lisp (do such people exist anymore?) whose
>> students have problems with packages.
>>
>> bobi
>
> Packages aren't "hard to use", they just do the Wrong Thing ;-)
> Packages operate at read-time, but most programmers' intuitions about
> modularity relate to global bindings, which is a compile-time concept.
>
> The biggest practical problem with packages, and the central motivation
> for lexicons, is that exported symbol lists have to be maintained
> manually.

Exporting symbols is braindamaged. The solution is: don't manage exported lists.

What is the purpose of exporting symbols?

One purpose is documentary: by exporting certain symbols, you are asserting
that they are your public interface.

But exported symbols are not just documentary. What does the machine actually
let us do with exported symbols?

It lets us apply USE-PACKAGE, (or the :USE clause in DEFPACKAGE). This is
called package inheritance: all of the exported symbols of another package
become visible in this one.

USE-PACKAGE is a braindamaged concept borrowed from other languages.

The proper engineering solution is to maintain explicit import lists: pick
every single identifier from a foreign package that you wish to use locally as
an unqualified name, and explicitly install it into your current scope.
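
(In standard CL that's the :IMPORT-FROM clause of DEFPACKAGE; the package
and symbol names below are just placeholders:)

(defpackage :my-module
  (:use :cl)
  (:import-from :foreign-lib #:frobnicate #:twiddle))  ; explicit, per-symbol imports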

Can you come up with a hack that lets me choose exactly which identifiers I
pull in from some namespace (regardless of what that namespace says it
exports), without having to maintain import lists?

That would be useful. But I don't see how; the problem implicitly calls for
list maintenance. I mean, you have to write down the symbols that you want.
That's a list.

Now as far as export lists go, the difficulty of maintaining them isn't due to
any fundamental property of Lisp packages. The automation of export lists could
be added to Lisp packages without changing the design.

Like with import lists, the programmer ultimately has to indicate to the
machine that he wants certain symbols exported. And that constitutes a list!
Your complaint is that you have to write down this list in some place (like
your DEFPACKAGE) in addition to all the other definitions elsewhere that assign
meaning to those symbols (like DEFMETHOD-s, DEFUN-s, DEFCLASS-es, etc).

One way to solve that might be to have some kind of special state in a package
so that you could do this:

;; top level
(in-package :foo)

(public)

(defun foo ...)

(private)

(defun bar ...)

PUBLIC and PRIVATE would set and reset some flag, and defining constructs could
use it to export or not export the name which they are defining.
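
A rough sketch of that idea (all names and details here are hypothetical):

(defvar *export-definitions* nil)

(defmacro public ()
  '(eval-when (:compile-toplevel :load-toplevel :execute)
     (setf *export-definitions* t)))

(defmacro private ()
  '(eval-when (:compile-toplevel :load-toplevel :execute)
     (setf *export-definitions* nil)))

;; A defining construct that consults the flag at macroexpansion time:
(defmacro defun* (name lambda-list &body body)
  `(progn
     ,@(when *export-definitions* `((export ',name)))
     (defun ,name ,lambda-list ,@body)))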

The problem with this is that a package definition is often put into a separate
file which is loaded by everyone. That file has to do the complete job of
establishing what is exported and what isn't. You cannot leave it to a bunch of
load-time side effects.

With the above mechanism, if module A depends on B, B has to be loaded first,
so that all of these effects in its toplevel forms are elaborated before
anything is done with A.

In other words, maintaining export lists is simply a consequence of applying a
dependency inversion to decouple module dependencies.

> This violates the DRY principle, and makes maintenance more
> difficult than it needs to be, rather like the burden imposed by C on
> the programmer to maintain the .c file and the .h files in parallel.

If you do not maintain an interface and implementation separately (either as
separate files or as sections of the same file, as is the case with some
languages), it means that before processing module B that depends on A, the
language implementation has to process all of A, so that it can pull out the
interface information that is implicitly distributed throughout A.

C header files, crude as the mechanism is, do allow us to compile some foo.c
that uses bar.h, without having access to bar.c, or any representation of it.

In order to have an efficient interface/implementation mechanism (i.e. not like
that of C) you'd need some kind of special support. I.e. there would have to be
a way of processing a file, similar to COMPILE-FILE, which would digest all of
the interface information and spit it into some kind of efficient form: perhaps
some special interface file: like a fasl, but with no code. Or maybe a special
section of a fasl.

When a module B uses A, the compiler would pull out A's digested interface info
(refreshing it first, if the source of A is newer). In this way, B could be
compiled without doing a complete compilation of A.

You know, you could develop an annotation system for Lisp modules which would
allow such processing. Code is data, right?

Let's say we have this syntax:

(module foo)

:public

(defun x ...)

:private

(defun y ...)

You could cobble together a Lisp build system which will munge source code and
generate the defpackage definitions, with export lists and all.

The above would result in there being a (DEFPACKAGE FOO ...) somewhere
which exports the symbol X.

Ron Garret

Dec 7, 2008, 4:29:55 PM
In article <6q2m50F...@mid.individual.net>,
Pascal Costanza <p...@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <6q2iu8F...@mid.individual.net>,
> > Pascal Costanza <p...@p-cos.net> wrote:
> >
> >> IMHO, packages, i.e. managing the mapping from strings to symbols, solve
> >> the problem better than modules, i.e. managing the mapping from
> >> identifiers to concepts.
> >
> > Why?
>
> Because they can make distinctions that you cannot make with module
> systems. For example, if you had only a module system, a class would not
> be able to inherit from two different classes with slots of the same
> name and be able to keep them as two different slots (unless you come up
> with funky renaming mechanisms, like in the R6RS module system, for
> example).

That's true, but how often do you actually do multiple inheritance from
two different classes that were separately developed?

BTW, because the current implementation of lexicons is tightly coupled
with packages it is fairly straightforward to "fix" this. I put "fix"
in scare quotes because it's not at all clear to me that this is
actually a problem, but if it really is then you can just have LDEFCLASS
lexify the slot names along with the superclasses, e.g.:

(defmacro ldefclass (name superclasses slots &rest stuff &environment env)
  `(progn
     (lexify ,name)
     (defclass ,(lbind name (env-lexicon env))
         ,(mapcar (lambda (c)
                    (or (find-cell (env-lexicon env) c)
                        (error "There is no class named ~S in the current lexical environment." c)))
                  superclasses)
       ,(mapcar (lambda (s) (lbind s (env-lexicon env))) slots)
       ,@stuff)))

(defmacro lslot-value (instance slot-name &optional lexicon &environment env)
  `(slot-value ,instance ,(ref-form slot-name (or (find-lexicon lexicon)
                                                  (env-lexicon env)))))

> With a CL-style package system, the choice is always unambiguous: If you
> use the same symbol as somebody else, you definitely mean the same
> concept. If you want to denote a different concept, you have to use a
> different symbol. It's much harder to make such distinctions with module
> systems.

That is not quite a fair argument. It's true, but only because CL uses
symbols as slot names. That stacks the deck in favor of packages. One
could easily envision a minor variation on CLOS that used abstract slot
identifier objects instead of symbols as slot names, and had lexically
scoped bindings of identifiers onto abstract slot identifiers. This
would provide the exact same functionality but with the slot-identifier
bindings taking place at compile time rather than read time.

But there's a more serious critique of your argument as well: It's true
that "If you use the same symbol as somebody else, you definitely mean
the same concept." However, because *package* is a global resource
which can be manipulated in various ways, unless you fully qualify
every single symbol in your code (which no one ever does) you have to
carefully manage *package* in order to ensure that all the references to
FOO that are supposed to be the same are in fact the same, and
vice-versa. This is certainly doable, but I would claim that it's a
non-trivial burden on the programmer, particularly an inexperienced one.

rg

Ron Garret

Dec 7, 2008, 5:07:09 PM
In article <ghha9u$fmh$1...@aioe.org>,
David Golden <david....@oceanfree.net> wrote:

The design intention was that for global bindings one would use either
packages or lexicons but not both, particularly now that lexicons are
closely coupled with packages. Don't forget that every lexicon now has
a package associated with it, and that the binding of a symbol in a
lexicon is the symbol with the same name interned in the lexicon's
package. So binding P1::FOO and P2::FOO into the same lexicon under the
current design would require two different symbols named FOO to be
interned in the same package, which is not possible.

I could "fix" this (fix in scare quotes because it's not clear that this
is actually a problem) by munging symbol names to include their
packages, but I fear this would undermine one of the principal design
goals of lexicons which is to keep things simple and appeal to
programmers who are used to e.g. Python.

rg

Ron Garret

Dec 7, 2008, 5:20:22 PM
In article <200812230...@gmail.com>,
Kaz Kylheku <kkyl...@gmail.com> wrote:

> Can you come up with a hack that lets me choose exactly which identifiers I
> pull in from some namespace (regardless of what that namespace says it
> exports), without having to maintain import lists?
>
> That would be useful. But I don't see how; the problem implicitly calls for
> list maintenance. I mean, you have to write down the symbols that you want.
> That's a list.

Yes, but it does not follow that those lists need to be generated by
manually writing down all of their members. For example, I might want
to be able to say, "I want to use class C1" and automatically get all of
the symbols (slot names, associated methods, etc.) associated with that
class without having to manually list them all.
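
For example, something in this direction (a sketch using the MOP via the
closer-mop library, not ANSI CL; the function itself is hypothetical and
only handles slot names):

(defun import-class-symbols (class-name &optional (package *package*))
  (let ((class (find-class class-name)))
    (closer-mop:finalize-inheritance class)
    (import class-name package)
    (dolist (slot (closer-mop:class-slots class))
      (import (closer-mop:slot-definition-name slot) package))))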

> Now as far as export lists go, the difficulty of maintaining them isn't due
> to any fundamental property of Lisp packages. The automation of export
> lists could be added to Lisp packages without changing the design.

Really? How? In particular, how do you distinguish between those
symbols in the package that actually have functionality associated with
them versus those that got interned there as a side-effect of some other
process (like someone mistyping something into an interactive session)?

> One way to solve that might be to have some kind of special state in a
> package so that you could do this:
>
> ;; top level
> (in-package :foo)
>
> (public)
>
> (defun foo ...)
>
> (private)
>
> (defun bar ...)
>
> PUBLIC and PRIVATE would set and reset some flag, and defining constructs
> could use it to export or not export the name which they are defining.

How would this work if I'm doing interactive development?

How would it know to export my slot names and accessor function names?

What if I defined a macro that defined other things, how would those
things get exported?

> > This violates the DRY principle, and makes maintenance more
> > difficult than it needs to be, rather like the burden imposed by C on
> > the programmer to maintain the .c file and the .h files in parallel.
>
> If you do not maintain an interface and implementation separately (either as
> separate files or as sections of the same file, as is the case with some
> languages), it means that before processing module B that depends on A, the
> language implementation has to process all of A, so that it can pull out the
> interface information that is implicitly distributed throughout A.

Yes. So?


> C header files, crude as the mechanism is, do allow us to compile some foo.c
> that uses bar.h, without having access to bar.c, or any representation of it.

Yes. So the programmer has to do a lot of work to make the compiler's
work easier. I thought computers were supposed to make our lives
easier, not vice-versa.


> In order to have an efficient interface/implementation mechanism (i.e. not
> like that of C) you'd need some kind of special support. I.e. there would
> have to be a way of processing a file, similar to COMPILE-FILE, which would
> digest all of the interface information and spit it into some kind of
> efficient form: perhaps some special interface file: like a fasl, but with
> no code. Or maybe a special section of a fasl.

Not necessarily. Lexicons are implemented so that bindings can be
resolved at run-time if needed. The bindings are cached so the overhead
is only incurred once (and it's only a hash-table lookup, so it's pretty
minimal overhead anyway).


> When a module B uses A, the compiler would pull out A's digested interface
> info (refreshing it first, if the source of A is newer). In this way, B
> could be compiled without doing a complete compilation of A.
>
> You know, you could develop an annotation system for Lisp modules which would
> allow such processing. Code is data, right?
>
> Let's say we have this syntax:
>
> (module foo)
>
> :public
>
> (defun x ...)
>
> :private
>
> (defun y ...)
>
> You could cobble together a Lisp build system which will munge source code and
> generate the defpackage definitions, with export lists and all.
>
> The above would result in there being a (DEFPACKAGE FOO ...) somewhere
> which exports the symbol X.