Committee E topics


John Cowan

Mar 17, 2022, 5:01:08 PM
to scheme-re...@googlegroups.com

On Thu, Mar 17, 2022 at 10:59 AM Nala Ginrut <nalag...@gmail.com> wrote:

Hi folks!
I've read the previous mails, but I still have insufficient information about the potential work of committee E. I guess that may be one of the reasons why there's no volunteer.
Anyway, we may open a thread for E to do some initial work until we have a chair.

E will be concerned with the environment of Scheme programs.  In R7RS-small, this includes Sections 6.13 (Input and output) and 6.14 (System interface).  In R6RS Libraries, this includes sections 8.1 (Condition types) and 8.2 (Port I/O), as Section 8.3 is the same as R7RS-small's I/O.

Relevant SRFIs are 106 (sockets), 120 (timers), 170 (POSIX), 181 (custom I/O ports, including transcoded ports), 192 (port positioning), and 205 (terminal I/O, still in draft).  SRFIs 181 and 192 are directly modeled on R6RS.

Relevant pre-SRFIs written by me can be found in https://github.com/johnwcowan/r7rs-work and include DatagramChannelsCowan (UDP sockets), GraphicsCanvas (high-level graphics), SyslogCowan (what it says on the tin), and TerminalsCowan (Termbox/TUI interface support).  The last can be implemented portably on top of 205, and so it may move to Committee B in future.  A notable alternative to something like GraphicsCanvas is the ezdraw server, which is written in Scheme and exchanges S-expressions over a named pipe or local socket (it can also be "linked in").

Arthur A. Gleckler

Mar 17, 2022, 5:09:01 PM
to scheme-re...@googlegroups.com
On Thu, Mar 17, 2022 at 2:01 PM John Cowan <co...@ccil.org> wrote:
 
Relevant pre-SRFIs written by me can be found in https://github.com/johnwcowan/r7rs-work and include DatagramChannelsCowan (UDP sockets), GraphicsCanvas (high-level graphics), SyslogCowan (what it says on the tin), and TerminalsCowan (Termbox/TUI interface support).  The last can be implemented portably on top of 205, and so it may move to Committee B in future.  A notable alternative to something like GraphicsCanvas is the ezdraw server, which is written in Scheme and exchanges S-expressions over a named pipe or local socket (it can also be "linked in").

Do you know where I might find the ezdraw server you mentioned?  I've found some mentions of it, but the sites are all obsolete or are actually about something else.  I actually built something like that, but using a language-neutral format, just over a year ago: https://github.com/arthurgleckler/nanovg-repl.

Thanks.

Arthur A. Gleckler

Mar 17, 2022, 5:15:20 PM
to scheme-re...@googlegroups.com
On Thu, Mar 17, 2022 at 2:08 PM Arthur A. Gleckler <a...@speechcode.com> wrote:
 
Do you know where I might find the ezdraw server you mentioned?  I've found some mentions of it, but the sites are all obsolete or are actually about something else.  I actually built something like that, but using a language-neutral format, just over a year ago: https://github.com/arthurgleckler/nanovg-repl.

Oh, never mind.  You must mean Joel Bartlett's excellent Don't Fidget With Widgets, Draw! paper.  That must be where I got the idea.
 

Marc Nieper-Wißkirchen

Mar 17, 2022, 5:15:34 PM
to scheme-re...@googlegroups.com
Am Do., 17. März 2022 um 22:01 Uhr schrieb John Cowan <co...@ccil.org>:

> E will be concerned with the environment of Scheme programs. In R7RS-small, this includes Sections 6.13 (Input and output) and 6.14 (System interface). In R6RS Libraries, this includes sections 8.1 (Condition types) and 8.2 (Port I/O), as Section 8.3 is the same as R7RS-small's I/O.
>
> Relevant SRFIs are sockets (SRFI 106), 120 (timers), 170 (Posix), 181 (Custom I/O ports, including transcoded ports), 192 (port positioning), 205 (terminal I/O, still draft). SRFIs 181 and 192 are directly modeled on R6RS.

At least the basics of custom ports will be part of the Foundations,
as they cannot be portably implemented on top of the small language on
the one hand and are host-system-independent on the other. If a
condition system becomes part of the language, the basic I/O
conditions will have to be in the Foundations as well, since the
Foundations include the I/O of the small language. Committees E and F
will be in touch so that F can provide what E needs.

Nala Ginrut

Mar 18, 2022, 12:05:08 AM
to scheme-re...@googlegroups.com
Hi all!

On Fri, Mar 18, 2022, 05:15 Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
Am Do., 17. März 2022 um 22:01 Uhr schrieb John Cowan <co...@ccil.org>:

> E will be concerned with the environment of Scheme programs.  In R7RS-small, this includes Sections 6.13 (Input and output) and 6.14 (System interface).  In R6RS Libraries, this includes sections 8.1 (Condition types) and 8.2 (Port I/O), as Section 8.3 is the same as R7RS-small's I/O.
>
> Relevant SRFIs are sockets (SRFI 106), 120 (timers), 170  (Posix), 181 (Custom I/O ports, including transcoded ports), 192 (port positioning), 205 (terminal I/O, still draft).  SRFIs 181 and 192 are directly modeled on R6RS.

No one has mentioned epoll? We need a unified I/O multiplexing API covering poll/epoll/kqueue.
This could be based on the FFI, but I think we should provide an API in E.
 
Committees E and F will
be in touch so that F can provide what E needs.

+1
 

Best regards.

Lassi Kortela

Mar 18, 2022, 3:02:15 AM
to scheme-re...@googlegroups.com
> No one mentioned epoll? We need a unified I/O multiplex API for
> poll/epoll/kqueue.
> This could be based on FFI, but I think we should provide API in E.

That design space changes a lot. io_uring is the hot new thing in Linux.

A basic POSIX poll()-like API, which lets you test ports for readability
and writability and doesn't prescribe an underlying mechanism, might be
a good start. Expect the dynamic duo of threads and signals to cause
trouble. Best to try out in a SRFI first.

Lassi Kortela

Mar 18, 2022, 3:06:00 AM
to scheme-re...@googlegroups.com
> At least the basics of custom ports will be part of the Foundations as
> they cannot be portably implemented on top of the small language on
> the one hand side and are host system-independent on the other hand
> side. If a condition system becomes part of the language the basic I/O
> conditions will have to be in the Foundations as well as the
> Foundations include I/O of the small language.

+1

Everything needed for R6RS compat should be in Foundations.

Any particular reason condition types belong in Environment? Or just
the I/O-related conditions?

Marc Nieper-Wißkirchen

Mar 18, 2022, 3:20:16 AM
to scheme-re...@googlegroups.com
Whatever the underlying implementation will be, on the Scheme side we
need a primitive that is able to multiplex different sources, e.g. I/O
port polling, threading condition variables, and the receipt of
signals like SIGTERM. Maybe a high-level API for this can be provided
with futures; as far as these are concerned B and F may need to
coordinate.

Marc Nieper-Wißkirchen

Mar 18, 2022, 3:26:23 AM
to scheme-re...@googlegroups.com
The Foundations will and can only provide the foundations of a
condition hierarchy. Some E APIs will call for more specialized
conditions, and these will have to be provided by E. Moreover, in the
context of E, extra information about the erroneous situation could be
added to the basic I/O conditions. This is at least my understanding.

Lassi Kortela

Mar 18, 2022, 3:41:12 AM
to scheme-re...@googlegroups.com
> Maybe a high-level API for this can be provided with futures

Are any implementers on board with futures? Multi-threading is hard. We
shouldn't add any thread stuff to (any part of) RnRS that isn't portable
and battle-tested.

The choices currently supported by more than one implementation are:

- SRFI 18
- Some adaptation of Concurrent ML (fibers, etc.)
- any others?

Daphne Preston-Kendal

Mar 18, 2022, 3:47:53 AM
to scheme-re...@googlegroups.com
On 18 Mar 2022, at 08:41, Lassi Kortela <la...@lassi.io> wrote:

>> Maybe a high-level API for this can be provided with futures
>
> Are any implementers on board with futures? Multi-threading is hard.

Green threads are simple enough that even Chibi supports them. Indeed, is there a ‘sophisticated’ implementation (to use the WG1 charter language — i.e. one that might reasonably be expected to adopt R7RS Large) that doesn’t support threads in any way at all?


Daphne

Lassi Kortela

Mar 18, 2022, 3:53:19 AM
to scheme-re...@googlegroups.com
>> Are any implementers on board with futures? Multi-threading is hard.
>
> Green threads are simple enough that even Chibi supports them. Indeed, is there a ‘sophisticated’ implementation (to use the WG1 charter language — i.e. one that might reasonably be expected to adopt R7RS Large) that doesn’t support threads in any way at all?

Not sure.

Any thread API in RnRS should be something that ties in seamlessly with
the native thread API in each Scheme implementation (i.e. RnRS threads
are a type of native thread that perhaps uses only some of the available
native features).

IIRC there used to be a situation in Guile where it had two (or
three?) incompatible types of hash tables. We should take great care
to avoid a situation where Scheme programs have to juggle incompatible
concurrency constructs.

Marc Nieper-Wißkirchen

Mar 18, 2022, 3:57:55 AM
to scheme-re...@googlegroups.com
Am Fr., 18. März 2022 um 08:53 Uhr schrieb Lassi Kortela <la...@lassi.io>:

> IIRC there used to be a situation in Guile where it has 2 (or 3?)
> incompatible types of hash tables. We should take great care avoid a
> situation where Scheme programs have to juggle incompatible concurrency
> constructs.

That's not necessarily a bad thing. The point is that Guile wants to
support R6RS, SRFI 69, and its own native hash tables. For portable
R6RS or R7RS programs, this is not a problem.

(As far as hash tables are concerned, SRFI 225 may help.)

Lassi Kortela

Mar 18, 2022, 4:18:17 AM
to scheme-re...@googlegroups.com
> That's not necessarily a bad thing. The point is that Guile wants to
> support R6RS, SRFI 69, and its own native hash tables. For portable
> R6RS or R7RS programs, this is not a problem.

I worry about the tendency to think of RnRS as a standalone or isolated
thing. Big useful programs are never going to use only portable
features, no matter how good RnRS is. They'll interact with non-portable
APIs via cond-expand, pass objects to those APIs, and get objects back
from those APIs. If they receive something like a thread or a hashtable
from a native API, it better be compatible with the RnRS versions of
those things. (If RnRS can use only a subset of the native capabilities,
that's fine.) Likewise, keyword arguments, object systems and the like
have to be compatible with the native versions.

> (As far as hash tables are concerned, SRFI 225 may help.)

(I.e. the Dictionaries SRFI.) The decision was made that each procedure
takes a dictionary type descriptor in addition to the dictionary object
itself -- i.e. two arguments instead of one. Unfortunately I consider
this a dead end; nothing convenient or elegant can come out of pursuing
this design direction.

Alex Shinn

Mar 18, 2022, 4:26:39 AM
to scheme-re...@googlegroups.com
2022年3月18日(金) 16:47 Daphne Preston-Kendal <d...@nonceword.org>:

Green threads are actually much harder to implement.  You need your own scheduler, your own polling mechanisms, your own synchronization primitives, and, in Chibi's case, built-in FFI support.  It pervades every part of the design.

-- 
Alex



Daphne

--
You received this message because you are subscribed to the Google Groups "scheme-reports-wg2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scheme-reports-...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/scheme-reports-wg2/4DEF72F0-DD5A-4344-9C1A-FB230EC610B2%40nonceword.org.

Per Bothner

Mar 18, 2022, 6:40:24 AM
to scheme-re...@googlegroups.com


On 3/18/22 00:41, Lassi Kortela wrote:
>> Maybe a high-level API for this can be provided with futures
>
> Are any implementers on board with futures? Multi-threading is hard. We shouldn't add any thread stuff to (any part of) RnRS that isn't portable and battle-tested.

Kawa has futures:
https://www.gnu.org/software/kawa/Threads.html
The resulting thread object implements the Lazy interface,
just like the result of (force exp).

Kawa also has "auto-force" - a Lazy value (including a 'future' thread)
is automatically forced when used in certain operations. This includes
display but not write.
--
--Per Bothner
p...@bothner.com http://per.bothner.com/

Marc Nieper-Wißkirchen

Mar 18, 2022, 9:32:42 AM
to scheme-re...@googlegroups.com
Am Fr., 18. März 2022 um 09:18 Uhr schrieb Lassi Kortela <la...@lassi.io>:
>
> > That's not necessarily a bad thing. The point is that Guile wants to
> > support R6RS, SRFI 69, and its own native hash tables. For portable
> > R6RS or R7RS programs, this is not a problem.
>
> I worry about the tendency to think of RnRS as a standalone or isolated
> thing. Big useful programs are never going to use only portable
> features, no matter how good RnRS is. They'll interact with non-portable
> APIs via cond-expand, pass objects to those APIs, and get objects back
> from those APIs. If they receive something like a thread or a hashtable
> from a native API, it better be compatible with the RnRS versions of
> those things. (If RnRS can use only a subset of the native capabilities,
> that's fine.) Likewise, keyword arguments, object systems and the like
> have to be compatible with the native versions.

This will most likely give you the small language (or even less).

>
> > (As far as hash tables are concerned, SRFI 225 may help.)
>
> (I.e. the Dictionaries SRFI.) The decision was made that each procedure
> takes a dictionary type descriptor in addition to the dictionary object
> itself -- i.e. two arguments instead of one. Unfortunately I consider
> this a dead end; nothing convenient or elegant can come out of pursuing
> this design direction.

While I am not in agreement with all the details of SRFI 225, I
haven't yet seen a better general idea.

If you can come up with a different idea that allows both efficient
generic algorithms and is logically well-founded (or if you even write
a SRFI about it), I'm sure I am not the only one who would be more
than eager to hear it.

Arthur A. Gleckler

Mar 18, 2022, 1:25:58 PM
to scheme-re...@googlegroups.com
On Fri, Mar 18, 2022 at 12:20 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
 
Whatever the underlying implementation will be, on the Scheme side we
need a primitive that is able to multiplex different sources, e.g. I/O
port polling, threading condition variables, and the receipt of
signals like SIGTERM. Maybe a high-level API for this can be provided
with futures; as far as these are concerned B and F may need to
coordinate. 

Michael Sperber has a nice talk (YouTube) about harvesting the brilliant ideas of Concurrent ML.

Linas Vepstas

Mar 18, 2022, 1:27:05 PM
to scheme-re...@googlegroups.com


On Fri, Mar 18, 2022 at 2:41 AM Lassi Kortela <la...@lassi.io> wrote:

The choices currently supported by more than one implementation are:

- SRFI 18
- Some adaptation of Concurrent ML (fibers, etc.)
- any others?

Guile threads work great. After a quick skim, they seem to be mostly SRFI-18-like except in one area: instead of having thread-specific-set!, Guile has a concept of "fluids".  A "fluid" is kind of like thread-specific state, except that you can bind it to more than one thread, and you can swap between different fluids on any given thread.  In summary, a fluid is the blob of state that holds execution-context-specific data, but there is no hard-coded requirement for it to be tied one-to-one to any given thread control block (TCB).  Fluids seem strange if you're used to the C/C++/Java view of the world, but they're great once you wrap your mind around the idea.

Fibers seem like a marvelous theoretical idea, and a positive step forward in threading theory. In practice, for my use case, fibers, in guile, offered no performance advantage, and it was not too hard to find bugs and problematic behaviour in the implementation. But I dunno, I never got to the bottom of the issue; I gave up when it became clear that there was no gain.

Actual threading performance depends strongly on proprietary (undocumented) parts of the CPU implementation.  For example, locks are commonly implemented on cache lines, in L1, L2, or L3.  Older AMD CPUs seem to have only one lock for every six CPUs (I think that means the lock hardware is in the L3 cache? I dunno), so it is very easy to stall with locked cache-line contention.  The very newest AMD CPUs seem to have one lock per CPU (so I guess they moved the lock hardware to the L1 cache??) and so are more easily parallelized under heavy lock workloads.  Old PowerPCs had one lock per L1 cache, if I recall correctly.  So servers work better than consumer hardware.

To be clear: mutexes per se are not the problem; atomic ops are.  For example, in C++, the reference counts on shared pointers use atomic ops, so if your C++ code uses lots of shared pointers, you will be pounding the heck out of the CPU lock hardware, and all of the CPUs are going to be snooping on the bus, checkpointing, invalidating, and rolling back like crazy, hitting a very hard brick wall on some CPU designs.  I have no clue how much SRFI 18 or fibers depend on atomic ops, but these are real issues that hurt real-world parallelizability.  Avoid splurging on atomic ops.

As to hash tables: lock-free hash tables are problematic.  Facebook has the open-source "folly" C/C++ implementation of lockless hash tables.  Intel has one too, but the documentation for the Intel code is... well, I could not figure out what Intel was doing.  There's some cutting-edge research coming out of Israel on this, but I did not see any usable open-source implementations.

In my application, the lock-less hash tables offered only minor gains; my bottleneck was in the atomics/shared-pointers. YMMV.

-- linas



Nala Ginrut

Mar 18, 2022, 1:44:14 PM
to scheme-re...@googlegroups.com
I think io_uring mixes I/O multiplexing and managed async I/O.
It's better to separate them in a standard API for portability; after all, Scheme has a wide range of users, including BSD, Hurd, and other OS folks.
And we may also consider adding an async I/O API.
Besides io_uring, we also have delimited continuations for achieving similar things, so the implementation could vary.

Best Regards.



 


Nala Ginrut

Mar 18, 2022, 1:52:21 PM
to scheme-re...@googlegroups.com
Please also consider Scheme for embedded systems.
pthreads are widely used in RTOSes and could be the low-level implementation of the API, and green threads are not a good choice for compact embedded systems.

I confess this should be put in the R7RS-small topic, but as a useful extension of R7RS-small, I think the API should be general enough to embrace both the pthreads spec and green-thread designs.

PS: As the author of LambdaChip, I'm considering adding threads based on pthreads on Zephyr RTOS, but I still hesitate about the API design.

Best regards.




Linas Vepstas

Mar 18, 2022, 2:22:39 PM
to scheme-re...@googlegroups.com
Nala,

On Fri, Mar 18, 2022 at 12:52 PM Nala Ginrut <nalag...@gmail.com> wrote:
Please also consider the Scheme for the embedded system.
The pthread is widely used in RTOS which could be the low-level implementation of the API,

After a quick skim, SRFI 18 appears to be almost exactly a one-to-one mapping of the pthreads API.

The raw thread-specific-set! and thread-specific in SRFI 18 might be "foundational" but are unusable without additional work: at a minimum, the specific entries need to be alists or hash tables or something -- and, as mentioned earlier, the Guile concept of "fluids" seems (to me) to be the superior abstraction, the abstraction that is needed.
 
and green-thread is not a good choice for compact embedded systems.

My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages.   I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

--linas
 


--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
 

Amirouche Boubekki

Mar 18, 2022, 2:34:15 PM
to scheme-re...@googlegroups.com
> https://www.youtube.com/watch?v=pf4VbP5q3P0


Thanks a lot for this link!

Amirouche Boubekki

Mar 18, 2022, 3:01:11 PM
to scheme-re...@googlegroups.com
My understanding is that the question is not which POSIX or non-POSIX
API should be supported to implement concurrency and parallelism, but
how they integrate with Scheme. Some questions:

- Can continuations travel across threads?
- Do parameters travel with their continuations? Hint: continuations
attachments, see https://srfi.schemers.org/srfi-157/

There are several Schemes that implement green threads, parallelism,
or both, but there is no clear agreement on which API is best. The
most common interface is https://srfi.schemers.org/srfi-18/

Alternatives to CML and SRFI 18 are
http://wiki.call-cc.org/eggref/5/gochan and the actor model (Gambit's
Termite).

Vincent Manis

Mar 18, 2022, 3:34:49 PM
to scheme-re...@googlegroups.com, Marc Nieper-Wißkirchen
On 2022-03-18 00:20, Marc Nieper-Wißkirchen wrote:

> Whatever the underlying implementation will be, on the Scheme side we
> need a primitive that is able to multiplex different sources, e.g. I/O
> port polling, threading condition variables, and the receipt of
> signals like SIGTERM. Maybe a high-level API for this can be provided
> with futures; as far as these are concerned B and F may need to
> coordinate.

I think that before we go very far in some of these directions, we need
to make a clear statement about what environment we are targeting. I
would not assert (because I don't have that knowledge) that the Windows
API provides all the concurrency support (and in particular, polling
non-uniform sources) that Unix/Linux/macOS provide, or vice versa. There
are also bare-metal or embedded environments (such as eCos) to consider.

What's clear to me is that the core of Posix and Windows threads can be
supported across many environments, if necessary by green threads. I
don't know if io_uring can be provided (efficiently) in Windows, for
example.

I'm not advocating any particular minimum environment; in fact, we can
even have several concurrency features, discernible via cond-expand.
But I would be concerned if we painted ourselves into a corner by
specifying such rigid environmental requirements that only a few
systems could satisfy them.

I think that this comment applies to pretty much everything that will
come within Committee E's purview.

-- vincent


Linas Vepstas

Mar 18, 2022, 4:12:47 PM
to scheme-re...@googlegroups.com
Hi,

On Fri, Mar 18, 2022 at 2:01 PM Amirouche Boubekki <amirouche...@gmail.com> wrote:
On Fri, Mar 18, 2022 at 8:02 AM Lassi Kortela <la...@lassi.io> wrote:
>
> > No one mentioned epoll? We need a unified I/O multiplex API for
> > poll/epoll/kqueue.
> > This could be based on FFI, but I think we should provide API in E.
>
> That design space changes a lot. io_uring is the hot new thing in Linux.
>
> A basic POSIX poll() like API, which lets you test ports for readability
> and writability and doesn't prescribe an underlying mechanism, might be
> a good start. Expect the dynamic duo of threads and signals to cause
> trouble. Best to try out in a SRFI first.
>

My understanding is not what POSIX or non-POSIX API should be
supported to implement concurrency and parallelism but how they
integrate with Scheme. Some questions:

- Can continuations travel across threads?
- Do parameters travel with their continuations? Hint: continuations
attachments, see https://srfi.schemers.org/srfi-157/

I'll say it a third time, and then shut up: Guile fluids.  As I understand them, they work with continuations (although my experiments were minor), and this claim is made by the docs (see https://www.gnu.org/software/guile/manual/html_node/Fluids-and-Dynamic-States.html).  This is in sharp contrast to SRFI 18's thread-specific-set!, which is manifestly continuation-unaware.

There are several Scheme that implement green threads or parallelism
or both, but there is no clear agreement on what API is the best. The
most common interface is https://srfi.schemers.org/srfi-18/

Alternatives to CML and SRFI-18 is
http://wiki.call-cc.org/eggref/5/gochan and the actor model (Gambit
termite)

Introductory explanations of fibers show how one can start with poll/epoll and then immediately realize that the Go channels model is better.  Then, with some "minor" changes of viewpoint, one ends up with the broader and more flexible idea of fibers.  If I recall correctly, the introductory expositions on fibers also discuss the actor model.  All this leaves me with a strong impression that fibers really are an important advance in parallel computation.  I did not study fibers closely, but they seem to be a major advance.  I heartily endorse them.

As to "which", pthreads or fibers: it should be "both".  My impression is that rank-and-file, bread-and-butter quotidian programmers are just fine with a pthreads/SRFI-18 model (plus the missing-but-needed fluids), so you can't go wrong with that.  For those who are pickier, more theoretically inclined, or facing difficult performance and scheduling problems, my general impression is that they will want fibers.  I'm thinking this is not an either-or choice, but rather a "both".

And, yes, lock-free hash tables would be... well, for people who use hash tables, I'm thinking this would be a pretty big deal.  You can just feel the desperation on the various mailing lists.  For Scheme to provide something here would be... a feather in its cap.

However, I am a passive bystander in the process here; I won't be implementing this stuff. Sorry. Just making grand pronouncements as to overall general impressions.

-- Linas

Marc Nieper-Wißkirchen

Mar 18, 2022, 5:39:36 PM
to scheme-re...@googlegroups.com
Am Fr., 18. März 2022 um 20:01 Uhr schrieb Amirouche Boubekki
<amirouche...@gmail.com>:

> - Can continuations travel across threads?
> - Do parameters travel with their continuations? Hint: continuations
> attachments, see https://srfi.schemers.org/srfi-157/

It is one of the many purposes of SRFI 226 to answer all these
questions uniformly and consistently.

Marc Nieper-Wißkirchen

Mar 18, 2022, 5:51:21 PM
to scheme-re...@googlegroups.com
Am Fr., 18. März 2022 um 19:22 Uhr schrieb Linas Vepstas
<linasv...@gmail.com>:
>
> Nala,
>
> On Fri, Mar 18, 2022 at 12:52 PM Nala Ginrut <nalag...@gmail.com> wrote:
>>
>> Please also consider the Scheme for the embedded system.
>> The pthread is widely used in RTOS which could be the low-level implementation of the API,
>
>
> After a quick skim, srfi-18 appears to be almost exactly a one-to-one mapping into the pthreads API.

It was also influenced in part by Java and the Windows API, I think,
but Marc Feeley will definitely know better.

> The raw thread-specific-set! and thread-specific in srfi-18 might be "foundational" but are unusable without additional work: at a minimum, the specific entries need to be alists or hash tables or something -- and, as mentioned earlier, the guile concept of "fluids" seems (to me) to be the superior abstraction, the abstraction that is needed.

Please add your thoughts about it to the SRFI 226 mailing list. As far
as Guile fluids are concerned, from a quick look at them, aren't they
mostly what parameter objects are in R7RS?

>
>>
>> and green-thread is not a good choice for compact embedded systems.
>
>
> My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages. I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

For SRFI 18/226, it doesn't matter whether green threads or native
threads underlie the implementation. There can even be a mixture like
mapping N Scheme threads to one OS thread.

>>> Actual threading performance depends strongly on proprietary (undocumented) parts of the CPU implemention. For example, locks are commonly implemented on cache lines, either on L1 or L2 or L3. Older AMD cpus seem to have only one lock for every 6 CPU's, (I think that means the lock hardware is in the L3 cache? I dunno) and so it is very easy to stall with locked cache-line contention. The very newest AMD CPU's seem to have 1 lock per CPU (so I guess they moved the lock hardware to the L1 cache??) and so are more easily parallelized under heavy lock workloads. Old PowerPC's had one lock per L1 cache, if I recall correctly. So servers work better than consumer hardware.
>>>
>>> To be clear: mutexes per-se are not the problem; atomic ops are. For example, in C++, the reference counts on shared pointers uses atomic ops, so if your C++ code uses lots of shared pointers, you will be pounding the heck out CPU lock hardware, and all of the CPU's are all going to snooping on the bus and they will all be checkpointing and invalidating and rolling back like crazy, hitting a very hard brick wall on some CPU designs. I have no clue how much srfi-18 or fibers depend on atomic ops, but these are real issues that hurt real-world parallelizability. Avoid splurging with atomic ops.

If you use std::shared_ptr, many uses of std::move are your friend or
you are probably doing it wrong. :)

>>> As to hash-tables: lock-free hash tables are problematic. Facebook has the open-source "folly" C/C++ implementation for lockless hash tables. Intel has one too, but the documentation for the intel code is... well I could not figure out what intel was doing. There's some cutting-edge research coming out of Israel on this, but I did not see any usable open-source implementations.
>>>
>>> In my application, the lock-less hash tables offered only minor gains; my bottleneck was in the atomics/shared-pointers. YMMV.

Unfortunately, we don't have all the freedom of C/C++. While it is
expected that a C program will crash when a hash table is modified
concurrently, most Scheme systems are expected to handle errors
gracefully and not crash. We may need a few good ideas to minimize the
amount of locking needed.

Alex Shinn

Mar 18, 2022, 6:20:03 PM
to scheme-re...@googlegroups.com
On Sat, Mar 19, 2022 at 3:22 AM Linas Vepstas <linasv...@gmail.com> wrote:
>
> On Fri, Mar 18, 2022 at 12:52 PM Nala Ginrut <nalag...@gmail.com> wrote:
>>
>> and green-thread is not a good choice for compact embedded systems.
>
> My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages. I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

Agreed, green threads are not a good choice for an embedded system, or
most systems
in general.

My choice of green threads for Chibi was in its original design as an
extension language.
It should be easy to include in an existing C/C++ application in a
thread-safe manner
without any concern that it would interact with the host threads at
all. It's a more general
and powerful mechanism than the idle timers of Emacs or JavaScript
async callbacks.

--
Alex

John Cowan

unread,
Mar 18, 2022, 6:51:59 PM
to scheme-re...@googlegroups.com
On Fri, Mar 18, 2022 at 4:12 PM Linas Vepstas <linasv...@gmail.com> wrote:

There are several Scheme that implement green threads or parallelism
or both, but there is no clear agreement on what API is the best. The
most common interface is https://srfi.schemers.org/srfi-18/

It's important to realize that SRFI 18 (as opposed to its sample implementation) can be implemented using OS threads, green threads, or both (M:N threads), so you have to be very careful about what is supposed to be per-thread.  errno in particular is only per-thread if SRFI 18 uses OS threads.

John Cowan

unread,
Mar 18, 2022, 7:28:04 PM
to scheme-re...@googlegroups.com
On Fri, Mar 18, 2022 at 2:22 PM Linas Vepstas <linasv...@gmail.com> wrote:
 
My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages.   I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

Chicken's threads are green because that's how Cheney on the MTA works: each Scheme call winds up the C stack until it gets too big and then it is chopped off by longjmp(), moving all data structures allocated (also on the stack) into the heap at that time, so that the C stack is also the nursery.  Because all the returns are dead, there is no need for the GC to understand the stack layout used by C.  Chicken provides access to pthreads, but Scheme code can only run in the main thread: the other threads must be running C.

Cyclone uses a variant of Cheney in which an object is always moved to the heap before it can be accessed by more than one Scheme thread.  That's all I know about it.

Nala Ginrut

unread,
Mar 18, 2022, 11:43:56 PM
to scheme-re...@googlegroups.com
Hi Linas!

On Sat, Mar 19, 2022 at 2:22 AM Linas Vepstas <linasv...@gmail.com> wrote:

My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages.   I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

--linas
 

I have to mention that green threads are still a good choice for high-performance concurrent server design.
Maybe "green thread" is improper here; I mean coroutines. GNU Artanis has a coroutine server core based on delimited continuations.
However, IMO, coroutines could be a separate standard API.

Best regards.

Nala Ginrut

unread,
Mar 18, 2022, 11:52:24 PM
to scheme-re...@googlegroups.com
On Sat, Mar 19, 2022 at 5:51 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
Please add your thoughts about it to the SRFI 226 mailing list. As far
as Guile fluids are concerned, from a quick look at them, aren't they
mostly what parameter objects are in R7RS?


Guile's parameters are implemented with fluids, so we may treat them as the same thing here.
It is recommended to use the standard parameterize interface rather than fluids directly.

 

Linas Vepstas

unread,
Mar 19, 2022, 1:50:38 AM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
Replying to Marc, and Nala,

It doesn't seem appropriate to talk about C/C++ here, so I will skip that discussion, except to say: it's all about performance. Flexibility is great but if an API cannot be implemented so it's fast, then it's not a good API. It's very hard to determine if an API is fast without measuring it. Thinking about it is often misleading; reading the implementation easily leads to false conclusions. Sad but true.

And so, an anecdote: Nala notes: "guile parameters are built on top of guile fluids".   That is what the docs say, but there may be some deep implementation issues. A few years ago, I performance-tuned a medium-sized guile app (https://github.com/MOZI-AI/annotation-scheme/issues/98)  and noticed that code that mixed parameters with call/cc was eating approx half the CPU time. For a job that took hours to run, "half" is a big deal. I vaguely recall that parameters were taking tens of thousands of CPU cycles, or more, to be looked up.

Fluids/parameters need to be extremely fast: dozens of cycles, not tens of thousands.  Recall how this concept works in C/C++:
-- A global variable is stored in the data segment, and a lookup of the current value of that variable is a handful of CPU cycles: it's just at some fixed offset in the data segment.
-- A per-thread global variable is stored in the thread control block (TCB), located at the start of the (per-thread) stack.  So, a few cycles to find the TCB, a few more to compute the offset and maybe do a pointer chase.

Any sort of solution for per-thread storage in scheme, whether fluids or parameters, needs to be no more complex than the above.  The scheme equivalent of the  TCB for the currently running thread needs to be instantly available, and not require some traversal of an a-list or hash table.  The location of the parameterized value should not be more than a couple of pointer-chases away; dereferencing it should not require locks or atomics. It needs to be fast.

It needs to be fast to avoid the conclusions of the earlier-mentioned "Curse of Lisp" essay: most Scheme programmers are going to be smart enough to cook up their own home-grown, thread-safe parameter object, but their home-grown thing will almost surely have mediocre performance. If "E" is going to be an effective interface to the OS, it needs to be fast.  If you can't beat someone's roll-your-own system they cooked up in an afternoon, what's the point?

Conclusion: SRFI 226 should almost surely come with a performance-measurement tool suite that can spit out hard numbers for parameter-object lookups per microsecond while running 12 or 24 threads.  If implementations cannot get these numbers into the many-dozens-per-microsecond range, then something is misconceived in the API.

My apologies, I cannot make any specific, explicit recommendations beyond the above.

-- Linas.

--
You received this message because you are subscribed to the Google Groups "scheme-reports-wg2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scheme-reports-...@googlegroups.com.

Marc Nieper-Wißkirchen

unread,
Mar 19, 2022, 4:24:31 AM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
It should be noted that SRFI 226 parameter objects are not meant as a
replacement for thread locals. These form a different concept; there
will have to be an extra set of procedures for fast TLS. While SRFI
18's thread-specific-set! is certainly some primitive on top of which
TLS could be implemented, it is not sufficient for interoperability
between independent libraries accessing it.

On Sat, Mar 19, 2022 at 6:50 AM Linas Vepstas wrote:

Per Bothner

unread,
Mar 19, 2022, 6:06:18 AM
to scheme-re...@googlegroups.com
On 3/18/22 22:50, Linas Vepstas wrote:
> So, a few cycles to find the TCB, a few more to compute the offset and maybe do a pointer chase.

How does that work when fluids/parameters may be created dynamically?

> Any sort of solution for per-thread storage in scheme, whether fluids or parameters, needs to be no more complex than the above.  The scheme equivalent of the  TCB for the currently running thread needs to be instantly available, and not require some traversal of an a-list or hash table.

I don't think it is realistic to prohibit a hash table look up.

I note that Java ThreadLocal lookups are implemented in a per-thread hash table,
indexed by ThreadLocal. This avoids contention/locks, which is more important than
avoiding a hash-table lookup.
https://stackoverflow.com/questions/1202444/how-is-javas-threadlocal-implemented-under-the-hood

John Cowan

unread,
Mar 19, 2022, 2:57:29 PM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Mar 19, 2022 at 1:50 AM Linas Vepstas <linasv...@gmail.com> wrote:
 
It doesn't seem appropriate to talk about C/C++ here, so I will skip that discussion, except to say: it's all about performance. Flexibility is great but if an API cannot be implemented so it's fast, then it's not a good API.

A counterexample is Earley parsing, which is O(n^3) but allows arbitrary grammars. If you can force your parsing problem into a LALR(1) or LL(k) grammar, then you can go much faster.  But if you don't get to pick the grammar, you simply have to pay the cost.  

In general, optimization isn't about keeping costs low, it's about keeping the cost-to-benefit ratio high.  For example, the most efficient garbage collector is the epsilon GC, which doesn't actually collect any garbage but is quite satisfactory for bounded-length jobs when there is plenty of memory.

For a job that took hours to run, "half" is a big deal. I vaguely recall that parameters were taking tens of thousands of CPU cycles, or more, to be looked up.

It depends on who or what is waiting for it to finish.  Expending a large fraction of a day is not a problem in a job that runs once a week.  Obviously that wasn't the case with your program, but in general the tendency to assume that your problem is everybody's problem has to be resisted in standards work.

Linas Vepstas

unread,
Mar 19, 2022, 3:09:00 PM
to scheme-re...@googlegroups.com
Guile distinguishes between thread-local storage and dynamic state.  Dynamic state is "usually" associated with a single thread, and so it fits the simple-minded idea of per-thread storage. However, it is a bit more: it can be passed around to thunks, exceptions, continuations (that is, it can be reified). Dynamic state is a table of "fluids"; each fluid is a location to store a single value.  By contrast, "true" thread-local fluids cannot be captured in this way; they really are local to that thread, only.   Thus, although dynamic state is usually per-thread, there are unusual cases where it might be shared by several threads (and so entailing concerns about locking, consistency, etc.) But since thread-local storage cannot be reified, those concerns aren't present.

On Sat, Mar 19, 2022 at 5:06 AM Per Bothner <p...@bothner.com> wrote:
On 3/18/22 22:50, Linas Vepstas wrote:
> So, a few cycles to find the TCB, a few more to compute the offset and maybe do a pointer chase.

How does that work when fluids/parameters may be created dynamically?
 
In C/C++, there's nothing dynamic; offsets are known at compile time. Thread-local variables are grouped together in a TLS segment by the linker, and the loader, aka glibc, makes a copy of that segment for each thread that is spawned.

In scheme, the analog of that per-thread segment is the "dynamic state". For each new fluid, you append to the end of it.   In a let/letrec context, you point to that location:   (let ((x some-fluid)) ...) means that x finds the current dynamic state (of which there can only be one, for that thread) and then moves to some fixed offset in that state (as all threads spawned from this one must necessarily have the same fixed location.)

This now highlights a limitation to the reification of dynamic states: You cannot take a dynamic state that has only a handful of fluids in it, and bind it to a blob of code that is expecting lots of stuff in there.   This is similar to (the same as?) the case where you might want to set "the current execution environment".  See, however, below re: java.

When TLS was a brand-new concept, several formal papers, suitable for academic journals, were written, spelling out the details: exactly what the responsibilities of the compiler, linker and loader are, exactly which blocks are located where, what pointers point to what.  These formed the primary reference for the implementations.  Reading through the implementations, there are assorted differences and improvements and extensions.  Finally, there are the C/C++ standards committee documents, which mention none of these implementation details.

I don't know how to map this to the srfi process. The srfis are sort of like the C/C++ standards documents: the programmers read them, but they don't explain how implementations actually accomplish this stuff.   If I read  https://www.gnu.org/software/guile/manual/html_node/Fluids-and-Dynamic-States.html it does not explain "how it works".  It doesn't make any performance guarantees; it doesn't even set performance expectations.  Yet clearly, this is complicated enough that a "reference implementation" probably should explain "how it works", instead of adding a footnote: "read the source, Luke".

Performance guarantees should be set:  there's been discussion of some srfi for constant strings, simply because someone, somewhere perceives that ordinary scheme strings aren't fast enough for some kind of efficient computation.  Now, strings are easy: we all have a pretty clear idea in our heads of what is going on with a string.  Threads, not so much.
 

> Any sort of solution for per-thread storage in scheme, whether fluids or parameters, needs to be no more complex than the above.  The scheme equivalent of the  TCB for the currently running thread needs to be instantly available, and not require some traversal of an a-list or hash table.

I don't think it is realistic to prohibit a hash table look up.

Umm, I'm not convinced hash maps are needed. But then, I have never written a scheme implementation.  Do you use hash maps to perform lookups for continuations? Do you use hash maps for let and let-rec?  Do you use hash maps for other kinds of execution environments?  Libraries? Loadable modules?  If so, then, sure, I guess hash-maps are unavoidable.   If not, then not. 

Now, C/C++ does not have a garbage collector, so they figured out how to do all this stuff using flat tables and offsets, shared-lib glue stubs and loader trampolines.  Java has a garbage collector, and they have a completely different view of the world (apparently involving hash maps). Now, you could say "aha! Scheme has a garbage collector, therefore scheme should take inspiration from java!" but I'm not convinced this is the correct analogy.

--linas


I note that Java ThreadLocal lookups are implemented in a per-thread hash table,
indexed by ThreadLocal.  This avoids contention/locks, which is more important than
avoiding a hash-table lookup.
https://stackoverflow.com/questions/1202444/how-is-javas-threadlocal-implemented-under-the-hood
--
        --Per Bothner
p...@bothner.com   http://per.bothner.com/


Linas Vepstas

unread,
Mar 19, 2022, 3:58:29 PM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
Hi John,

I have no bones to pick, here, but one:


On Sat, Mar 19, 2022 at 1:57 PM John Cowan <co...@ccil.org> wrote:

 in general the tendency to assume that your problem is everybody's problem has to be resisted in standards work.

C/C++ sets the benchmark for performance, and all other programming languages rate themselves against how fast they run compared to C/C++.  Some languages have a charter that explicitly mentions C/C++-like performance: CaML names this as a language design goal.  Some languages were not explicit about it, but have been forced into it by powerful commercial forces: this is Java.  Most languages try to be reasonable, but they're not putting design decisions under a performance microscope. Is this where you want to be with scheme?

What's the charter for scheme, really? "To be popular like python" seems unlikely; even MIT is no longer teaching from SICP.  "To be a laboratory for exploration" seems to be accurate: we have three dozen scheme implementations precisely because it's easy and fun to explore. We have things like racket precisely because part of the fun is exploring "what else the language could be", without jumping ship to join the haskell crowd (which btw has only one implementation not dozens. Two if you count f#)

Standardization says: "we've decided what this should be, we've made up our minds, and we're going to set it in stone."  At this point, the standard should allow for at least one high-performance implementation, even if this is not the simplest or smallest-possible implementation.  It's easy to wave a white flag, throw one's hands in the air, and declare that "hash tables are the solution to all implementation problems". 

I'd like scheme to be as-fast-as-possible, as what else is it offering the world? it's not going to be python or java or rust. It may as well be tight.

--linas

Marc Nieper-Wißkirchen

unread,
Mar 19, 2022, 6:11:37 PM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Mar 19, 2022 at 8:58 PM Linas Vepstas
<linasv...@gmail.com> wrote:

> I'd like scheme to be as-fast-as-possible, as what else is it offering the world? it's not going to be python or java or rust. It may as well be tight.

I think we can agree that we do not wish to standardize APIs in a way
that prevents efficient implementations. But "efficient" is a notion
relative to the abstraction we want to model in each particular case.
If we have a Scheme API that models something like pthread's
thread-local variables, we would want that it can be implemented
roughly as efficiently as the native C API. On the other hand, if the
more correct model for a certain use case would be dynamic state bound
to the current continuation, Scheme can offer an API for this
abstraction (parameter objects, the current parameterization, etc.)
that is not offered by C and which may be more correct in some use cases,
but slower than thread-local storage.

For an API like pthread's thread-local variables, we are presented
with different options for an API. One possible API could allow
general Scheme objects as keys to thread-local variables; another
possible API choice could be to restrict keys to opaque values
returned by some procedure, say, (thread-local-key). The latter API
can more likely directly map to the pthread API, and won't be less
efficient (up to a constant factor) than C; this may not be true for
the former API.

Marc

Linas Vepstas

unread,
Mar 20, 2022, 12:57:02 AM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Mar 19, 2022 at 5:11 PM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:

For an API like pthread's thread-local variables, we are presented
with different options for an API. One possible API could allow
general Scheme objects as keys to thread-local variables; another
possible API choice could be to restrict keys to opaque values
returned by some procedure, say, (thread-local-key). The latter API
can more likely directly map to the pthread API, and won't be less
efficient (up to a constant factor) than C; this may not be true for
the former API.

An opaque key returned by (thread-local-key) is the only thing that makes sense to me.

Allowing an arbitrary scheme object to be a key in a table is ... umm.... it opens the door to some very interesting conversations about databases. I'd rather slam the door shut on that.  (You've already seen my tirades about databases.)

--linas

Marc



John Cowan

unread,
Mar 20, 2022, 6:34:44 PM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Mar 19, 2022 at 3:58 PM Linas Vepstas <linasv...@gmail.com> wrote:
 
Most languages try to be reasonable, but they're not putting design decisions under a performance microscope. Is this where you want to be with scheme?

Those who want performance above all else may move away from C++ someday, but it isn't going to be towards any Lisp.  Lisp, like Basic and many other languages, has an inferiority complex about speed: at some point the most readily available implementation was a slow interpreter, and the whole language got tarred with that brush.

What's the charter for scheme, really? "To be popular like python" seems unlikely; even MIT is no longer teaching from SICP.

Because MIT decided that the comp-sci course every student takes should now be about writing glue rather than data structures and algorithms, a plausible choice in the current environment.  Of course Python is not *especially* for writing glue, it's just that people do in fact write glue in it.  Similarly, Python's popularity is primarily a result of being in the right place (not Japan like Ruby) at the right time (when people were fed up with Perl).  It wasn't designed with popularity in mind, any more than Java was designed with performance in mind (it just so happened that advanced JIT work was being done at the same time in the same company).
I'd like scheme to be as-fast-as-possible, as what else is it offering the world?

Syntax extension, for one thing: the Lisps are tools that provide expert users with a huge amount of expressive power.  Of course, many programmers think expressive power is a Bad Thing, and in clumsy hands it is dangerous.  Scheme, unlike CL, provides multiple styles of syntax extension from most to least safe.

Nala Ginrut

unread,
Mar 20, 2022, 9:18:28 PM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org


On Mon, Mar 21, 2022, 06:34 John Cowan <co...@ccil.org> wrote:


On Sat, Mar 19, 2022 at 3:58 PM Linas Vepstas <linasv...@gmail.com> wrote:
 
Most languages try to be reasonable, but they're not putting design decisions under a performance microscope. Is this where you want to be with scheme?

Those who want performance above all else may move away from C++ someday, but it isn't going to be towards any Lisp.  Lisp, like Basic and many other languages, has an inferiority complex about speed: at some point the most readily available implementation was a slow interpreter, and the whole language got tarred with that brush.


Maybe we need a committee P for performance suggestions. P needn't be mandatory; it would produce a series of rules that implementations could choose to follow. Users could then check which rules an implementation follows to learn about its performance considerations.

Performance could be rated under the P rules, so that people don't have to argue about benchmarks among the various Scheme implementations, and there would be reasonable performance rules for choosing an implementation.

The P rules shouldn't be mandatory, since many Schemes choose to be slow interpreters in exchange for better dynamic features.

We don't have to organize the P discussion in advance. We may discuss things case by case whenever a performance argument arises, and record each conclusion as a rule in P.
This kind of passive working style may save some work.

Comments?

Best regards.


Nala Ginrut

unread,
Mar 20, 2022, 9:43:36 PM
to scheme-re...@googlegroups.com
Maybe committee O for optimization is better. I saw a suggestion for a publication committee, for which P is more suitable.

Marc Nieper-Wißkirchen

unread,
Mar 21, 2022, 2:22:18 AM
to scheme-re...@googlegroups.com
On Mon, Mar 21, 2022 at 2:43 AM Nala Ginrut <nalag...@gmail.com> wrote:
>
> Maybe committee O for optimization is better. I saw a suggestion for publication committee which is suitable for P.

Having a group of people checking other committees' work for
performance pitfalls looks like a helpful idea to me.

Although it would be a lot more work, naming the group I for
Implementation and tasking it with delivering a fast implementation of
the language would be optimal.

Marc Nieper-Wißkirchen

unread,
Mar 21, 2022, 2:28:22 AM
to Marc Feeley, John Cowan, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Mon, Mar 21, 2022 at 5:49 AM Marc Feeley (via srfi-226
list) <srfi...@srfi.schemers.org> wrote:
>
> The SRFI-18 thread-specific and thread-specific-set! procedures are a primitive way of attaching local storage to a thread. It essentially gives you a single cell of local storage, but a larger data structure can be put in that cell to give more storage. That mechanism is low-level but sufficient for the programmer to create his own higher level thread abstraction.
>
> Gambit offers another mechanism that is based on extensible records (i.e. the definition of record subtypes). Threads are essentially records of type “thread”. Subtypes of the thread type can be defined using
>
> (define-type-of-thread foo
> field1
> field2
> …)
>
> If t is a “foo” thread, then (foo-field1 t) and (foo-field1-set! t x) can be used to read and write field1 of that thread. So these fields are a form of thread local storage. Starting a thread is however more involved because the thread needs to be initialized before it is started:
>
> (define t (make-foo 42 …))
>
> (write (foo-field1 t)) ;; prints 42
>
> (thread-init! t (lambda () (write (foo-field1 (current-thread))))
>
> (thread-start! t) ;; will start the thread, and it will print 42
>
> This mechanism gives an O(1) access time for thread local storage, which is hard to beat.

Thank you for sharing this; I will think about whether and how it fits
into SRFI 226.

We have actually two use cases here: Thread-local cells that are known
at thread creation time and thread-local cells that are to be
dynamically allocated. The former would be created via `_Thread_local`
in C11, the latter with `tss_create`. Gambit's mechanism gives an
optimal solution for the first use case. The second way of creating
thread-local cells could be built on the first or could be an extra
mechanism to exploit some performance characteristics of the
underlying platform.

Marc Nieper-Wißkirchen

unread,
Mar 21, 2022, 2:54:57 AM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sun, Mar 20, 2022 at 11:34 PM John Cowan <co...@ccil.org> wrote:

> Because MIT decided that the comp-sci course every student takes should now be about writing glue rather than data structures and algorithms, a plausible choice in the current environment.

Thinking that leads to such decisions is fatal in my opinion.

Nala Ginrut

unread,
Mar 21, 2022, 3:38:28 AM
to scheme-re...@googlegroups.com


On Mon, Mar 21, 2022, 14:22 Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:

Although it would be a lot more work, naming the group I for
Implementation and tasking it with delivering a fast implementation of
the language would be optimal.


Although Scheme keeps forming itself with diversity in mind (expressiveness, flexibility, minimalism, etc.), it's better to have a branch aiming for performance. After all, the benchmark race is always hot among languages, and attractive in the industry.


Dr. Arne Babenhauserheide

unread,
Mar 21, 2022, 11:12:35 AM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, Linas Vepstas

Linas Vepstas <linasv...@gmail.com> writes:

> I'd like scheme to be as-fast-as-possible, as what else is it offering the world? it's not going to be python or java or rust. It may as well be tight.

One thing that impressed me about Scheme is that, while not being totally
about performance, it avoided features that forced using hard to
optimize approaches.

That’s where I see the role of the standard: leave the implementations
to actually produce the high performance code, but take care to make
their job easy.

Making a high-performance Scheme should not require building a huge V8
or SpiderMonkey or Javascript Core system with their massive memory
consumption and bulging complexity. Those are the consequence of
language-design-by-accident. The same goes for Java and the JVM. Yes, it
is awesome what performance they got, but it should not be that hard.

Finding the right limitations for a language to make it easy to optimize
without placing the burden on the programmers is hard; but it’s one of
the things that make Scheme special to me.

I loved it that the discussion about ER and syntax-case was decided on
the basis of “we can only have one, because if we implement one, the
other provably¹ requires more than O(N) for expansion”.

¹: I did not see the proof, though — maybe I should have dug deeper.

Best wishes,
Arne
--
Unpolitisch sein
heißt politisch sein,
ohne es zu merken.
draketo.de

Marc Nieper-Wißkirchen

unread,
Mar 21, 2022, 12:03:58 PM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, Linas Vepstas
On Mon, Mar 21, 2022 at 4:12 PM Dr. Arne Babenhauserheide
<arne...@web.de> wrote:
>
>
> Linas Vepstas <linasv...@gmail.com> writes:
>
> > I'd like scheme to be as-fast-as-possible, as what else is it offering the world? it's not going to be python or java or rust. It may as well be tight.
>
> One thing that impressed me about Scheme is that, while not being totally
> about performance, it avoided features that forced using hard to
> optimize approaches.
>
> That’s where I see the role of the standard: leave the implementations
> to actually produce the high performance code, but take care to make
> their job easy.
>
> Making a high-performance Scheme should not require building a huge V8
> or SpiderMonkey or Javascript Core system with their massive memory
> consumption and bulging complexity. Those are the consequence of
> language-design-by-accident. The same goes for Java and the JVM. Yes, it
> is awesome what performance they got, but it should not be that hard.

I think you make a very good point here.

> Finding the right limitations for a language to make it easy to optimize
> without placing the burden on the programmers is hard; but it’s one of
> the things that make Scheme special to me.
>
> I loved it that the discussion about ER and syntax-case was decided on
> the basis of “we can only have one, because if we implement one, the
> other provably¹ requires more than O(N) for expansion”.
>
> ¹: I did not see the proof, though — maybe I should have dug deeper.

Our proof is currently not a mathematical proof but proof by lack of
falsification. :)

A formal argument would be based on that syntax-case requires the
identifiers in the output of a macro transformer to be marked and ER
requires fully unwrapped syntax objects as the input to a macro
transformer.

There are, of course, loopholes if you overload car and cdr so that
car and cdr transparently unwrap syntax objects. But then you probably
slow down all list operations because car and cdr would need to
dispatch on the type of their arguments.

Nala Ginrut

unread,
Mar 21, 2022, 12:16:12 PM
to scheme-re...@googlegroups.com
Back to the topic of this thread, I think futures are more modern than a thread API.
A future is essentially a managed thread: it hides the low-level details (spawn, join, scheduling), people can check its status to confirm whether the work is done, and it can return a result, which is a missing feature in the traditional thread model.

Can we start from there? 

On Fri, Mar 18, 2022, 15:41 Lassi Kortela <la...@lassi.io> wrote:
> Maybe a high-level API for this can be provided with futures

Are any implementers on board with futures? Multi-threading is hard. We
shouldn't add any thread stuff to (any part of) RnRS that isn't portable
and battle-tested.

The choices currently supported by more than one implementation are:

- SRFI 18
- Some adaptation of Concurrent ML (fibers, etc.)
- any others?



Marc Nieper-Wißkirchen

unread,
Mar 21, 2022, 12:22:35 PM3/21/22
to scheme-re...@googlegroups.com
Am Mo., 21. März 2022 um 17:16 Uhr schrieb Nala Ginrut <nalag...@gmail.com>:
>
> Back to the topic of this thread, I think futures is more modern than thread API.
> The futures is actually the managed thread model, and hide low-level (spawn, join, schedule), people can check its status to confirm if the work was done, and the futures can return a result which is missing feature in the traditional thread model.

SRFI 18/226 threads do report results, so there's no missing feature.

It possibly makes sense to discuss a future API in committee B unless
F abandons threads altogether and replaces them with futures covering
the lower-level details. I'm not yet sure about the latter because
SRFI 18 is well-known and has seen plenty of implementations while a
future API will be something necessarily new.

Martin Rodgers

unread,
Mar 21, 2022, 12:48:49 PM3/21/22
to scheme-re...@googlegroups.com
On 21/03/2022, Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
> Am Mo., 21. März 2022 um 16:12 Uhr schrieb Dr. Arne Babenhauserheide
> <arne...@web.de>:

> There are, of course, loopholes if you overload car and cdr so that
> car and cdr transparently unwrap syntax objects. But then you probably
> slow down all list operations because car and cdr would need to
> dispatch on the type of their arguments.

I've found that by using a single record type for the concrete syntax
tree, and using a symbol as the discriminator, dispatching can be
reduced to O(1) complexity. I have a define-datatype/cases
implementation that uses this technique. Adding wraps to this
data type system should add no further Big-O complexity.
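
A minimal sketch of that representation, with invented names (assuming only R7RS-small define-record-type and case):

```scheme
;; Sketch: one record type for every concrete-syntax-tree node,
;; discriminated by a symbol.  All names are invented for illustration.
(define-record-type <cst>
  (make-cst tag fields)
  cst?
  (tag cst-tag)
  (fields cst-fields))

(define (walk node)
  ;; Dispatch on the symbol tag; replacing the symbols with fixnum
  ;; tags would allow a jump table, as the message suggests.
  (case (cst-tag node)
    ((pair)       'visit-pair)
    ((identifier) 'visit-identifier)
    ((literal)    'visit-literal)
    (else (error "unknown CST node" node))))
```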

Clearly the symbols could be replaced with fixnum integers for faster
implementation. A native data type for define-datatype/cases could
also remove the tagging/untagging overhead. Similar support could be
done at the syntax level. My implementation uses nothing more than
R7RS Small provides. However, an earlier implementation used
syntax-case and R6RS.

I believe this could be generalised to any R6RS/R7RS system. I tested
my define-datatype/cases code in 4 R6RS systems in 2008. I rewrote it
using R7RS syntax-rules in 2019 and tested it with Larceny and Chibi.

BTW, I also had an R6RS "matching" tool similar to syntax-case for my
concrete syntax datatype. In 2019 I found this too could be translated
to R7RS, with some reduced functionality of course. It only matches at
one level, but still supports guard tests. I've used it extensively.
Yes, it uses an expose procedure under the syntax. So my concrete
syntax "pairs" are translated to Scheme pairs with O(1) complexity. My
define-datatype constructors also use type guards. I discovered the
value of this in 2008. The performance impact can be eliminated with
cond-expand, using the guards only when the debug feature is
present.

My tests so far (2019-today) suggest no performance issues when
compiled by Larceny. Chibi performs less well, but gives acceptable
performance for unit testing and bootstrapping. I suspect most of the
time is spent in my parser, which uses read-char and peek-char
extensively. Larceny outperforms Chibi similarly (or much worse) on
the Gabriel benchmarks, so I'm not concerned by this. I've not yet
tested Guile's performance.

I hope this is of some small interest. For all I know, this may be
obvious to everyone here. The Big-O complexity of syntax-rules is
covered in the paper by Clinger and Rees, after all. Rees has
commented on the size variations of implementations. I'd say mine
tends toward the larger end of his spectrum. Perhaps this says more
about my coding style. So your mileage may vary.

So, stating the (by now, I hope) painfully obvious, list operations
for a concrete syntax datatype can perform with the same time and
space complexity as their counterparts on Scheme pairs. ;) Little-O
cost is another matter. I find the cost is too small to measure
accurately. Again, YMMV.

Taylan Kammer

unread,
Mar 23, 2022, 10:19:08 AM3/23/22
to scheme-re...@googlegroups.com, Dr. Arne Babenhauserheide, srfi...@srfi.schemers.org, Linas Vepstas
A big +1 on this.

I see Scheme as being in a very interesting place with regard to the
question of "very high-level" languages like Python vs. "lower-level"
languages like C.

On one hand we have incredible things like hygienic macros with pattern
matching, procedural macros, delimited continuations (not standardized
yet but still), correct maths by default, and so on, yet on the other
hand a lot of these things are defined in such a way that they map to
lower level concepts and don't necessarily hinder (relatively) efficient
implementation, at least if your implementation uses some key optimization
strategies.

Macros are inherently compile-time. Use of continuations can be turned
into a simple jump instruction after rudimentary static analysis, tricks
for storing small numbers ("fixnums") on the stack are well-known, and
the language doesn't force dynamic dispatch on data structures since you
can use type-specific accessors like vector-ref. We even have a bunch
of type-specific equality procedures, plus eq?, eqv?, and equal?.

I'd say Scheme is "high-level in all the right places." :-)

--
Taylan

Marc Nieper-Wißkirchen

unread,
Mar 23, 2022, 10:28:18 AM3/23/22
to scheme-re...@googlegroups.com, Dr. Arne Babenhauserheide, srfi...@srfi.schemers.org, Linas Vepstas
Am Mi., 23. März 2022 um 15:19 Uhr schrieb Taylan Kammer
<taylan...@gmail.com>:

> Macros are inherently compile-time. Use of continuations can be turned
> into a simple jump instruction after rudimentary static analysis, tricks
> for storing small numbers ("fixnums") on the stack are well-known, and
> the language doesn't force dynamic dispatch on data structures since you
> can use type-specific accessors like vector-ref. We even have a bunch
> of type-specific equality procedures, plus eq?, eqv?, and equal?.
>
> I'd say Scheme is "high-level in all the right places." :-)

What Scheme does not yet have, though, is an efficient and usable
idiom to write generic algorithms à la those in the C++ template
library.

Some Schemes like Guile experiment with generic methods, but these
actually have a runtime overhead that shouldn't be imposed on the
program (and would be a counter-example to your general statement).
Then we have SRFI 225 using something like runtime type classes. This
will be faster than generics, but there's still some indirection
overhead compared to C++ and I am far from being sure that SRFI 225 is
specified in a way that makes inlining achievable by optimizing
compilers.

I think the right abstraction hasn't been found for Scheme yet. For
zero-time overhead, it will probably have to be based on syntactic
abstraction.

Nala Ginrut

unread,
Mar 23, 2022, 11:12:26 AM3/23/22
to scheme-re...@googlegroups.com
On Tue, Mar 22, 2022 at 12:22 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
SRFI 18/226 threads do report results so there's no missing feature.


I've checked SRFI-18: thread-join! does return the end-result. So yes.
Here are my points on the issues listed in SRFI-226:
The features other than threads in SRFI 18 should be dropped.
For historical reasons, SRFI 18 added extra features as the basis of its threads, and some of these features are redundant in R7RS, say, with-exception-handler.

We need to decide whether we should make a modern threads API like futures in R7RS-large, or just borrow the SRFI-18 threads API.
Or add both.
Here is what futures need:
1. a managed thread pool
2. a status (running, returned, etc.), which is missing from SRFI-18
3. a result (already in SRFI-18)
4. thread-terminate! (I vote to keep thread-terminate!, as my answer to the question raised in SRFI-226).
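
As a rough sketch of how such a future could be layered on SRFI 18 threads (make-future, future-status, and future-touch are invented names; there is no thread pool or error handling here, and this assumes (scheme base) and (srfi 18)):

```scheme
;; Sketch only: a future built on SRFI 18 threads.
(define-record-type <future>
  (%make-future thread status-box)
  future?
  (thread future-thread)
  (status-box future-status-box))

(define (make-future thunk)
  (let* ((status (vector 'running))     ; mutable one-slot status cell
         (thread (make-thread
                  (lambda ()
                    (let ((result (thunk)))
                      (vector-set! status 0 'returned)
                      result)))))
    (thread-start! thread)
    (%make-future thread status)))

;; Reading the status outside a synchronization point is only advisory,
;; which is exactly the semantic question raised below.
(define (future-status f) (vector-ref (future-status-box f) 0))

(define (future-touch f)                ; block until done, yield result
  (thread-join! (future-thread f)))
```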

 
It possibly makes sense to discuss a future API in committee B unless
F abandons threads altogether and replaces them with futures covering
the lower-level details. I'm not yet sure about the latter because
SRFI 18 is well-known and has seen plenty of implementations while a
future API will be something necessarily new.


I think SRFI-18 provides the most useful features; if possible, we can build more advanced features on top of it.


Best regards.

 

Dr. Arne Babenhauserheide

unread,
Mar 23, 2022, 11:28:36 AM3/23/22
to Marc Nieper-Wißkirchen, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, Linas Vepstas

Marc Nieper-Wißkirchen <marc....@gmail.com> writes:
> What Scheme does not yet have, though, is an efficient and usable
> idiom to write generic algorithms à la those in the C++ template
> library.

Having waited a lot for C++ compiles, I do not consider these efficient.
They *are* runtime efficient, but they have a huge compile-time cost.

Also because compile-time can be during runtime for Scheme (for example
when you use a macro in a lambda), the optimal idiom will likely be
different from that of C++.

> I think the right abstraction hasn't been found for Scheme yet.

I agree, but don’t really know a good solution.

I’d love to have one, though, because I miss the ergonomics of generic
functions I used to use in Python.

Maybe something could be done with pulling inner type-checks to outer
scope, so you don’t get dynamic dispatch on single procedures (maybe in
a tight loop) but on larger codeblocks (so the tight loop itself would
use the optimized procedures directly)?
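
A sketch of that idea, with invented names: dispatch once per call instead of once per element, so the tight loop itself is monomorphic (assuming only (scheme base)):

```scheme
;; Sketch: hoist the type check out of the loop.  sum-elements is an
;; invented example, not a proposed API.
(define (sum-elements seq)
  (cond
    ((vector? seq)              ; one dispatch for the whole block ...
     (let loop ((i 0) (acc 0))
       (if (= i (vector-length seq))
           acc
           ;; ... so the loop uses the type-specific accessor directly.
           (loop (+ i 1) (+ acc (vector-ref seq i))))))
    ((list? seq)
     (let loop ((xs seq) (acc 0))
       (if (null? xs) acc (loop (cdr xs) (+ acc (car xs))))))
    (else (error "unsupported sequence type" seq))))
```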

Marc Nieper-Wißkirchen

unread,
Mar 23, 2022, 11:29:11 AM3/23/22
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
Am Mi., 23. März 2022 um 16:12 Uhr schrieb Nala Ginrut <nalag...@gmail.com>:

Crossposting this to the SRFI 226 mailing list.

> On Tue, Mar 22, 2022 at 12:22 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
>>
>> SRFI 18/226 threads do report results so there's no missing feature.
>>
>
> I've checked SRFI-18, there's end-result. So yes.
> Here are my points on the issues listed in SRFI-226:
> The features other than threads in 18 should be dropped.
> For historical reasons, 18 had added extra features as the base of the threads, and some of the features are redundant in R7RS, say, with-exception-handler.

These are not completely irrelevant for SRFI 226 (or SRFI 18), for
example, because R[67]RS do not specify the initial exception handler
for threads nor give access to the current exception handler.

> We need to decide if we should make a modern threads API like futures in R7RS-large, or just borrow the SRFI-18 threads API.
> Or add them both.
> Here're what futures need:
> 1. managed thread pool

Wouldn't this be part of the low-level implementation underlying the
threads API of SRFI 18/226?

> 2. status (running, returned, etc) which is missing in SRFI-18

This could be added to SRFI 226 if it is clear what the "current
status" means. What are the semantics outside of synchronization
points?

> 3. result (already in SRFI-18)
> 4. thread-terminate! (I vote to keep thread-terminate! as the response to the issued question in SRFI-226).

I'm also inclined to leave it in. To avoid the corruption of internal
data structures, thread-terminate! won't immediately terminate a
thread (what that means isn't well-defined anyway, see my comment
about the status above), so there's no guarantee that it is
implemented through pthread_kill, say. Instead, all that can be said
is that a thread may be terminated at any point and that certain
synchronizing procedures like those locking a mutex or waiting for a
condition variable won't return if a termination request is pending.

>> It possibly makes sense to discuss a future API in committee B unless
>> F abandons threads altogether and replaces them with futures covering
>> the lower-level details. I'm not yet sure about the latter because
>> SRFI 18 is well-known and has seen plenty of implementations while a
>> future API will be something necessarily new.
>>
>
> I think SRFI-18 provides the most useful features, if any possible, we can make some advanced features based on it.

Thank you for your input and independent valuation.

Marc Nieper-Wißkirchen

unread,
Mar 23, 2022, 11:33:59 AM3/23/22
to Dr. Arne Babenhauserheide, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, Linas Vepstas
Am Mi., 23. März 2022 um 16:28 Uhr schrieb Dr. Arne Babenhauserheide
<arne...@web.de>:
>
>
> Marc Nieper-Wißkirchen <marc....@gmail.com> writes:
> > What Scheme does not yet have, though, is an efficient and usable
> > idiom to write generic algorithms à la those in the C++ template
> > library.
>
> Having waited a lot for C++ compiles, I do not consider these efficient.
> They *are* runtime efficient, but they have a huge compile-time cost.

The long compilation times of C++ compilers are largely due to the
fact that there was no module system prior to C++20, meaning that in a
typical program millions of lines of C++ code have to be processed.

> Also because compile-time can be during runtime for Scheme (for example
> when you use a macro in a lamda), the optimal idiom will likely be
> different than C++.

I didn't necessarily want to advertise the C++ idiom (which is based
on its static type system, anyway). This is just the benchmark
efficiency-wise. :)

I don't understand your comment about a macro in a lambda. Had you
mentioned the eval procedure, I'd understood.

Martin Rodgers

unread,
Mar 23, 2022, 12:05:01 PM3/23/22
to scheme-re...@googlegroups.com
On 23/03/2022, Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
> Am Mi., 23. März 2022 um 15:19 Uhr schrieb Taylan Kammer
> <taylan...@gmail.com>:

> I think the right abstraction hasn't been found for Scheme yet. For
> zero-time overhead, it will probably have to be based on syntactic
> abstraction.

I agree about both your points. The existing syntactic systems for
this kind of abstraction may at least be useful proofs that this can
be done efficiently in Scheme. (I can't comment on C++ templates.)

For example, Chez Scheme's define-interface (also discussed in
section 3.4.6 of Oscar Waddell's PhD thesis) suggests one possible way
of doing this.

What other examples do we have in the Scheme world? Scheme 48's
define-structure?

Could anything like these abstractions be portably defined in R7RS
using syntax-case? If not, what else might be needed? Waddell-style
modules?

These just seem like the obvious questions to ask here. Thanks for
making some important points.

Marc Nieper-Wißkirchen

unread,
Mar 23, 2022, 12:19:18 PM3/23/22
to scheme-re...@googlegroups.com
I implemented Chez's first-class modules in Unsyntax. These cannot be
implemented on top of syntax-case, but that is not the only reason they
are a good addition to the Foundations language. They can also be used
to interpret the static top-level library declarations in more
foundational terms, reducing the number of core concepts needed.

However, whether an implementation uses first-class modules (which can
be used to build ML functors) or an equivalent concept built with only
syntax-case, the problem of how to prevent code bloat remains. The
macro expander is normally stateless, meaning that if I request the
same specialization twice, I will usually end up with the code twice.
(There is a similar problem with C++ templates, but C++ compilers have
become sophisticated enough to merge equal template instantiations;
as Arne wrote above, though, we would want Scheme to work well with
less sophisticated compilers as well.)

I think this needs to be investigated further.

Martin Rodgers

unread,
Mar 23, 2022, 1:07:22 PM3/23/22
to scheme-re...@googlegroups.com
On 23/03/2022, Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:

> I implemented Chez's first-class modules in Unsyntax. These cannot be
> implemented on top of syntax-case. Not only because of that they are a
> good addition to the Foundations language. They can also be used to
> interpret the static top-level library declarations in more
> foundational terms, reducing the number of core concepts needed.

Agreed.

> However, whether an implementation uses first-class modules (using to
> build ML functors) or uses an equivalent concept built with only
> syntax-case, the problem of how to prevent code bloat remains. The
> macro expander is normally stateless meaning that if I request the
> same specialization twice, I will usually end up with the code twice
> (there is a similar problem with C++ templates but C++ compilers have
> become sophisticated enough to merge equal template instantiations -
> but as Arne wrote above, we would want Scheme to work with well with
> less sophisticated compilers as well).

While Waddell addresses this in the third section of his thesis, that
might not be a practical solution for all Scheme systems. ISTR that
linkers have addressed this issue in C++. Scheme systems that compile
to code using the same or similar linkers might also reduce code
bloat. This may be a heavy burden for some Scheme implementations.

However, R7RS supports subsets of the language, so I think this need
not necessarily be an obstacle for *all* implementations. It may well
be possible, if not simple. We have Chez and S48 as proofs that
*something* works, but how much can they help with a practical R7RS
system? I don't know.

> I think this needs to be investigated further.

Agreed.

Jakob Wuhrer

unread,
Mar 23, 2022, 2:28:21 PM3/23/22
to scheme-re...@googlegroups.com
Marc Nieper-Wißkirchen <marc....@gmail.com> writes:
> What Scheme does not yet have, though, is an efficient and usable
> idiom to write generic algorithms à la those in the C++ template
> library.
>
> ...
>
> I think the right abstraction hasn't been found for Scheme yet. For
> zero-time overhead, it will probably have to be based on syntactic
> abstraction.

Just out of curiosity: what would such an abstraction (need to) offer
compared to passing around functions or records containing functions?

Marc Nieper-Wißkirchen

unread,
Mar 23, 2022, 3:36:51 PM3/23/22
to scheme-re...@googlegroups.com
Am Mi., 23. März 2022 um 19:28 Uhr schrieb Jakob Wuhrer
<jakob....@gmail.com>:
In order not to punish a clear programming style, an idiom like

(map f ls)

must not be less efficient than a hand-coded loop. A Scheme compiler
can achieve this by inlining the map procedure. Thanks to the
immutability of library imports, this is possible because the compiler
can know the binding of map statically at the place of the above
expression.

Now suppose we have a generic sequence library. The above call could become

(seq-map sto f ls)

where sto is a "sequence-type object" analogous to SRFI 225's DTO's.
In order to be able to inline this, the compiler would need to know
statically which procedures are stored in sto. If directly imported
from a library, this could work, but if the above invocation of
seq-map is part of a larger generic algorithm in form of a procedure
that takes an sto as an argument and this procedure is too large to be
considered inlineable by the compiler, the binding of sto won't be
statically known.
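
A sketch of the situation (seq-map, the sto record layout, and its accessors are my own illustrative names, loosely modeled on SRFI 225's DTOs):

```scheme
;; Hypothetical "sequence-type object": a record of type-specific
;; operations.  All names are invented for illustration.
(define-record-type <sto>
  (make-sto ref length)
  sto?
  (ref sto-ref)        ; (ref seq i) -> element
  (length sto-length)) ; (length seq) -> exact integer

(define vector-sto (make-sto vector-ref vector-length))

(define (seq-map sto f seq)
  ;; Indirect calls through sto: unless the compiler knows the binding
  ;; of sto statically at the call site, ref cannot be inlined here.
  (let ((ref (sto-ref sto))
        (len ((sto-length sto) seq)))
    (let loop ((i 0) (acc '()))
      (if (= i len)
          (reverse acc)
          (loop (+ i 1) (cons (f (ref seq i)) acc))))))
```

For instance, (seq-map vector-sto (lambda (x) (* x x)) #(1 2 3)) evaluates to (1 4 9), but the per-element calls remain indirect whenever sto arrives as an argument of a larger, non-inlined generic procedure.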

Does this answer your question?

Dr. Arne Babenhauserheide

unread,
Mar 23, 2022, 6:06:01 PM3/23/22
to Marc Nieper-Wißkirchen, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, Linas Vepstas

Marc Nieper-Wißkirchen <marc....@gmail.com> writes:

> I don't understand your comment about a macro in a lambda. Had you
> mentioned the eval procedure, I'd understood.

You can’t, eval would have been correct … sorry, and thank you for
taking it up constructively!

Nala Ginrut

unread,
Mar 26, 2022, 12:28:41 AM3/26/22
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Wed, Mar 23, 2022 at 11:29 PM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
>
> I think SRFI-18 provides the most useful features, if any possible, we can make some advanced features based on it.

Thank you for your input and independent valuation.



How can we move forward on this topic (threads/futures)? Do we have a standard process for acting on the conclusions of the discussion so far?
Then people could take part in the further API definition work.

Best regards.
 

Marc Nieper-Wißkirchen

unread,
Mar 26, 2022, 3:28:16 AM3/26/22
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
SRFI 226 will eventually be finalized based on the input here. If
Committee F decides so, it will be integrated into the Foundations
where it may then be amended. A library with futures compatible with
SRFI 226 is probably something that should be specified as a SRFI and
then discussed as part of the batteries.

Linas Vepstas

unread,
Mar 27, 2022, 12:12:16 PM3/27/22
to Taylan Kammer, scheme-re...@googlegroups.com, Dr. Arne Babenhauserheide, srfi...@srfi.schemers.org
Hi Taylan,

On Wed, Mar 23, 2022 at 9:19 AM Taylan Kammer <taylan...@gmail.com> wrote:

Macros are inherently compile-time.

I've been trying to drum up interest in macros that are *not* compile-time, but rather can be applied repeatedly, at run-time (at any time). Systems that have this ability are often called "rule engines", and the macros are called "rules", and sometimes more specifically, "rewrite rules" (if focused on term rewriting), "inference rules" (if focused on theorem proving), "business rules" (if focused on bureaucratic systems management), or "parse rules" (if focused on syntax and semantic extraction), "stimulus-response rules" (if focused on chatbots), "control logic" (if focused on automated systems control, e.g. flight control or robot control), etc.

I admit that a runtime macro system for scheme-as-a-programming-language may be a bridge too far. Certainly, no other programming language has such a thing.  And yet there are hundreds of software systems with inbuilt rule engines, and all of them are terrible, utterly unusable outside of their narrowest domains.  My personal experience is that a scheme-like macro system seems to be the correct generalization of all of these systems.  But it has to be run-time, not compile time. Rules can be triggered repeatedly, instead of triggering only once and being retired.

At any rate, yes, this is perhaps rather far off-topic for this working group.

--linas


Linas Vepstas

unread,
Mar 27, 2022, 12:28:19 PM3/27/22
to Marc Nieper-Wißkirchen, scheme-re...@googlegroups.com, Dr. Arne Babenhauserheide, srfi...@srfi.schemers.org
On Wed, Mar 23, 2022 at 9:28 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:

What Scheme does not yet have, though, is an efficient and usable
idiom to write generic algorithms à la those in the C++ template
library.

Yes. +1 to this. My dirty secret is that I am a low-brow, unimaginative scheme programmer, and use srfi-1 almost exclusively. I don't use scheme vectors because they don't seem to be worth the bother. I use a-lists fairly often, but half the time I use my own home-grown a-lists instead of standard ones. I've used scheme hash tables in the two or three places where I had to have performance, but the trouble rarely seems worth it.  Never used generators because I've never had a problem that they solved.  This is in sharp contrast to C++, where I use all of these things, easily and naturally, because they are convenient and simple.

Off-topic, but cppreference.com is much easier to navigate than srfi.schemers.org 

-- linas

Linas Vepstas

unread,
Mar 27, 2022, 1:10:22 PM3/27/22
to scheme-re...@googlegroups.com


On Wed, Mar 23, 2022 at 11:19 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:


However, whether an implementation uses first-class modules (using to
build ML functors) or uses an equivalent concept built with only
syntax-case, the problem of how to prevent code bloat remains. The
macro expander is normally stateless meaning that if I request the
same specialization twice, I will usually end up with the code twice

ML caught my eye. Please be careful about engineering compile-time-only concepts.  When I first dove into CaML, I thought to myself "Ahh! At last! A full-fledged type system (with functors and all the other goodies)!" and very rapidly came to a full halt as I realized that the type system was available **only at compile time** ! Ouch! This makes it unusable, as far as I'm concerned; and when I'm in a bad mood, I will blurt out openly hostile remarks about it.

Compare this to C++, where ideas like dynamic casting (run-time type casting) have taken decades to evolve, but the result is a C++ slowly morphing into a first-class-everything system (when I'm in a bad mood, I say that C++ is re-inventing scheme, badly.)   I'm not sure, but some of the latest proposals (that I do not understand) make me think that C++ might even be slowly morphing into an ML-like system, without the compile-time-only lunacy of ML. 

So, as to syntax-case and all of the rest of the scheme macro system: it's static, hard-wired, compile-time-only, which makes it pretty much totally and utterly useless and unusable for solving any kind of real-world problems.  There's little benefit in systems that are ham-strung by misguided "compile-time-only" mandates.  Make it dynamic and programmable.  Don't make it static and hard-wired.  Don't build yet more scheme infrastructure on top of the macro system. In that direction lies only a dead-end of quick-set concrete.

--linas

Marc Nieper-Wißkirchen

unread,
Mar 27, 2022, 3:33:43 PM3/27/22
to scheme-re...@googlegroups.com
Am So., 27. März 2022 um 19:10 Uhr schrieb Linas Vepstas
<linasv...@gmail.com>:

> ML caught my eye. Please be careful about engineering compile-time-only concepts. When I first dove into CaML, I thought to myself "Ahh! At last! A full-fledged type system (with functors and all the other goodies)!" and very rapidly came to a full halt as I realized that the type system was available **only at compile time** ! Ouch! This makes it unusable, as far as I'm concerned; and when I'm in a bad mood, I will blurt out openly hostile remarks about it.
>
> Compare this to C++, where ideas like dynamic casting (run-time type casting) have taken decades to evolve, but the result is a C++ slowly morphing into a first-class-everything system (when I'm in a bad mood, I say that C++ is re-inventing scheme, badly.) I'm not sure, but some of the latest proposals (that I do not understand) make me think that C++ might even be slowly morphing into an ML-like system, without the compile-time-only lunacy of ML.
>
> So, as to syntax-case and all of the rest of the scheme macro system: its static, hard-wired, compile-time-only, which makes it pretty much totally and utterly useless and unusable for solving any kind of real-world problems. There's little benefit in systems that are ham-strung by mis-guided "compile-time-only" mandates. Make it dynamic and programmable. Don't make it static and hard-wired. Don't build yet more scheme infrastructure on top of the macro system. In that direction lies only a dead-end of quick-set concrete.

I'm not quite sure what you are getting at. Running a Scheme program
means evaluating it. Macros allow the definition of special forms, so
a language with macros is more powerful than without. Like any other
system, also macros can be abused. More expressive than macros are
fexprs (macros are a subset of fexprs) but they make reasoning about a
program impossible in general. In Scheme, you always have eval if you
need to run the evaluator at runtime.

For example, assume we are doing language parsing. We express the
grammar in form of an s-expression. The parser generator (our
Scheme-equivalent of Yacc) can be written in two versions:

(1) It is a procedure f that takes the grammar in form of a datum as
an argument and returns a parsing procedure.
(2) It is a macro m that takes the grammar in form of a syntax object
as an argument and expands into an expression evaluating to a parsing
procedure.

Given f, you can define m by

(define-syntax m
  (lambda (stx)
    (syntax-case stx ()
      ((_ grammar)
       #'(f 'grammar)))))

Given m, you can define f by

(define f
  (lambda (grammar)
    (eval `(m ,grammar))))

This example is, of course, a bit simplified. For example, I haven't
specified an environment to eval. Moreover, grammar rules usually have
actions that we want to code as Scheme procedures or Scheme syntax and
not as a Scheme datum.

Whether (1) or (2) should be the primary API depends a lot on how the
parser will usually be used. If the grammar is usually fixed per
program, (2) seems to be the most natural and is closest to how Yacc
works. If eval is slow in your implementation (or produces slow code),
(1) is the better approach if you need all generality (and you don't
need syntactic sugar for grammar action rules). But even if you choose
(1), it makes sense to pass syntax objects and not datums to your
procedure because most grammars will be hard-coded in the program file
so having line numbers available will help in debugging. But this is a
minor detail not related to the actual topic.

Linas Vepstas

unread,
Mar 27, 2022, 10:03:40 PM3/27/22
to scheme-re...@googlegroups.com
Hi Marc,

On Sun, Mar 27, 2022 at 2:33 PM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
Am So., 27. März 2022 um 19:10 Uhr schrieb Linas Vepstas
<linasv...@gmail.com>:

> ML caught my eye. Please be careful about engineering compile-time-only concepts.  When I first dove into CaML, I thought to myself "Ahh! At last! A full-fledged type system (with functors and all the other goodies)!" and very rapidly came to a full halt as I realized that the type system was available **only at compile time** ! Ouch! This makes it unusable, as far as I'm concerned; and when I'm in a bad mood, I will blurt out openly hostile remarks about it.
>
> Compare this to C++, where ideas like dynamic casting (run-time type casting) have taken decades to evolve, but the result is a C++ slowly morphing into a first-class-everything system (when I'm in a bad mood, I say that C++ is re-inventing scheme, badly.)   I'm not sure, but some of the latest proposals (that I do not understand) make me think that C++ might even be slowly morphing into an ML-like system, without the compile-time-only lunacy of ML.
>
> So, as to syntax-case and all of the rest of the scheme macro system: its static, hard-wired, compile-time-only, which makes it pretty much totally and utterly useless and unusable for solving any kind of real-world problems.  There's little benefit in systems that are ham-strung by mis-guided "compile-time-only" mandates.  Make it dynamic and programmable.  Don't make it static and hard-wired.  Don't build yet more scheme infrastructure on top of the macro system. In that direction lies only a dead-end of quick-set concrete.

I'm not quite sure where you are getting at. Running a Scheme program
means evaluating it.

Thanks for replying! You realize, of course, that, above, I descended into a bit of a tirade, so, yes, it was less than clear. I'll try to be plainer, but still hand-wavy.

Lists are data structures, records are data structures.  So are a-lists and dictionaries and objects (and there is a rainbow of different kinds of objects; not just srfi-100 and its ilk.)

Somehow overlooked is that s-expressions are data structures. 

Valid scheme programs are just subsets of all possible s-expressions.

The point is that "macros", or, more  generally "syntactic systems" can be applied to sexprs in general; to  sexprs that aren't runnable/evaluatable/executable scheme programs.

You are comparing scheme macros to yacc, and I suppose that is a fair comparison: yacc is a good example of the kind of system that lacks the flexibility and expressive power to be useful in a rewriting system.  You can run yacc once, and then you hit a brick wall. You can't run it twice, never mind invoke it recursively.  You can't write yacc rules that selectively act on other yacc rules to transform yacc expressions into other yacc expressions.  Yet this is exactly the kind of expressive ability needed for rewriting.

A few concrete examples, perhaps. One is the so-called "office furniture requisition order". Created by accounting, it is instantiated by an employee. It is amended by the manager (adding approvals) and by accounting (adding account numbers), and then processed by purchasing, which removes employee and manager names and other confidential information, adds shipping instructions, and delivers it to a vendor. Back in 1995, at the very dawn of the WWW, such a system was open-sourced by NASA (yes, the space agency) and it used lex & yacc to do this. It was quite amazingly powerful: you could write rules not only to buy office furniture, but you could write bug trackers with it, and asset managers and decision-making systems, and create bureaucracies of the most complicated kind. I used it to prototype an oil-field asset tracking system for a client (alas, a waste of my talents, but very educational.) You've never heard of this system because it died in obscurity: the large commercial competition was offering drag-n-drop GUI systems that were a zillion times easier than crafting lex & yacc rules ... they also could mobilize platoons of suits to clean up any issues that arose. All at the low, low price of 10x of what I was making.

Perhaps you could create such a system using srfi-9 records and some kind of clever abuse of syntax-case, but it hurts my head to imagine how. In the decades since that time, I've learned that s-expressions are more powerful and flexible than records, and that rewriting sexprs is more flexible and powerful than writing SQL 'select into' queries. Under certain very special circumstances, it does make sense to have "evaluatable" sexprs, and yes, in that special case, fexprs seem like the correct abstraction (this is not obvious; I learned it from this mailing list).

If you look at macros as a compile-time-only system to perform rewrites that generate executable scheme programs, then, yes, I guess macros do an admirably good job at that. But if you want the ability to dispatch a pattern matcher at arbitrary points during execution (say, when that chair purchase request form hits the manager's desk), I don't see that this is possible in the current system.

My apologies; I feel like I am hijacking the WG2 conversation to go in some completely foreign direction, and so will attempt to bite my tongue. It's just that whenever I see compile-time-only ideas, I feel compelled to be contrarian, and say "no, it must be dynamic and run-time".

--linas

--
You received this message because you are subscribed to the Google Groups "scheme-reports-wg2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scheme-reports-...@googlegroups.com.


--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
 

Amirouche Boubekki

Mar 28, 2022, 8:57:44 AM
to scheme-re...@googlegroups.com
On Mon, Mar 28, 2022 at 4:03 AM Linas Vepstas <linasv...@gmail.com> wrote:
>
> [...]
>
> If you look at macros as a compile-time only system to perform rewrites to generate executable scheme programs, then, yes, I guess macros do an admirably good job at that. But if you want the ability to dispatch a pattern matcher at arbitrary points during execution (say, when that chair purchase request form hits the manager's desk) I don't see that this is possible in the current system.

If I understand the idea stated by Linas correctly: given a problem,
an algorithm, and a program that implements that algorithm, optimize
its performance using knowledge gathered at runtime. That is
already possible with runtime compilation, also called a manual JIT.
With Chez Scheme, it can be done with: (eval `(lambda ...))

ref: https://m00natic.github.io/lisp/manual-jit.html
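
A minimal sketch of what such a "manual JIT" looks like; the `make-adder` name and the example itself are invented for illustration, assuming Chez Scheme's `eval` with an explicit environment argument:

```scheme
;; Build a specialized procedure at run time from a value known only at
;; run time, by constructing an s-expression and eval'ing it.
(define (make-adder n)                 ; hypothetical helper
  (eval `(lambda (x) (+ x ,n))
        (interaction-environment)))

(define add5 (make-adder 5))
(add5 3)                               ; => 8
```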

Marc Nieper-Wißkirchen

Mar 28, 2022, 8:58:20 AM
to scheme-re...@googlegroups.com
On Mon, Mar 28, 2022 at 4:03 AM Linas Vepstas
<linasv...@gmail.com> wrote:
>
> [...]
>
> The point is that "macros", or, more generally "syntactic systems" can be applied to sexprs in general; to sexprs that aren't runnable/evaluatable/executable scheme programs.

We have to differentiate between a macro keyword and the transformer
underlying the keyword. In Scheme, macro keywords are defined with
`define-syntax' and won't do (barring `eval') what you have in mind. A
transformer, however, is an ordinary procedure taking syntax objects
into syntax objects (at least in the R6RS/R7RS-large model). Nothing
prevents you from applying your transformer at runtime as an ordinary
procedure. So, instead of

(define-syntax foo-macro
  (lambda (stx)
    (syntax-case stx ()
      ((_ a b) #'(c b a)))))

you would write

(define foo-proc
  (lambda (stx)
    (syntax-case stx ()
      ((_ a b) #'(c b a)))))

and you can use `foo-proc' without `eval' at runtime.

Representing your S-expressions as syntax objects makes sense whenever
there's a chance that the S-expression comes literally from some
source file.
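
As a concrete illustration of the above, here is a sketch (the datum and the context identifier are invented) of applying such a transformer procedure at run time in an R6RS system such as Chez Scheme:

```scheme
(import (rnrs))

;; An ordinary procedure from syntax objects to syntax objects:
(define foo-proc
  (lambda (stx)
    (syntax-case stx ()
      ((_ a b) #'(c b a)))))

;; Wrap a plain s-expression as a syntax object, rewrite it at run time,
;; and convert the result back into a datum:
(syntax->datum
 (foo-proc (datum->syntax #'ctx '(ignored 1 2))))
;; => (c 2 1)
```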

Amirouche Boubekki

Mar 28, 2022, 1:15:42 PM
to scheme-re...@googlegroups.com
I only know fexprs as developed in R-1RK and its implementation, called
SINK. syntax-case and syntax-rules do not relate to fexprs any more
than lambda does. Kernel's vau and associated forms allow one to
implement forms that are similar in spirit to syntax-case or
syntax-rules, but so far there is no implementation that "expands"
those at compile time.

Kernel is simpler, but there is no implementation that demonstrates
its potential.

There is no program that will automatically™ rewrite itself at runtime,
taking domain-specific runtime knowledge into account, even when you
provide the goal at compile time, such as minimizing the output of a
function.

Marc Nieper-Wißkirchen

Mar 28, 2022, 1:56:12 PM
to scheme-re...@googlegroups.com
On Mon, Mar 28, 2022 at 7:15 PM Amirouche Boubekki
<amirouche...@gmail.com> wrote:

> I only know f-expr as developed in R-1RK and its implementation called
> SINK. syntax-case and syntax-rule do not relate to f-expr more than
> lambda relate to f-expr. Kernel's vau and associated forms, allow to
> implement forms that are similar in spirit to syntax-case or
> syntax-rule, but there is no implementation that "expand" those at
> compile time, so far.

It makes sense to compare fexprs/vau-created forms to macros, but not
to syntax-rules/syntax-case, which are entities of a very different
kind.

Just a general remark as syntax-rules/syntax-case is often confused
with the macro concept, also in this thread.

Per Bothner

Apr 1, 2022, 7:21:09 PM
to Ray Dillinger, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, Linas Vepstas
On 4/1/22 16:03, Ray Dillinger wrote:
> In every header file, you'll see something like
>
> #ifndef LIB_H
> at the top and
>
> #endif
> at the bottom.
>
> But then the file is #include'd from a hundred places, and each of those
> places is #included from three others, and so on.....
>
> In order to prevent the system from having to read and parse the file
> again, leave the above just as it is but also go to all the places in
> your program where you say
>
> #include "lib.h"
>
> And say instead
>
> #ifndef LIB_H
> #include "lib.h"
> #endif

From my days as a young gcc hacker, I'm pretty sure gcc does
this optimization automatically. Or at least it used to.
Other quality compilers probably also do so.
--
--Per Bothner
p...@bothner.com http://per.bothner.com/

Per Bothner

Apr 1, 2022, 7:50:05 PM
to Ray Dillinger, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, Linas Vepstas
On 4/1/22 16:27, Ray Dillinger wrote:
> Coding the 'optimization' correctly though is problematic because
> you don't know what you don't know about the header and how it's going
> to interact with other header variables that have been set since the
> most recent time it was parsed.  It's easy if we assume that headers are
> idempotent.  But in conforming C/C++ they are not idempotent unless the
> contents make them so.

A "correctly-written" header file has the format:

/* comments and whitespace */
#ifndef GUARD
BODY
#endif
/*trailing comments and whitespace */

The first time you read the file, it is easy to verify that it matches
this format, and if so to stash (EXPANDED_FILENAME, GUARD) in a hash table.
On subsequent encounters of #include EXPANDED_FILENAME, it is easy to
check GUARD: if it is defined, there is no need to open the file.

Per Bothner

Apr 1, 2022, 8:35:18 PM
to Ray Dillinger, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, Linas Vepstas
[Still off-topic]

On 4/1/22 17:00, Ray Dillinger wrote:
> Okay, I agree with you... it ought to be easy ... I can't think of a
> specific reason why it shouldn't be ... but take a large project that
> has libraries included from lots of different directions and by lots of
> different paths, and instrument the compiler. See whether it's spending
> 90% of its time parsing header files.  And you'll see that this
> optimization is not happening.  I assume there's some esoteric
> language-standard reason why not.

The compiler can optimize the case of A.h, B.h, and C.h where both B.h and C.h
do #include <A.h>, and the main file includes both B.h and C.h.

However, it is harder for the compiler to optimize the case when A.h and B.h
mutually include each other. In that case, adding the guards "by hand"
around the #include statement can be beneficial.

However, spending "90% of its time parsing header files" can still happen
if using a large library with dozens or hundreds of header files: A typical
.c/.cpp file will often include transitively dozens of header files, dwarfing
the text of the .c/.cpp files. That is why there have been so many attempts
at precompiled header files.

Careful coding can help. To use a Qt example: Instead of:
#include <QtGui/QMouseEvent>
use the forward reference:
class QMouseEvent;
(except when you actually need the specifics of QMouseEvent of course).

Linas Vepstas

Apr 4, 2022, 2:42:42 AM
to scheme-re...@googlegroups.com
Excuse my untimely reply; I'll keep it short, at the bottom. I've been obsessively diverted by the war in the Ukraine.

Let me try again. In the daily course of my programming, technical, research work, I work with large databases of s-expressions. Most of these are not valid scheme, although they do encode a kind of "language", some of which is "executable". (It's not scheme due to historical legacy)

By "large database", I mean many millions, with the primary limitation being the RAM usage, as not only do the expressions need to be held in RAM, but also a variety of indexes to organize and search and traverse them. 

One of the primary operations applied to this database is the rewrite rule, which "pattern matches" a given expression and "transforms" it into another expression. So, kind-of macro-like. However, unlike macros, the rewrite rules are not just run once and retired; they can be triggered repeatedly, at arbitrary points in time, and are often triggered in sequences or chains.

Anticipating the questions "why the heck would anyone want to do that?" and "what's wrong with just using SQL to solve that problem?", I provided the office-furniture requisition as a concrete example: it's a sequence of pattern matches applied to some collection of s-expressions, generating yet more s-expressions. For the most part, these sexps are not "executable": they represent office furniture, manager approvals, and shipping instructions. There might be the occasional lambda in there, used to trigger some action. But for the most part, they're not valid scheme; they're just s-exps.
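
To make the requisition example concrete, here is a minimal sketch of one such run-time rewrite rule over plain s-expressions; every name here (`requisition`, `manager-approve`, the field layout) is invented for illustration:

```scheme
;; A rewrite rule applied at run time, not at expansion time: match a
;; requisition form and produce a new form with an approval attached.
(define (manager-approve form manager)
  (if (and (pair? form) (eq? (car form) 'requisition))
      (append form (list (list 'approved-by manager)))
      form))

(manager-approve '(requisition (item chair) (employee "pat")) "lee")
;; => (requisition (item chair) (employee "pat") (approved-by "lee"))
```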

--linas

Linas Vepstas

Apr 4, 2022, 3:27:15 AM
to scheme-re...@googlegroups.com
Ah hah! Continuing...

OK, Yes! That seems like a step in the desired direction. Now consider the case where the s-exp is not coming from a file, but is "just sitting in RAM somewhere". And there isn't just one of them, but millions. And the pattern matching to be applied is not a simple one, but what database people call a "join". A stereotypical example would be "find all employees $X whose salary $Y is greater than 500, then do-stuff with $X $Y". Sprinkling some parentheses into this becomes the pseudocode "(find (and (employee $X) (> (salary $X) 500)) (let (($Y (salary $X))) (do-stuff $X $Y)))". This is meant to be applied to some large collection of s-expressions that encode, somehow, employee names and salaries, the precise encoding left to your imagination. I don't know how to convert that pseudo-code into syntax-case.

Hmm. The above example might be too simple. Here's one with a circular dependency, a loop (a triangle):  "find all ($A $B $C) such that (and (relation-one $A $B) (relation2 $B $C) (rel3 $C $A))"  applied to a database of triples of the form '(relation-foo bar baz)

One could write a system that performs the above, using nothing more than srfi-1. In practice, one hits the problem of query optimization: out of those millions of triples, there might be only a few dozen that are called "relation-one", and so it's foolish to search over millions to weed out a few dozen starting points for the rest of the search. 
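
For what it's worth, here is what the naive (unoptimized) triangle query might look like with nothing more than SRFI 1; the relation names follow the pseudocode above, everything else is invented for illustration:

```scheme
(import (scheme base) (srfi 1))

;; The database: a list of triples (relation vertex-a vertex-b).
;; (edges rel db) returns the (a b) pairs carrying a given edge label.
(define (edges rel db)
  (filter-map (lambda (t) (and (eq? (car t) rel) (cdr t))) db))

;; Naive triple-nested join: every (a b c) such that
;; (relation-one a b), (relation2 b c), and (rel3 c a) all hold.
(define (triangles db)
  (append-map
   (lambda (ab)
     (append-map
      (lambda (bc)
        (if (eq? (car bc) (cadr ab))          ; join on b
            (filter-map
             (lambda (ca)
               (and (eq? (car ca) (cadr bc))  ; join on c
                    (eq? (cadr ca) (car ab))  ; close the triangle on a
                    (list (car ab) (cadr ab) (car ca))))
             (edges 'rel3 db))
            '()))
      (edges 'relation2 db)))
   (edges 'relation-one db)))

(triangles '((relation-one x y) (relation2 y z) (rel3 z x)))
;; => ((x y z))
```

As the text says, this walks the whole database once per relation; a real query engine would instead start from the rarest relation.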

The above is a prototypical "graph walk". The triples '(relation-foo bar baz) are labelled edges: 'bar and 'baz are the two vertexes, and 'relation-foo is the edge-label.

I fear, though, that I've strayed too far afield. This is not just a simple syntax-case applied to a list of inputs. It's a graph-walk of a directed cyclic graph, encoded as s-expression triples.

-- Linas

Dr. Arne Babenhauserheide

Apr 4, 2022, 6:58:57 AM
to scheme-re...@googlegroups.com, Linas Vepstas

Linas Vepstas <linasv...@gmail.com> writes:

> OK, Yes! That seems like a step in the desired direction. Now consider
> the case where the s-exp is not coming from a file, but is "just
> sitting in RAM somewhere". And there isn't just one of them, but
> millions. And the pattern matching to be applied is not a simple one,
> but what database people call a "join". A stereotypical example would
> be "find all employees $X whose salary $Y is greater than 500 then
> do-stuff with $X $Y". Sprinkling some parenthesis into this becomes
> pseudocode "(find (and (employee $X) (> 500 (salary $X))) (let (($Y
> (salary $X))) (do-stuff $X $Y))" . This is meant to be applied to some
> large collection of s-expressions that encode, somehow, employee names
> and salaries, the precise encoding left to your imagination. I don't
> know how convert that pseudo-code into syntax-case.
>
> Hmm. The above example might be too simple. Here's one with a circular
> dependency, a loop (a triangle): "find all ($A $B $C) such that (and
> (relation-one $A $B) (relation2 $B $C) (rel3 $C $A))" applied to a
> database of triples of the form '(relation-foo bar baz)
>
> One could write a system that performs the above, using nothing more
> than srfi-1. In practice, one hits the problem of query optimization:
> out of those millions of triples, there might be only a few dozen that
> are called "relation-one", and so it's foolish to search over millions
> to weed out a few dozen starting points for the rest of the search.

This gives me the impression that it could benefit from mini-kanren or
such.

But I’ve been searching for a reason to use that for quite a while and
did not find it yet, so I might be very wrong about that.

Jason Hemann

Apr 4, 2022, 7:23:13 AM
to scheme-re...@googlegroups.com, Linas Vepstas
miniKanren could probably do what you want here, depending on the
kinds of arithmetic constraints that you're needing, and maybe on how
relational your "do-stuff" is.

JBH



--
JBH

Linas Vepstas

Apr 4, 2022, 2:06:20 PM
to Jason Hemann, scheme-re...@googlegroups.com
Thanks for the suggestions. Yes, I looked at minikanren a decade ago. It was used as an inspiration for the system that actually got built. It's still mentioned in scattered readme's and wiki pages.

The system that got built is light-years in sophistication beyond what minikanren can do. In these emails, I'm trying to digest back down, in simplistic and basic terms, what the grand lessons have been. What has been discovered. My hope is to push scheme in that general direction.

As to Arne's remark about not finding any use, any project: Well, here's some. Try to build a chair-purchase requisition system, from scratch. Or a bug tracker or an asset tracker.  Now try to make it distributed, decentralized, so that it runs on cell phones.

Next, try to build a chatbot, one that is fun to talk to, and is reasonably knowledgeable about something, say a help-desk chatbot.  Attach that chatbot to the chair-purchase requisition system.

As you do these things, you will be repeatedly hammered by the need to transform complex s-expressions into other complex s-expressions. These transformations need to be kind-of syntax-case-like, but not quite. Also minikanren-like, but not quite. Kind-of-like graph traversals, but not quite. Or maybe a superset of all of these, mashed together.

This is the cutting edge, and also the mainstream, of where real-world software developers are. I'm bombarded by advertisements for some kind of office-project management app. Some kind of accounting app. Something that lets me buy groceries with my cell phone. Some kind of chatbot that automatically reads NDA's and legal contracts and tells me if they are safe to sign. Another kind of chatbot that I can deploy in my call-center. Surely you guys get bombarded by these same ads, right?

In my case, I've actually built versions of the things they are advertising. Admittedly very crappy ones, or incomplete ones. Some very successful (gnucash.org), others less so. The gnucash app is jam-packed with s-expressions and transformations on them (the code base calls them KVPs, short for "key-value pairs"). The need to transform and manipulate s-expressions is utterly and completely central to pretty much anything and everything in modern computing. I'm vaguely flabbergasted by my inability to... well, look, has anyone else here on this mailing list ever built any of these kinds of large, complex software systems? Banking? Customer service? Sales? Management? Minikanren is nice, but you'd have to redesign it from scratch before you could ever use it as the central core of a banking system. It's not scalable, it's not flexible, and it's feature-poor.

-- Linas

Linas Vepstas

Apr 4, 2022, 2:51:44 PM
to Jason Hemann, scheme-re...@googlegroups.com
Let me rephrase, to be more direct.

A key-value pair is the same thing as a car-cdr pair. Given how fundamental the notion of a "pair" is to scheme and lisp, you would think that schemers and lispers drove the development of key-value databases. Well, no. Of the 1001 key-value databases out there (aka "nosql databases") pretty much exactly none of them have scheme or lisp bindings. How can it be that Java Enterprise software developers are bigger users of s-expressions than schemers? What's at the heart of this cognitive dissonance?

What does minikanren have to do with banking? Well, a typical banking transaction has to be processed: if the customer has made a transfer request, and the bank manager has approved, and the regulatory filings have been performed, and the payee is not in an ITAR country like North Korea, or is not otherwise in a sanction list, and if the payment does not require royalties, transit rights, taxes, fees or other fractional accounting splits or surcharges, then let the payment go through, else route to the appropriate subsystem.  These regulatory decisions are conventionally processed with rule systems.
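
A minimal sketch of such a regulatory rule as a plain run-time predicate over a transaction s-expression; every name here is invented for illustration, and a real rule system would dispatch many such rules by pattern matching rather than hard-coding one:

```scheme
;; A banking "rule" as an ordinary procedure over an association list.
(define sanctioned-countries '(north-korea))

(define (field tx key)
  (cond ((assq key tx) => cadr) (else #f)))

(define (payment-allowed? tx)
  (and (field tx 'manager-approved)
       (field tx 'regulatory-filed)
       (not (memq (field tx 'payee-country) sanctioned-countries))
       (not (field tx 'requires-splits))))

(payment-allowed? '((manager-approved #t) (regulatory-filed #t)
                    (payee-country france) (requires-splits #f)))
;; => #t
```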

Minikanren is a rule system. It's utterly inadequate for deployment at the heart of a banking system.  What's any of this got to do with scheme? Well, syntax-case is an example of a "rule".  It is something that gets pattern-matched on the inputs, and then generates some outputs, some of which might even be executable.

Rebuild minikanren so that it sits on top of syntax-case and maybe rebuild syntax-case so that this becomes possible. Now add all the bells-and-whistles, all the performance tweaks, so that minikanren can be used to build a banking system.  Hook it all up to an s-expression database (a key-value database) so that when the electric power goes out, you still have a copy of your s-expressions on disk.

This is the kind of software people build today.  I have a very clear idea of how to build something like that, because I've done it in the past. But to build it out of syntax-case, minikanren and srfi-whatever for foreign-function interface to some s-expression (sorry, key-value) databases .. Oof. I can't do that.  A bridge too far.

-- Linas

Dr. Arne Babenhauserheide

Apr 4, 2022, 2:53:00 PM
to scheme-re...@googlegroups.com, Jason Hemann, Linas Vepstas

Linas Vepstas <linasv...@gmail.com> writes:

> As to Arne's remark about not finding any use, any project: Well,
> here's some. Try to build a chair-purchase requisition system, from
> scratch. Or a bug tracker or an asset tracker. Now try to make it
> distributed, decentralized, so that it runs on cell phones.

That’s what people tend to do with SQL. And tons of boilerplate.

But a bug-tracker … that’s something that’s actually needed direly in
many companies, because they’ll have to replace jira that pushes them to
move into the Atlassian cloud.

Linas Vepstas

Apr 4, 2022, 3:24:46 PM
to Dr. Arne Babenhauserheide, scheme-re...@googlegroups.com, Jason Hemann
On Mon, Apr 4, 2022 at 1:52 PM Dr. Arne Babenhauserheide <arne...@web.de> wrote:

Linas Vepstas <linasv...@gmail.com> writes:

> As to Arne's remark about not finding any use, any project: Well,
> here's some. Try to build a chair-purchase requisition system, from
> scratch. Or a bug tracker or an asset tracker. Now try to make it
> distributed, decentralized, so that it runs on cell phones.

That’s what people tend to do with SQL. And tons of boilerplate.

But a bug-tracker … that’s something that’s actually needed direly in
many companies, because they’ll have to replace jira that pushes them to
move into the Atlassian cloud.

Oof-dah. Both of these have a lot in common. In fact, these are very nearly identical systems.  Earlier, I had mentioned an open-source system from NASA (the space agency) and it could do both: in fact, both the chair-purchasing pipeline, and the bug-tracker, these were both in the documentation, as *examples*. You were invited to add the bells and whistles to flesh them out to suit your needs.

This NASA system is now obsolete. Here's a copy of version 1.0 --  https://linas.org/linux/wise/ It's obsolete because it was a mashup of SQL, lex, yacc and html, with no abstraction layers in between.  However, its utter and complete simplicity is educational.

What it really was, at its core, was a rule engine.  A human would author the rules (e.g. who has permission to open a new bug report, who has permission to add new users -- all of the state transitions, and the permissions to execute the state transitions). The rules were held in files, and the lex/yacc scripts generated HTML pages, generated SQL queries, and hooked it all together, so you had a fully functional website after compiling. I suppose it might be bit-rotted, but it was so friggin simple, it should still work.

That rule system -- the state-transition system, the user-classes, the permissions to create new users, the permissions to be an admin -- could, I suppose, be replaced by syntax-case, or maybe by minikanren, or, I dunno. The generic concept is that of syntactic transformations on structures.

As to SQL: it sucks. This realization is what drove the object-relational turmoil of the 1990's, and the noSQL revolt of the 2000's. In the 2010's, the graph databases became all the rage.

When you step back, and look at what people are *actually doing* with SQL, noSQL, and graph databases, and ask "what do all of these have in common?" you will discover that the answer is "rule-driven transformations on s-expressions".  People keep reinventing the same ideas, but badly.  My message here is that scheme has a superior theoretical foundation for this stuff, but that advantage is being ignored, being squandered.

-- Linas

Best wishes,
Arne
--
Unpolitisch sein
heißt politisch sein,
ohne es zu merken.
draketo.de

Amirouche Boubekki

Apr 6, 2022, 10:06:55 AM
to scheme-re...@googlegroups.com, Dr. Arne Babenhauserheide, Jason Hemann
Consolidated reply to several messages in other threads (even if it is
getting very off-topic, even if interesting, especially since opencog
is one of the biggest Scheme projects available in the wild).

That knowledge is encoded as an s-expr explains nothing about a) the
knowledge, b) the shape of the knowledge, c) the read and/or write
queries you need to execute, or d) how much time you can give to each
query.

The advantage of s-exprs is that they are easy to represent as text,
and easy to rewrite as text. syntax-case, syntax-rules, and
explicit- and implicit-renaming macros are fine-tuned machineries whose
goal is to a) rewrite a source expression b) into something that the
Scheme evaluator can understand, c) at compile time, while d) providing
debug information.
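For readers less familiar with that machinery, here is the standard textbook example of such a compile-time rewrite (nothing here is specific to this thread): the source form is transformed into core Scheme before evaluation.

```scheme
;; swap! is rewritten, at expansion time, into a let/set! form the
;; evaluator already understands -- a source-to-source transformation.
(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))
       (set! a b)
       (set! b tmp)))))

(define x 1)
(define y 2)
(swap! x y)
(display (list x y))  ; (2 1)
(newline)
```

Because syntax-rules is hygienic, the introduced `tmp` cannot accidentally capture a user variable of the same name.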

As for runtime macros (f-exprs): there is no system that implements
them in the way that you imagine. What people do is pile up
interpreters, and wait.

There is a very thick barrier between in-memory representation,
on-disk representation, single-region distributed databases,
multi-region distributed databases, and peer-to-peer distributed
databases, because of latency and the other guarantees those
paradigms may or may not provide. Nor is it a simple dichotomy:
saying there is a continuum between in-memory s-expr data and a
peer-to-peer database that can be written and read by anyone is true
only within one particular framework, a framework that I will not
describe because that is not the point of this conversation.

Regarding the example of an application, I think an intelligence
augmentation (a.k.a. second brain, tool for thought...) product is a
better medium to experiment with an s-expr database.

> That rule system -- the state-transition system, and the user-classes, permissions to create new users, permissions to be an admin, this could be, I suppose, replaced by syntax-case or maybe by minikanren, or, I dunno.

Linas, my understanding is that the problem is not how to execute a
given query / rewriting rule, but instead how to optimize it at runtime
given runtime knowledge -- did I misunderstand?

> The generic concept is that of syntactic transformations on structures.

I will rephrase that as the transformation of a human-centered textual
representation of an intent, taking the form of an s-expr, into
efficient executable code.

NB: a key-value database is not an ordered key-value store.
NNB: OKVSs are older than the NoSQL movement by two decades.
NNNB: I added to my todo list: explore how to manipulate s-exprs in a
way that is not specific to a particular encoding.

Linas Vepstas

Apr 9, 2022, 7:41:19 PM
to scheme-re...@googlegroups.com, Dr. Arne Babenhauserheide, Jason Hemann
Hi Amirouche,

On Wed, Apr 6, 2022 at 9:06 AM Amirouche Boubekki <amirouche...@gmail.com> wrote:
> Consolidated reply to several messages in other threads (even if it is
> getting very off-topic, even if interesting, especially since opencog
> is one the biggest Scheme projects available in the wild).

Scheme projects in the wild: there is also GnuCash (https://gnucash.org and https://github.com/Gnucash/gnucash), which is a financial accounting system with a checkbook-register user interface. So, like Quicken of the days of yore.  The design of opencog was influenced by the design of GnuCash.

The core architecture of GnuCash dates back to the 1990s, when ideas like MVC (model-view-controller) were popular. The model is the data being represented; the controller performs a series of transformations on that data. The data is a complex graph: money withdrawn from one account must be deposited in another. Each transaction must be linked to info identifying the account. When books close at the end of the year, various balancing transactions must be created, using various complex schemes (FIFO, LIFO, depreciated value, etc.).
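As a hedged illustration of that double-entry invariant (the account names and list shapes here are invented, not GnuCash's actual representation), a transaction can itself be an s-expression whose splits must sum to zero:

```scheme
;; A transaction is (transaction (split account amount) ...).
;; Double entry: the amounts across all splits must sum to zero.
(define (split-amount s) (list-ref s 2))

(define (balanced? txn)
  (zero? (apply + (map split-amount (cdr txn)))))

(display (balanced? '(transaction (split checking -100)
                                  (split savings   100))))  ; #t
(newline)
(display (balanced? '(transaction (split checking -100)
                                  (split savings    90))))  ; #f
(newline)
```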

Similar issues arise in the creation of web shopping carts, asset trackers, bug trackers, dispatch queues, project management tools. They're all very much special cases of a general paradigm.

All of the structures in GnuCash are hand-crafted. By contrast, in machine learning, or rather, in machine thinking (which is not learning, but something else again), those structures are not known, but must be inferred from unstructured data.  There's a ladder, the first few rungs of which I know how to climb: infer pairwise correlations, infer local (nearest-neighbor) graphical structure, infer more distant relationships and structures in a recursive fashion. 

The structures that are being inferred are the same structures that an accounting system or a bug tracker might use internally: it's the very same idea of a graphical network of relationships, and transformations on those relationships. It's just one level up in abstraction complexity. Humans are not creating the structures; humans are creating the code that creates the structures.

A concrete example might be reverse-engineering a virus for some obscure industrial microcontroller, given only a binary blob. Is there even a virus in there? If there is, what does it do? What's the structure, the network, the location and lines of code that it inhabits? Obviously, there is structure, but how can an algorithm search for it, find it, reason about it? You've disassembled it, great, now what? How do you infer the structure in that blob of code?


> Knowledge is encoded as an s-expr explains nothing about a) the
> knowledge, b) the shape of the knowledge, c) the read and / or write
> queries you need to execute, or d) how much time you can give to each
> query.

Yes, exactly. That's the power, the strength of s-expressions. It seems not widely understood or appreciated. I think the early designers of JavaScript "got it", understood this, and designed the language correctly. It certainly propelled JavaScript into tremendous popularity.

With opencog, I am facing the "next level up", where the structures are not hand-crafted by humans, by some web designer, by some programmer, but are instead crafted by other algorithms.  This requires clear thinking about "what is a structure" and "how do algorithms manipulate it".

There are some historical precedents: the best-understood are compiler intermediate languages (e.g. Microsoft CIL, GCC GIMPLE, LLVM IR).  In the 1990s (and I guess still today?) the intermediate language for GCC was this weird custom lisp dialect, and my understanding of the origins of Guile was that there was a vision that it would grow to be a replacement for this intermediate language.  I'm not sure, but I suspect this is why you see Guile inside of GDB.

> Back at the runtime macros, f-expr, there is no system that implements
> it in the way that you imagine. What people do is pile up
> interpreters, and wait.

I am trying to plant a seed. As I've mentioned, *I already have* a system that does all this, does what I want. It works, it works well, I'm happy with it. It's got a few warts, a few missing features, a few issues, just like all software systems. But overall, I'm happy.

The message I am trying to get across here is along the lines of: "Hey guys, this is important, it's generally neglected, and it is something that should be built in as part of Scheme, or at least available as a library or module or extension. It's on the cutting edge, and yes, people in other programming languages are working on this sort of stuff."  For example, 3-4 years ago, there was a startup, grakn.ai (they changed their name, I forget to what), that built this kind of a system for JavaScript.  They're still in business, they brag about a few dozen customers, slick businessy-looking website, whatever. So other people see the need for this kind of technology. I'm saying, hey, this should be done in Scheme, too.  Since **I already have this kind of system**, I'm really just advocating a new clean-room design that perhaps skips past all the mistakes and faults I experienced on my journey.  Or rather, I'm trying to plant a seed, trying to get you to think in that direction.

> There is a very thick barrier between in-memory representation,
> on-disk representation, and single region distributed database,
> multi-region distributed database, and some kind of peer-to-peer
> distributed database because of latency and other guarantees those
> paradigms may or may not provide. That is not bipolar either, stating
> there is continuum between in-memory s-expr data, and a peer-to-peer
> database that can be written and read by anyone is true within one
> particular framework, a framework that I will not describe because
> that is not the point of this conversation.

Yep. Again, the system I have has already implemented these things, sometimes poorly, sometimes well; I can point at the lessons learned.  Nothing particularly magical, though; anyone who has worked on the close-to-the-metal issues in the data industry already knows the general ideas. There are hundreds of blog pages on this; it's just that they don't explicitly talk about s-expressions or Scheme.

> Regarding the example of an application, I think an intelligence
> augmentation (aka. second brain, tool for thought...) product is a
> better medium to experiment with a s-expr database.

And that's exactly what I actually do, have been doing for the last decade. That's how I got to here.

> That rule system -- the state-transition system, and the user-classes, permissions to create new users, permissions to be an admin,  this could be, I suppose, replaced by syntax-case or maybe by minikanren, or, I dunno.

> Linas, my understanding is that the problem is not how to execute a
> given query / rewriting rule, but instead to optimize it at runtime
> given runtime knowledge, did I misunderstand?

I don't have a problem that needs to be solved (well, actually, I do, I have many, but they are all three steps away from this conversation). Again, I am trying to plant a seed, to get you to think about a generic s-expression data system, and all the baggage that comes with that concept.


> The generic concept is that of syntactic transformations on structures.

> I will rephrase that as a transformation of human-centered textual
> representation of an intent taking the form of s-expr to efficient
> executable code.

Data is not executable code. Data is data. For example, JSON is data. You don't "execute" JSON. You "do stuff" with it.

-- linas

> NB: key-value database, is not an order-key-value store.
> NNB: OKVS are older than the NoSQL movement by two decades.
> NNNB: I added to my todo to explore how to manipulate s-expr in a way
> that is not specific to a particular encoding.

--
You received this message because you are subscribed to the Google Groups "scheme-reports-wg2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scheme-reports-...@googlegroups.com.