Committee E topics


John Cowan

Mar 17, 2022, 5:01:08 PM
to scheme-re...@googlegroups.com

On Thu, Mar 17, 2022 at 10:59 AM Nala Ginrut <nalag...@gmail.com> wrote:

Hi folks!
I've read the previous mails, but I still have insufficient information about the potential work of Committee E. I guess that may be one of the reasons there's no volunteer.
Anyway, we could open a thread for E to do some initial work until we have a chair.

E will be concerned with the environment of Scheme programs.  In R7RS-small, this includes Sections 6.13 (Input and output) and 6.14 (System interface).  In R6RS Libraries, this includes sections 8.1 (Condition types) and 8.2 (Port I/O), as Section 8.3 is the same as R7RS-small's I/O.

Relevant SRFIs are SRFI 106 (sockets), SRFI 120 (timers), SRFI 170 (POSIX), SRFI 181 (custom I/O ports, including transcoded ports), SRFI 192 (port positioning), and SRFI 205 (terminal I/O, still in draft).  SRFIs 181 and 192 are directly modeled on R6RS.

Relevant pre-SRFIs written by me can be found in https://github.com/johnwcowan/r7rs-work and include DatagramChannelsCowan (UDP sockets), GraphicsCanvas (high-level graphics), SyslogCowan (what it says on the tin), and TerminalsCowan (Termbox/TUI interface support).  The last can be implemented portably on top of 205, and so it may move to Committee B in future.  A notable alternative to something like GraphicsCanvas is the ezdraw server, which is written in Scheme and exchanges S-expressions over a named pipe or local socket (it can also be "linked in").

Arthur A. Gleckler

Mar 17, 2022, 5:09:01 PM
to scheme-re...@googlegroups.com
On Thu, Mar 17, 2022 at 2:01 PM John Cowan <co...@ccil.org> wrote:
 
Relevant pre-SRFIs written by me can be found in https://github.com/johnwcowan/r7rs-work and include DatagramChannelsCowan (UDP sockets), GraphicsCanvas (high-level graphics), SyslogCowan (what it says on the tin), and TerminalsCowan (Termbox/TUI interface support).  The last can be implemented portably on top of 205, and so it may move to Committee B in future.  A notable alternative to something like GraphicsCanvas is the ezdraw server, which is written in Scheme and exchanges S-expressions over a named pipe or local socket (it can also be "linked in").

Do you know where I might find the ezdraw server you mentioned?  I've found some mentions of it, but the sites are all obsolete or are actually about something else.  I actually built something like that, but using a language-neutral format, just over a year ago: https://github.com/arthurgleckler/nanovg-repl.

Thanks.

Arthur A. Gleckler

Mar 17, 2022, 5:15:20 PM
to scheme-re...@googlegroups.com
On Thu, Mar 17, 2022 at 2:08 PM Arthur A. Gleckler <a...@speechcode.com> wrote:
 
Do you know where I might find the ezdraw server you mentioned?  I've found some mentions of it, but the sites are all obsolete or are actually about something else.  I actually built something like that, but using a language-neutral format, just over a year ago: https://github.com/arthurgleckler/nanovg-repl.

Oh, never mind.  You must mean Joel Bartlett's excellent Don't Fidget With Widgets, Draw! paper.  That must be where I got the idea.
 

Marc Nieper-Wißkirchen

Mar 17, 2022, 5:15:34 PM
to scheme-re...@googlegroups.com
Am Do., 17. März 2022 um 22:01 Uhr schrieb John Cowan <co...@ccil.org>:

> E will be concerned with the environment of Scheme programs. In R7RS-small, this includes Sections 6.13 (Input and output) and 6.14 (System interface). In R6RS Libraries, this includes sections 8.1 (Condition types) and 8.2 (Port I/O), as Section 8.3 is the same as R7RS-small's I/O.
>
> Relevant SRFIs are sockets (SRFI 106), 120 (timers), 170 (Posix), 181 (Custom I/O ports, including transcoded ports), 192 (port positioning), 205 (terminal I/O, still draft). SRFIs 181 and 192 are directly modeled on R6RS.

At least the basics of custom ports will be part of the Foundations, as they cannot be portably implemented on top of the small language on the one hand and are host-system-independent on the other. If a condition system becomes part of the language, the basic I/O conditions will have to be in the Foundations as well, since the Foundations include the I/O of the small language. Committees E and F will be in touch so that F can provide what E needs.

Nala Ginrut

Mar 18, 2022, 12:05:08 AM
to scheme-re...@googlegroups.com
Hi all!

On Fri, Mar 18, 2022, 05:15 Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
Am Do., 17. März 2022 um 22:01 Uhr schrieb John Cowan <co...@ccil.org>:

> E will be concerned with the environment of Scheme programs.  In R7RS-small, this includes Sections 6.13 (Input and output) and 6.14 (System interface).  In R6RS Libraries, this includes sections 8.1 (Condition types) and 8.2 (Port I/O), as Section 8.3 is the same as R7RS-small's I/O.
>
> Relevant SRFIs are sockets (SRFI 106), 120 (timers), 170  (Posix), 181 (Custom I/O ports, including transcoded ports), 192 (port positioning), 205 (terminal I/O, still draft).  SRFIs 181 and 192 are directly modeled on R6RS.

No one has mentioned epoll? We need a unified I/O multiplexing API covering poll/epoll/kqueue.
This could be based on FFI, but I think we should provide an API in E.
 
Committees E and F will
be in touch so that F can provide what E needs.

+1
 

Best regards.

Lassi Kortela

Mar 18, 2022, 3:02:15 AM
to scheme-re...@googlegroups.com
> No one mentioned epoll? We need a unified I/O multiplex API for
> poll/epoll/kqueue.
> This could be based on FFI, but I think we should provide API in E.

That design space changes a lot. io_uring is the hot new thing in Linux.

A basic POSIX poll()-like API, which lets you test ports for readability and writability and doesn't prescribe an underlying mechanism, might be a good start. Expect the dynamic duo of threads and signals to cause trouble. Best to try it out in a SRFI first.

Lassi Kortela

Mar 18, 2022, 3:06:00 AM
to scheme-re...@googlegroups.com
> At least the basics of custom ports will be part of the Foundations as
> they cannot be portably implemented on top of the small language on
> the one hand side and are host system-independent on the other hand
> side. If a condition system becomes part of the language the basic I/O
> conditions will have to be in the Foundations as well as the
> Foundations include I/O of the small language.

+1

Everything needed for R6RS compat should be in Foundations.

Any particular reason condition types belong in Environment? Or just I/O
related conditions?

Marc Nieper-Wißkirchen

Mar 18, 2022, 3:20:16 AM
to scheme-re...@googlegroups.com
Whatever the underlying implementation will be, on the Scheme side we
need a primitive that is able to multiplex different sources, e.g. I/O
port polling, threading condition variables, and the receipt of
signals like SIGTERM. Maybe a high-level API for this can be provided
with futures; as far as these are concerned B and F may need to
coordinate.

Marc Nieper-Wißkirchen

Mar 18, 2022, 3:26:23 AM
to scheme-re...@googlegroups.com
The Foundations will, and can, only provide the foundations of a condition hierarchy. Some E APIs will call for more specialized conditions, and these will have to be provided by E. Moreover, in the context of E, extra information about the erroneous situation could be added to the basic I/O conditions. This is at least my understanding.

Lassi Kortela

Mar 18, 2022, 3:41:12 AM
to scheme-re...@googlegroups.com
> Maybe a high-level API for this can be provided with futures

Are any implementers on board with futures? Multi-threading is hard. We
shouldn't add any thread stuff to (any part of) RnRS that isn't portable
and battle-tested.

The choices currently supported by more than one implementation are:

- SRFI 18
- Some adaptation of Concurrent ML (fibers, etc.)
- any others?

Daphne Preston-Kendal

Mar 18, 2022, 3:47:53 AM
to scheme-re...@googlegroups.com
On 18 Mar 2022, at 08:41, Lassi Kortela <la...@lassi.io> wrote:

>> Maybe a high-level API for this can be provided with futures
>
> Are any implementers on board with futures? Multi-threading is hard.

Green threads are simple enough that even Chibi supports them. Indeed, is there a ‘sophisticated’ implementation (to use the WG1 charter language — i.e. one that might reasonably be expected to adopt R7RS Large) that doesn’t support threads in any way at all?


Daphne

Lassi Kortela

Mar 18, 2022, 3:53:19 AM
to scheme-re...@googlegroups.com
>> Are any implementers on board with futures? Multi-threading is hard.
>
> Green threads are simple enough that even Chibi supports them. Indeed, is there a ‘sophisticated’ implementation (to use the WG1 charter language — i.e. one that might reasonably be expected to adopt R7RS Large) that doesn’t support threads in any way at all?

Not sure.

Any thread API in RnRS should be something that ties in seamlessly with
the native thread API in each Scheme implementation (i.e. RnRS threads
are a type of native thread that perhaps uses only some of the available
native features).

IIRC there used to be a situation in Guile where it had 2 (or 3?)
incompatible types of hash tables. We should take great care to avoid a
situation where Scheme programs have to juggle incompatible concurrency
constructs.

Marc Nieper-Wißkirchen

Mar 18, 2022, 3:57:55 AM
to scheme-re...@googlegroups.com
Am Fr., 18. März 2022 um 08:53 Uhr schrieb Lassi Kortela <la...@lassi.io>:

> IIRC there used to be a situation in Guile where it has 2 (or 3?)
> incompatible types of hash tables. We should take great care avoid a
> situation where Scheme programs have to juggle incompatible concurrency
> constructs.

That's not necessarily a bad thing. The point is that Guile wants to
support R6RS, SRFI 69, and its own native hash tables. For portable
R6RS or R7RS programs, this is not a problem.

(As far as hash tables are concerned, SRFI 225 may help.)

Lassi Kortela

Mar 18, 2022, 4:18:17 AM
to scheme-re...@googlegroups.com
> That's not necessarily a bad thing. The point is that Guile wants to
> support R6RS, SRFI 69, and its own native hash tables. For portable
> R6RS or R7RS programs, this is not a problem.

I worry about the tendency to think of RnRS as a standalone or isolated
thing. Big useful programs are never going to use only portable
features, no matter how good RnRS is. They'll interact with non-portable
APIs via cond-expand, pass objects to those APIs, and get objects back
from those APIs. If they receive something like a thread or a hashtable
from a native API, it better be compatible with the RnRS versions of
those things. (If RnRS can use only a subset of the native capabilities,
that's fine.) Likewise, keyword arguments, object systems and the like
have to be compatible with the native versions.

> (As far as hash tables are concerned, SRFI 225 may help.)

(I.e. the Dictionaries SRFI.) The decision was made that each procedure
takes a dictionary type descriptor in addition to the dictionary object
itself -- i.e. two arguments instead of one. Unfortunately I consider
this a dead end; nothing convenient or elegant can come out of pursuing
this design direction.
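To make the criticism concrete, SRFI 225's convention is that every operation receives a dictionary type descriptor (dtd) alongside the dictionary object itself. The dtd name below is illustrative, standing in for whichever descriptor the SRFI provides for alist-based dictionaries:

```scheme
;; SRFI 225 calling convention: a dtd plus the dictionary object.
;; alist-dtd is an illustrative name for an alist-based descriptor.
(define d '((a . 1) (b . 2)))   ; an alist used as a dictionary

(dict-ref alist-dtd d 'a)       ; two arguments identify one dictionary:
                                ; the descriptor and the object itself
```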

Alex Shinn

Mar 18, 2022, 4:26:39 AM
to scheme-re...@googlegroups.com
2022年3月18日(金) 16:47 Daphne Preston-Kendal <d...@nonceword.org>:

Green threads are actually much harder to implement.  You need your own scheduler, your own polling mechanisms, your own synchronization primitives, and, in Chibi's case, built-in FFI support.  It pervades every part of the design.

-- 
Alex



Daphne

--
You received this message because you are subscribed to the Google Groups "scheme-reports-wg2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scheme-reports-...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/scheme-reports-wg2/4DEF72F0-DD5A-4344-9C1A-FB230EC610B2%40nonceword.org.

Per Bothner

Mar 18, 2022, 6:40:24 AM
to scheme-re...@googlegroups.com


On 3/18/22 00:41, Lassi Kortela wrote:
>> Maybe a high-level API for this can be provided with futures
>
> Are any implementers on board with futures? Multi-threading is hard. We shouldn't add any thread stuff to (any part of) RnRS that isn't portable and battle-tested.

Kawa has futures:
https://www.gnu.org/software/kawa/Threads.html
The resulting thread object implements the Lazy interface,
just like the result of (force exp).

Kawa also has "auto-force" - a Lazy value (including a 'future' thread)
is automatically forced when used in certain operations. This includes
display but not write.
--
--Per Bothner
p...@bothner.com http://per.bothner.com/

Marc Nieper-Wißkirchen

Mar 18, 2022, 9:32:42 AM
to scheme-re...@googlegroups.com
Am Fr., 18. März 2022 um 09:18 Uhr schrieb Lassi Kortela <la...@lassi.io>:
>
> > That's not necessarily a bad thing. The point is that Guile wants to
> > support R6RS, SRFI 69, and its own native hash tables. For portable
> > R6RS or R7RS programs, this is not a problem.
>
> I worry about the tendency to think of RnRS as a standalone or isolated
> thing. Big useful programs are never going to use only portable
> features, no matter how good RnRS is. They'll interact with non-portable
> APIs via cond-expand, pass objects to those APIs, and get objects back
> from those APIs. If they receive something like a thread or a hashtable
> from a native API, it better be compatible with the RnRS versions of
> those things. (If RnRS can use only a subset of the native capabilities,
> that's fine.) Likewise, keyword arguments, object systems and the like
> have to be compatible with the native versions.

This will most likely give you the small language (or even less).

>
> > (As far as hash tables are concerned, SRFI 225 may help.)
>
> (I.e. the Dictionaries SRFI.) The decision was made that each procedure
> takes a dictionary type descriptor in addition to the dictionary object
> itself -- i.e. two arguments instead of one. Unfortunately I consider
> this a dead end; nothing convenient or elegant can come out of pursuing
> this design direction.

While I am not in agreement with all the details of SRFI 225, I
haven't yet seen a better general idea.

If you can come up with a different idea that allows both efficient
generic algorithms and is logically well-founded (or if you even write
a SRFI about it), I'm sure I am not the only one who would be more
than eager to hear it.

Arthur A. Gleckler

Mar 18, 2022, 1:25:58 PM
to scheme-re...@googlegroups.com
On Fri, Mar 18, 2022 at 12:20 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
 
Whatever the underlying implementation will be, on the Scheme side we
need a primitive that is able to multiplex different sources, e.g. I/O
port polling, threading condition variables, and the receipt of
signals like SIGTERM. Maybe a high-level API for this can be provided
with futures; as far as these are concerned B and F may need to
coordinate. 

Michael Sperber has a nice talk (YouTube) about harvesting the brilliant ideas of Concurrent ML.

Linas Vepstas

Mar 18, 2022, 1:27:05 PM
to scheme-re...@googlegroups.com


On Fri, Mar 18, 2022 at 2:41 AM Lassi Kortela <la...@lassi.io> wrote:

The choices currently supported by more than one implementation are:

- SRFI 18
- Some adaptation of Concurrent ML (fibers, etc.)
- any others?

Guile threads work great. After a quick skim, they seem to be mostly SRFI 18-like except in one area: instead of having thread-specific-set!, Guile has a concept of "fluids".  A "fluid" is kind of like thread-specific state, except that you can bind it in more than one thread, and you can swap between different fluids on any given thread.  In summary, a fluid is the blob of state that holds execution-context-specific data, but there is no hard-coded requirement for it to be tied one-to-one to any given thread control block (TCB).  Fluids seem strange if you're used to the C/C++/Java view of the world, but they're great once you wrap your mind around the idea.
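For readers unfamiliar with fluids, Guile's core interface is small; a minimal example of the real Guile API:

```scheme
;; Guile fluids (built in; no module import needed).
(define f (make-fluid 'default))   ; a fluid with a default value

(fluid-ref f)                      ; => default

;; with-fluids rebinds the fluid for the dynamic extent of its body;
;; each thread (strictly, each dynamic state) sees its own binding.
(with-fluids ((f 'local))
  (fluid-ref f))                   ; => local

(fluid-ref f)                      ; => default again
```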

Fibers seem like a marvelous theoretical idea, and a positive step forward in threading theory. In practice, for my use case, fibers in Guile offered no performance advantage, and it was not too hard to find bugs and problematic behaviour in the implementation. But I dunno; I never got to the bottom of the issue. I gave up when it became clear that there was no gain.

Actual threading performance depends strongly on proprietary (undocumented) parts of the CPU implementation.  For example, locks are commonly implemented on cache lines, in L1, L2, or L3. Older AMD CPUs seem to have only one lock for every 6 CPUs (I think that means the lock hardware is in the L3 cache? I dunno), so it is very easy to stall with locked cache-line contention. The very newest AMD CPUs seem to have 1 lock per CPU (so I guess they moved the lock hardware to the L1 cache??) and so are more easily parallelized under heavy lock workloads.  Old PowerPCs had one lock per L1 cache, if I recall correctly.  So servers work better than consumer hardware.

To be clear: mutexes per se are not the problem; atomic ops are.  For example, in C++, the reference counts on shared pointers use atomic ops, so if your C++ code uses lots of shared pointers, you will be pounding the heck out of the CPU lock hardware, and all of the CPUs will be snooping on the bus, checkpointing, invalidating, and rolling back like crazy, hitting a very hard brick wall on some CPU designs.  I have no clue how much SRFI 18 or fibers depend on atomic ops, but these are real issues that hurt real-world parallelizability.  Avoid splurging on atomic ops.

As to hash tables: lock-free hash tables are problematic. Facebook has the open-source "folly" C/C++ implementation of lockless hash tables. Intel has one too, but the documentation for the Intel code is... well, I could not figure out what Intel was doing.  There's some cutting-edge research coming out of Israel on this, but I did not see any usable open-source implementations.

In my application, the lock-less hash tables offered only minor gains; my bottleneck was in the atomics/shared-pointers. YMMV.

-- linas



Nala Ginrut

Mar 18, 2022, 1:44:14 PM
to scheme-re...@googlegroups.com
I think io_uring mixes I/O multiplexing and managed async I/O.
It's better to separate them in a standard API for portability; after all, Scheme has a wide range of users, including BSD, Hurd, and other OS folks.
And we may also consider adding an async I/O API.
Besides io_uring, we also have delimited continuations for achieving similar things, so the implementation could vary.

Best Regards.



 


Nala Ginrut

Mar 18, 2022, 1:52:21 PM
to scheme-re...@googlegroups.com
Please also consider Scheme for embedded systems.
pthreads are widely used in RTOSes and could be the low-level implementation of the API; green threads are not a good choice for compact embedded systems.

I confess this should perhaps go in an R7RS-small topic, but as a useful extension of R7RS-small, I think the API should be general enough to embrace both the pthreads spec and green-thread designs.

PS: As the author of LambdaChip, I'm considering adding threads based on pthreads on Zephyr RTOS, but I still hesitate about the API design.

Best regards.




Linas Vepstas

Mar 18, 2022, 2:22:39 PM
to scheme-re...@googlegroups.com
Nala,

On Fri, Mar 18, 2022 at 12:52 PM Nala Ginrut <nalag...@gmail.com> wrote:
Please also consider the Scheme for the embedded system.
The pthread is widely used in RTOS which could be the low-level implementation of the API,

After a quick skim, SRFI 18 appears to be almost exactly a one-to-one mapping of the pthreads API.

The raw thread-specific-set! and thread-specific in SRFI 18 might be "foundational" but are unusable without additional work: at a minimum, the specific entries need to be alists or hash tables or something -- and, as mentioned earlier, the Guile concept of "fluids" seems (to me) to be the superior abstraction, the abstraction that is needed.
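A minimal sketch of that "additional work": layering named slots over SRFI 18's single specific field. The slot helpers are invented here (not part of SRFI 18), and this assumes the specific field starts out as #f:

```scheme
;; Sketch: store an alist in SRFI 18's single per-thread "specific"
;; slot so a thread can carry several named values.  thread-slot-set!
;; and thread-slot-ref are invented helpers, not part of SRFI 18.
(define (thread-slot-set! key val)
  (let* ((t (current-thread))
         (alist (or (thread-specific t) '())))   ; assume initial #f
    (thread-specific-set! t (cons (cons key val) alist))))

(define (thread-slot-ref key)
  (let ((entry (assq key (or (thread-specific (current-thread)) '()))))
    (and entry (cdr entry))))
```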
 
and green-thread is not a good choice for compact embedded systems.

My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages.   I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

--linas
 


--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
 

Amirouche Boubekki

Mar 18, 2022, 2:34:15 PM
to scheme-re...@googlegroups.com
> https://www.youtube.com/watch?v=pf4VbP5q3P0


Thanks a lot for this link!

Amirouche Boubekki

Mar 18, 2022, 3:01:11 PM
to scheme-re...@googlegroups.com
As I understand it, the question is not which POSIX or non-POSIX API should be
supported to implement concurrency and parallelism, but how they
integrate with Scheme. Some questions:

- Can continuations travel across threads?
- Do parameters travel with their continuations? Hint: continuation
attachments, see https://srfi.schemers.org/srfi-157/

There are several Schemes that implement green threads or parallelism
or both, but there is no clear agreement on which API is best. The
most common interface is https://srfi.schemers.org/srfi-18/

Alternatives to CML and SRFI 18 are
http://wiki.call-cc.org/eggref/5/gochan and the actor model (Gambit's
Termite)

Vincent Manis

Mar 18, 2022, 3:34:49 PM
to scheme-re...@googlegroups.com, Marc Nieper-Wißkirchen
On 2022-03-18 00:20, Marc Nieper-Wißkirchen wrote

> Whatever the underlying implementation will be, on the Scheme side we
> need a primitive that is able to multiplex different sources, e.g. I/O
> port polling, threading condition variables, and the receipt of
> signals like SIGTERM. Maybe a high-level API for this can be provided
> with futures; as far as these are concerned B and F may need to
> coordinate.

I think that before we go very far in some of these directions, we need
to make a clear statement about what environment we are targeting. I
would not assert (because I don't have that knowledge) that the Windows
API provides all the concurrency support (and in particular, polling
non-uniform sources) that Unix/Linux/macOS provide, or vice versa. There
are also bare-metal or embedded environments (such as eCos) to consider.

What's clear to me is that the core of Posix and Windows threads can be
supported across many environments, if necessary by green threads. I
don't know if io_uring can be provided (efficiently) in Windows, for
example.

I'm not advocating any particular minimum environment; in fact, we can
even have several concurrency features, discernible via cond-expand.
But I would be concerned if we painted ourselves into a corner, by
specifying such rigid environmental requirements that only a few systems
could satisfy them.

I think that this comment applies to pretty much everything that will
come within Committee E's purview.

-- vincent


Linas Vepstas

Mar 18, 2022, 4:12:47 PM
to scheme-re...@googlegroups.com
Hi,

On Fri, Mar 18, 2022 at 2:01 PM Amirouche Boubekki <amirouche...@gmail.com> wrote:
On Fri, Mar 18, 2022 at 8:02 AM Lassi Kortela <la...@lassi.io> wrote:
>
> > No one mentioned epoll? We need a unified I/O multiplex API for
> > poll/epoll/kqueue.
> > This could be based on FFI, but I think we should provide API in E.
>
> That design space changes a lot. io_uring is the hot new thing in Linux.
>
> A basic POSIX poll() like API, which lets you test ports for readability
> and writability and doesn't prescribe an underlying mechanism, might be
> a good start. Expect the dynamic duo of threads and signals to cause
> trouble. Best to try out in a SRFI first.
>

My understanding is not what POSIX or non-POSIX API should be
supported to implement concurrency and parallelism but how they
integrate with Scheme. Some questions:

- Can continuations travel across threads?
- Do parameters travel with their continuations? Hint: continuations
attachments, see https://srfi.schemers.org/srfi-157/

I'll say it a third time, and then shut up: Guile fluids.  As I understand them, they work with continuations (although my experiments were minor), and this claim is made by the docs (see https://www.gnu.org/software/guile/manual/html_node/Fluids-and-Dynamic-States.html).  This is in sharp contrast to SRFI 18's thread-specific-set!, which is manifestly continuation-unaware.

There are several Scheme that implement green threads or parallelism
or both, but there is no clear agreement on what API is the best. The
most common interface is https://srfi.schemers.org/srfi-18/

Alternatives to CML and SRFI-18 is
http://wiki.call-cc.org/eggref/5/gochan and the actor model (Gambit
termite)

Introductory explanations of fibers show how one can start with poll/epoll and then immediately realize that the Go channels model is better. Then, with some "minor" changes of viewpoint, one ends up with the broader and more flexible idea of fibers. If I recall correctly, the introductory expositions on fibers also talk about the actor model.  All this leaves me with a strong impression that fibers really are an important advancement in parallel computation.  I did not study them closely, but I heartily endorse them.

As to "which", pthreads or fibers: it should be "both". My impression is that rank-and-file, bread-and-butter quotidian programmers are just fine with a pthreads/SRFI 18 model (plus the missing-but-needed fluids), so you can't go wrong with that. Those who are pickier, more theoretically inclined, or facing difficult performance and scheduling problems will, I suspect, want fibers.  I'm thinking this is not an either-or choice, but rather a "both".

And, yes, lock-free hash tables would be... well, for people who use hash tables, I'm thinking this would be a pretty big deal.  You can just feel the desperation on the various mailing lists.  For Scheme to provide something here would be... a feather in its cap.

However, I am a passive bystander in the process here; I won't be implementing this stuff. Sorry. Just making grand pronouncements as to overall general impressions.

-- Linas

Marc Nieper-Wißkirchen

Mar 18, 2022, 5:39:36 PM
to scheme-re...@googlegroups.com
Am Fr., 18. März 2022 um 20:01 Uhr schrieb Amirouche Boubekki
<amirouche...@gmail.com>:

> - Can continuations travel across threads?
> - Do parameters travel with their continuations? Hint: continuations
> attachments, see https://srfi.schemers.org/srfi-157/

It is one of the many purposes of SRFI 226 to answer all these
questions uniformly and consistently.

Marc Nieper-Wißkirchen

Mar 18, 2022, 5:51:21 PM
to scheme-re...@googlegroups.com
Am Fr., 18. März 2022 um 19:22 Uhr schrieb Linas Vepstas
<linasv...@gmail.com>:
>
> Nala,
>
> On Fri, Mar 18, 2022 at 12:52 PM Nala Ginrut <nalag...@gmail.com> wrote:
>>
>> Please also consider the Scheme for the embedded system.
>> The pthread is widely used in RTOS which could be the low-level implementation of the API,
>
>
> After a quick skim, srfi-18 appears to be almost exactly a one-to-one mapping into the pthreads API.

It was also influenced in parts by Java and the Windows API, I think,
but Marc Feeley will definitely know better.

> The raw thread-specific-set! and thread-specific in srfi-18 might be "foundational" but are unusable without additional work: at a minimum, the specific entries need to be alists or hash tables or something -- and, as mentioned earlier, the guile concept of "fluids" seems (to me) to be the superior abstraction, the abstraction that is needed.

Please add your thoughts about it to the SRFI 226 mailing list. As far
as Guile fluids are concerned, from a quick look at them, aren't they
mostly what parameter objects are in R7RS?
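For comparison, this is how the analogous dynamic binding looks with R7RS parameter objects (standard (scheme base) procedures):

```scheme
;; R7RS parameter objects: a default value plus dynamically scoped
;; rebinding via parameterize, much like a fluid plus with-fluids.
(define log-level (make-parameter 'info))

(log-level)                     ; => info

(parameterize ((log-level 'debug))
  (log-level))                  ; => debug

(log-level)                     ; => info again
```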

>
>>
>> and green-thread is not a good choice for compact embedded systems.
>
>
> My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages. I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

For SRFI 18/226, it doesn't matter whether green threads or native
threads underlie the implementation. There can even be a mixture like
mapping N Scheme threads to one OS thread.

>>> Actual threading performance depends strongly on proprietary (undocumented) parts of the CPU implemention. For example, locks are commonly implemented on cache lines, either on L1 or L2 or L3. Older AMD cpus seem to have only one lock for every 6 CPU's, (I think that means the lock hardware is in the L3 cache? I dunno) and so it is very easy to stall with locked cache-line contention. The very newest AMD CPU's seem to have 1 lock per CPU (so I guess they moved the lock hardware to the L1 cache??) and so are more easily parallelized under heavy lock workloads. Old PowerPC's had one lock per L1 cache, if I recall correctly. So servers work better than consumer hardware.
>>>
>>> To be clear: mutexes per-se are not the problem; atomic ops are. For example, in C++, the reference counts on shared pointers uses atomic ops, so if your C++ code uses lots of shared pointers, you will be pounding the heck out CPU lock hardware, and all of the CPU's are all going to snooping on the bus and they will all be checkpointing and invalidating and rolling back like crazy, hitting a very hard brick wall on some CPU designs. I have no clue how much srfi-18 or fibers depend on atomic ops, but these are real issues that hurt real-world parallelizability. Avoid splurging with atomic ops.

If you use std::shared_ptr, many uses of std::move are your friend or
you are probably doing it wrong. :)

>>> As to hash-tables: lock-free hash tables are problematic. Facebook has the open-source "folly" C/C++ implementation for lockless hash tables. Intel has one too, but the documentation for the intel code is... well I could not figure out what intel was doing. There's some cutting-edge research coming out of Israel on this, but I did not see any usable open-source implementations.
>>>
>>> In my application, the lock-less hash tables offered only minor gains; my bottleneck was in the atomics/shared-pointers. YMMV.

Unfortunately, we don't have all the freedom of C/C++. While it is
expected that a C program will crash when a hash table is modified
concurrently, most Scheme systems are expected to handle errors
gracefully and not crash. We may need a few good ideas to minimize the
amount of locking needed.

Alex Shinn

Mar 18, 2022, 6:20:03 PM
to scheme-re...@googlegroups.com
On Sat, Mar 19, 2022 at 3:22 AM Linas Vepstas <linasv...@gmail.com> wrote:
>
> On Fri, Mar 18, 2022 at 12:52 PM Nala Ginrut <nalag...@gmail.com> wrote:
>>
>> and green-thread is not a good choice for compact embedded systems.
>
> My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages. I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

Agreed, green threads are not a good choice for an embedded system, or
most systems
in general.

My choice of green threads for Chibi was in its original design as an
extension language.
It should be easy to include in an existing C/C++ application in a
thread-safe manner
without any concern that it would interact with the host threads at
all. It's a more general
and powerful mechanism than the idle timers of Emacs or JavaScript
async callbacks.

--
Alex

John Cowan

unread,
Mar 18, 2022, 6:51:59 PMMar 18
to scheme-re...@googlegroups.com
On Fri, Mar 18, 2022 at 4:12 PM Linas Vepstas <linasv...@gmail.com> wrote:

There are several Schemes that implement green threads or parallelism
or both, but there is no clear agreement on which API is best. The
most common interface is https://srfi.schemers.org/srfi-18/

It's important to realize that SRFI 18 (as opposed to its sample implementation) can be implemented using OS threads, green threads, or both (M:N threads), so you have to be very careful about what is supposed to be per-thread.  errno in particular is only per-thread if SRFI 18 uses OS threads.
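The errno hazard can be simulated directly: generators below stand in for two green threads multiplexed on one OS thread, which therefore share a single errno cell (all names are illustrative):

```python
# One errno cell for the whole OS thread, shared by all green threads on it.
errno = 0

def green_a():
    global errno
    errno = 2            # e.g. ENOENT from a failed call
    yield                # scheduler switches before errno is inspected
    saved = errno        # reads whatever the scheduler ran in between
    yield saved

def green_b():
    global errno
    errno = 13           # e.g. EACCES from an unrelated call
    yield

def run():
    a, b = green_a(), green_b()
    next(a)              # a's call fails, sets errno = 2
    next(b)              # scheduler runs b, which clobbers errno with 13
    return a.send(None)  # a resumes and observes 13, not its own 2
```

With OS threads each thread would have its own errno; with green threads the value read back is the other thread's, which is exactly why per-thread state must be specified carefully.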

John Cowan

unread,
Mar 18, 2022, 7:28:04 PMMar 18
to scheme-re...@googlegroups.com
On Fri, Mar 18, 2022 at 2:22 PM Linas Vepstas <linasv...@gmail.com> wrote:
 
My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages.   I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

Chicken's threads are green because that's how Cheney on the MTA works: each Scheme call winds up the C stack until it gets too big and then it is chopped off by longjmp(), moving all data structures allocated (also on the stack) into the heap at that time, so that the C stack is also the nursery.  Because all the returns are dead, there is no need for the GC to understand the stack layout used by C.  Chicken provides access to pthreads, but Scheme code can only run in the main thread: the other threads must be running C.

Cyclone uses a variant of Cheney in which an object is always moved to the heap before it can be accessed by more than one Scheme thread.  That's all I know about it.

Nala Ginrut

unread,
Mar 18, 2022, 11:43:56 PMMar 18
to scheme-re...@googlegroups.com
Hi Linas!

On Sat, Mar 19, 2022 at 2:22 AM Linas Vepstas <linasv...@gmail.com> wrote

My memory of green threads is very dim; as I recall, they were ugly hacks to work around the lack of thread support in the base OS, but otherwise offered zero advantages and zillions of disadvantages.   I can't imagine why anyone would want green threads in this day and age, but perhaps I am criminally ignorant on the topic.

--linas
 

I have to mention that green threads are still a good choice for high-performance concurrent server designs.
Maybe "green thread" is the wrong term here; I mean coroutines. GNU Artanis has a coroutine server core based on delimited continuations.
However, IMO, coroutines could be a separate standard API of their own.

Best regards.

Nala Ginrut

unread,
Mar 18, 2022, 11:52:24 PMMar 18
to scheme-re...@googlegroups.com
On Sat, Mar 19, 2022 at 5:51 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
Please add your thoughts about it to the SRFI 226 mailing list. As far
as Guile fluids are concerned, from a quick look at them, aren't they
mostly what parameter objects are in R7RS?


Guile's parameters are implemented with fluids, so we may treat them as the same thing here.
It is recommended to use the standard parameterize interface rather than fluids directly.

 

Linas Vepstas

unread,
Mar 19, 2022, 1:50:38 AMMar 19
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
Replying to Marc, and Nala,

It doesn't seem appropriate to talk about C/C++ here, so I will skip that discussion, except to say: it's all about performance. Flexibility is great but if an API cannot be implemented so it's fast, then it's not a good API. It's very hard to determine if an API is fast without measuring it. Thinking about it is often misleading; reading the implementation easily leads to false conclusions. Sad but true.

And so, an anecdote: Nala notes: "guile parameters are built on top of guile fluids". That is what the docs say, but there may be some deep implementation issues. A few years ago, I performance-tuned a medium-sized Guile app (https://github.com/MOZI-AI/annotation-scheme/issues/98) and noticed that code that mixed parameters with call/cc was eating approximately half the CPU time. For a job that took hours to run, "half" is a big deal. I vaguely recall that parameters were taking tens of thousands of CPU cycles, or more, to be looked up.

Fluids/parameters need to be extremely fast: dozens of cycles, not tens of thousands.  Recall how this concept works in C/C++:
-- A global variable is stored in the data segment, and a lookup of its current value is a handful of CPU cycles: it's just at some fixed offset in that segment.
-- A per-thread global variable is stored in the thread control block (TCB), reachable through a dedicated register. So, a few cycles to find the TCB, a few more to compute the offset and maybe do a pointer chase.

Any sort of solution for per-thread storage in scheme, whether fluids or parameters, needs to be no more complex than the above.  The scheme equivalent of the  TCB for the currently running thread needs to be instantly available, and not require some traversal of an a-list or hash table.  The location of the parameterized value should not be more than a couple of pointer-chases away; dereferencing it should not require locks or atomics. It needs to be fast.
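The semantics (though not the performance) of such per-thread slots can be sketched with Python's threading.local. Note this is only an analogy for the TCB scheme described above: CPython actually backs threading.local with a per-thread dict, not fixed offsets:

```python
import threading

tls = threading.local()   # one storage slot per OS thread

def worker(value, out, i):
    tls.x = value          # each OS thread sees its own tls.x
    out[i] = tls.x         # reads back its own value, never the other's

def demo():
    out = [None, None]
    t1 = threading.Thread(target=worker, args=(10, out, 0))
    t2 = threading.Thread(target=worker, args=(20, out, 1))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return out
```

Each thread writes and reads its own slot with no locking; the question Linas raises is whether a Scheme implementation can make this read cost dozens of cycles rather than a table search.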

It needs to be fast to avoid the conclusions of the earlier-mentioned "Curse of Lisp" essay: most scheme programmers are going to be smart enough to cook up their own home-grown, thread-safe parameter object, but their home-grown thing will almost surely have mediocre performance. If "E" is going to be an effective interface to the OS, it needs to be fast.  If you can't beat someone's roll-your-own system they cooked up in an afternoon, what's the point?

Conclusion: srfi-226 should almost surely come with a performance-measurement tool-suite, that can spit out hard numbers for parameter-object lookups-per-microsecond while running 12 or 24 threads.  If implementations cannot get these numbers into the many-dozens-per-microsecond range, then ... something is misconceived in the API.
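A minimal sketch of what such a measurement tool could look like, with a thread-local slot standing in for a parameter object (the numbers produced will vary wildly by host, and the function name is illustrative):

```python
import threading
import time

def lookups_per_microsecond(n=200_000):
    """Time n reads of a thread-local slot and report lookups per
    microsecond, in the spirit of the proposed tool-suite."""
    tls = threading.local()
    tls.param = 42
    acc = 0
    start = time.perf_counter()
    for _ in range(n):
        acc += tls.param        # the lookup under test
    elapsed_us = (time.perf_counter() - start) * 1e6
    return n / elapsed_us, acc
```

A real suite would run this under 12 or 24 threads and across implementations; the point is simply that the metric is cheap to collect and hard to argue with.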

My apologies, I cannot make any specific, explicit recommendations beyond the above.

-- Linas.

--
You received this message because you are subscribed to the Google Groups "scheme-reports-wg2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scheme-reports-...@googlegroups.com.

Marc Nieper-Wißkirchen

unread,
Mar 19, 2022, 4:24:31 AMMar 19
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
It should be noted that SRFI 226 parameter objects are not meant as a
replacement for thread locals. These form a different concept; there
will have to be an extra set of procedures for fast TLS. While SRFI
18's thread-specific-set! is certainly some primitive on top of which
TLS could be implemented, it is not sufficient for interoperability
between independent libraries accessing it.

On Sat, Mar 19, 2022 at 6:50 AM Linas Vepstas wrote:

Per Bothner

unread,
Mar 19, 2022, 6:06:18 AMMar 19
to scheme-re...@googlegroups.com
On 3/18/22 22:50, Linas Vepstas wrote:
> So, a few cycles to find the TCB, a few more to compute the offset and maybe do a pointer chase.

How does that work when fluids/parameters may be created dynamically?

> Any sort of solution for per-thread storage in scheme, whether fluids or parameters, needs to be no more complex than the above.  The scheme equivalent of the  TCB for the currently running thread needs to be instantly available, and not require some traversal of an a-list or hash table.

I don't think it is realistic to prohibit a hash table look up.

I note that Java ThreadLocal lookups are implemented with a per-thread hash table,
indexed by the ThreadLocal object. This avoids contention/locks, which is more
important than avoiding a hash-table lookup.
https://stackoverflow.com/questions/1202444/how-is-javas-threadlocal-implemented-under-the-hood
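The Java scheme can be sketched in a few lines: each thread owns a private table indexed by the key object itself, so lookups never touch a lock (Python sketch; the class name is illustrative):

```python
import threading

class ThreadLocalKey:
    """Mimics Java's ThreadLocal layout: one private dict per thread,
    indexed by the key object, so a lookup never contends with other
    threads and needs no lock."""
    _per_thread = threading.local()   # holds this thread's table

    def _table(self):
        t = getattr(self._per_thread, "table", None)
        if t is None:
            t = self._per_thread.table = {}
        return t

    def set(self, value):
        self._table()[self] = value

    def get(self, default=None):
        return self._table().get(self, default)
```

A fresh thread starts with an empty table, so it sees the default rather than another thread's value, which is the contention-free behavior Per describes.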

John Cowan

unread,
Mar 19, 2022, 2:57:29 PMMar 19
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Mar 19, 2022 at 1:50 AM Linas Vepstas <linasv...@gmail.com> wrote:
 
It doesn't seem appropriate to talk about C/C++ here, so I will skip that discussion, except to say: it's all about performance. Flexibility is great but if an API cannot be implemented so it's fast, then it's not a good API.

A counterexample is Earley parsing, which is O(n^3) but allows arbitrary grammars. If you can force your parsing problem into a LALR(1) or LL(k) grammar, then you can go much faster.  But if you don't get to pick the grammar, you simply have to pay the cost.  

In general, optimization isn't about keeping costs low, it's about keeping the cost-to-benefit ratio high.  For example, the most efficient garbage collector is the ε-GC, which doesn't actually collect any garbage but is quite satisfactory for bounded-length jobs when there is plenty of memory.

For a job that took hours to run, "half" is a big deal. I vaguely recall that parameters were taking tens of thousands of CPU cycles, or more, to be looked up.

It depends on who or what is waiting for it to finish.  Expending a large fraction of a day is not a problem in a job that runs once a week.  Obviously that wasn't the case with your program, but in general the tendency to assume that your problem is everybody's problem has to be resisted in standards work.

Linas Vepstas

unread,
Mar 19, 2022, 3:09:00 PMMar 19
to scheme-re...@googlegroups.com
Guile distinguishes between thread-local storage and dynamic state.  Dynamic state is "usually" associated with a single thread, and so it fits the simple-minded idea of per-thread storage. However, it is a bit more: it can be passed around to thunks, exceptions, and continuations (that is, it can be reified). Dynamic state is a table of "fluids"; each fluid is a location that stores a single value.  By contrast, "true" thread-local fluids cannot be captured in this way; they really are local to that thread, only.   Thus, although dynamic state is usually per-thread, there are unusual cases where it might be shared by several threads (entailing concerns about locking, consistency, etc.). But since thread-local storage cannot be reified, those concerns aren't present there.

On Sat, Mar 19, 2022 at 5:06 AM Per Bothner <p...@bothner.com> wrote:
On 3/18/22 22:50, Linas Vepstas wrote:
> So, a few cycles to find the TCB, a few more to compute the offset and maybe do a pointer chase.

How does that work when fluids/parameters may be created dynamically?
 
In C/C++, there's nothing dynamic; offsets are known at compile time. They're grouped together in a text segment by the linker, and the loader, aka glibc, makes a copy of that segment for each thread that is spawned. 

In scheme, the analog of that text segment is the "dynamic state". For each new fluid, you append to the end of it.   In a let/letrec context, you point to that location:   (let ((x some-fluid)) ...) means that x finds the current dynamic state (of which there can only be one, for that thread) and then moves to some fixed offset in that state (as all threads spawned from this one must necessarily have the same fixed location.)

This now highlights a limitation of the reification of dynamic states: you cannot take a dynamic state that has only a handful of fluids in it and bind it to a blob of code that is expecting lots of stuff in there.   This is similar to (the same as?) the case where you might want to set "the current execution environment".  See, however, below re: Java.
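The flat-vector picture above can be sketched as follows. This is a toy model of the idea, not how Guile is actually implemented, and the names are illustrative:

```python
class DynamicState:
    """The dynamic state as a flat vector: creating a fluid appends a
    slot and returns its fixed offset, so a lookup is one index
    operation rather than an a-list or hash-table search."""
    def __init__(self):
        self.slots = []

    def make_fluid(self, initial):
        self.slots.append(initial)
        return len(self.slots) - 1   # the fluid *is* its offset

    def fluid_ref(self, fluid):
        return self.slots[fluid]

    def fluid_set(self, fluid, value):
        self.slots[fluid] = value
```

The reification limitation falls out immediately: installing a DynamicState with fewer slots than some code expects makes the higher offsets invalid.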

When TLS was a brand-new concept, several formal papers, suitable for academic journals, were written, spelling out the details: exactly what the responsibilities of the compiler, linker and loader are, exactly which blocks are located where, what pointers point to what.  These formed the primary reference for the implementations.  Reading through the implementations, there are assorted differences and improvements and extensions.  Finally, there are the C/C++ standards committee documents, which mention none of these implementation details.

I don't know how to map this to the srfi process. The srfis are sort of like the C/C++ standards documents: the programmers read them, but they don't explain how implementations actually accomplish this stuff.   If I read  https://www.gnu.org/software/guile/manual/html_node/Fluids-and-Dynamic-States.html it does not explain "how it works".  It doesn't make any performance guarantees; it doesn't even set performance expectations.  Yet clearly, this is complicated enough that a "reference implementation" probably should explain "how it works", instead of adding a footnote: "read the source, Luke".

Performance guarantees should be set:  there's been discussion of some srfi for constant strings, simply because someone, somewhere perceives that ordinary scheme strings aren't fast enough for some kind of efficient computation.  Now, strings are easy: we all have a pretty clear idea in our heads of what is going on with a string.  Threads, not so much.
 

> Any sort of solution for per-thread storage in scheme, whether fluids or parameters, needs to be no more complex than the above.  The scheme equivalent of the  TCB for the currently running thread needs to be instantly available, and not require some traversal of an a-list or hash table.

I don't think it is realistic to prohibit a hash table look up.

Umm, I'm not convinced hash maps are needed. But then, I have never written a scheme implementation.  Do you use hash maps to perform lookups for continuations? Do you use hash maps for let and let-rec?  Do you use hash maps for other kinds of execution environments?  Libraries? Loadable modules?  If so, then, sure, I guess hash-maps are unavoidable.   If not, then not. 

Now, C/C++ does not have a garbage collector, so they figured out how to do all this stuff using flat tables and offsets, shared-lib glue stubs and loader trampolines.  Java has a garbage collector, and the Java folks have a completely different view of the world (apparently involving hash maps).  Now, you could say "aha! Scheme has a garbage collector, therefore scheme should take inspiration from java!" but I'm not convinced this is the correct analogy.

--linas


I note that Java ThreadLocal lookups are implemented with a per-thread hash table,
indexed by the ThreadLocal object.  This avoids contention/locks, which is more
important than avoiding a hash-table lookup.
https://stackoverflow.com/questions/1202444/how-is-javas-threadlocal-implemented-under-the-hood
--
        --Per Bothner
p...@bothner.com   http://per.bothner.com/


Linas Vepstas

unread,
Mar 19, 2022, 3:58:29 PMMar 19
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
Hi John,

I have no bones to pick, here, but one:


On Sat, Mar 19, 2022 at 1:57 PM John Cowan <co...@ccil.org> wrote:

 in general the tendency to assume that your problem is everybody's problem has to be resisted in standards work.

C/C++ sets the benchmark for performance, and all other programming languages rate themselves by how fast they run compared to C/C++.  Some languages have a charter that explicitly mentions C/C++-like performance:  CaML explicitly mentions this as a language design goal.  Some languages were not explicit about it, but have been forced into it by powerful commercial forces: that's Java.  Most languages try to be reasonable, but they're not putting design decisions under a performance microscope. Is this where you want to be with scheme?

What's the charter for scheme, really? "To be popular like python" seems unlikely; even MIT is no longer teaching from SICP.  "To be a laboratory for exploration" seems to be accurate: we have three dozen scheme implementations precisely because it's easy and fun to explore. We have things like racket precisely because part of the fun is exploring "what else the language could be", without jumping ship to join the haskell crowd (which btw has only one implementation not dozens. Two if you count f#)

Standardization says: "we've decided what this should be, we've made up our minds, and we're going to set it in stone."  At this point, the standard should allow for at least one high-performance implementation, even if this is not the simplest or smallest-possible implementation.  It's easy to wave a white flag, throw one's hands in the air, and declare that "hash tables are the solution to all implementation problems". 

I'd like scheme to be as-fast-as-possible, as what else is it offering the world? it's not going to be python or java or rust. It may as well be tight.

--linas

Marc Nieper-Wißkirchen

unread,
Mar 19, 2022, 6:11:37 PMMar 19
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Mar 19, 2022 at 8:58 PM Linas Vepstas
<linasv...@gmail.com> wrote:

> I'd like scheme to be as-fast-as-possible, as what else is it offering the world? it's not going to be python or java or rust. It may as well be tight.

I think we can agree that we do not wish to standardize APIs in a way
that prevents efficient implementations. But "efficient" is a notion
relative to the abstraction we want to model in each particular case.
If we have a Scheme API that models something like pthread's
thread-local variables, we would want it to be implementable
roughly as efficiently as the native C API. On the other hand, if the
more correct model for a certain use case would be dynamic state bound
to the current continuation, Scheme can offer an API for this
abstraction (parameter objects, the current parameterization, etc.)
that is not offered by C and which may be more correct in some use cases,
but slower than thread-local storage.

For an API like pthread's thread-local variables, we are presented
with different options for an API. One possible API could allow
general Scheme objects as keys to thread-local variables; another
possible API choice could be to restrict keys to opaque values
returned by some procedure, say, (thread-local-key). The latter API
can more likely directly map to the pthread API, and won't be less
efficient (up to a constant factor) than C; this may not be true for
the former API.
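The second option can be sketched along the lines of pthread_key_create. This is a toy Python model; the names thread_local_key etc. echo the suggestion above and are not any existing SRFI's API:

```python
import itertools
import threading

_next_key = itertools.count()
_store = threading.local()      # one cell table per OS thread

def thread_local_key():
    """Like pthread_key_create: returns a fresh opaque key. Callers
    cannot forge keys, which leaves an implementation free to map them
    onto flat per-thread offsets instead of a general-purpose table."""
    return next(_next_key)

def _cells():
    cells = getattr(_store, "cells", None)
    if cells is None:
        cells = _store.cells = {}
    return cells

def thread_local_set(key, value):
    _cells()[key] = value

def thread_local_ref(key, default=None):
    return _cells().get(key, default)
```

Because keys are opaque, the implementation controls their representation; with arbitrary Scheme objects as keys, it would instead be forced into general hashing and comparison.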

Marc

Linas Vepstas

unread,
Mar 20, 2022, 12:57:02 AMMar 20
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Mar 19, 2022 at 5:11 PM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:

For an API like pthread's thread-local variables, we are presented
with different options for an API. One possible API could allow
general Scheme objects as keys to thread-local variables; another
possible API choice could be to restrict keys to opaque values
returned by some procedure, say, (thread-local-key). The latter API
can more likely directly map to the pthread API, and won't be less
efficient (up to a constant factor) than C; this may not be true for
the former API.

An opaque key returned by (thread-local-key) is the only thing that makes sense to me.

Allowing an arbitrary scheme object to be a key in a table is ... umm.... it opens the door to some very interesting conversations about databases. I'd rather slam the door shut on that.  (You've already seen my tirades about databases.)

--linas

Marc


--

John Cowan

unread,
Mar 20, 2022, 6:34:44 PMMar 20
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Mar 19, 2022 at 3:58 PM Linas Vepstas <linasv...@gmail.com> wrote:
 
Most languages try to be reasonable, but they're not putting design decisions under a performance microscope. Is this where you want to be with scheme?

Those who want performance above all else may move away from C++ someday, but it isn't going to be towards any Lisp.  Lisp, like Basic and many other languages, has an inferiority complex about speed: at some point the most readily available implementation was a slow interpreter, and the whole language got tarred with that brush.

What's the charter for scheme, really? "To be popular like python" seems unlikely; even MIT is no longer teaching from SICP.

Because MIT decided that the comp-sci course every student takes should now be about writing glue rather than data structures and algorithms, a plausible choice in the current environment.  Of course Python is not *especially* for writing glue, it's just that people do in fact write glue in it.  Similarly, Python's popularity is primarily a result of being in the right place (not Japan like Ruby) at the right time (when people were fed up with Perl).  It wasn't designed with popularity in mind, any more than Java was designed with performance in mind (it just so happened that advanced JIT work was being done at the same time in the same company).
I'd like scheme to be as-fast-as-possible, as what else is it offering the world?

Syntax extension, for one thing: the Lisps are tools that provide expert users with a huge amount of expressive power.  Of course, many programmers think expressive power is a Bad Thing, and in clumsy hands it is dangerous.  Scheme, unlike CL, provides multiple styles of syntax extension from most to least safe.

Nala Ginrut

unread,
Mar 20, 2022, 9:18:28 PMMar 20
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org


On Mon, Mar 21, 2022, 06:34 John Cowan <co...@ccil.org> wrote:


On Sat, Mar 19, 2022 at 3:58 PM Linas Vepstas <linasv...@gmail.com> wrote:
 
Most languages try to be reasonable, but they're not putting design decisions under a performance microscope. Is this where you want to be with scheme?

Those who want performance above all else may move away from C++ someday, but it isn't going to be towards any Lisp.  Lisp, like Basic and many other languages, has an inferiority complex about speed: at some point the most readily available implementation was a slow interpreter, and the whole language got tarred with that brush.


Maybe we need a committee P for performance suggestions. P needn't be binding; it could produce a series of rules that implementations may choose to follow, and users could check which rules an implementation follows to learn about its performance characteristics.

Performance could then be rated under the P rules, so that people don't have to argue about benchmarks among the various Scheme implementations, and there would be reasonable performance criteria for choosing an implementation.

The P rules shouldn't be mandatory, since many Schemes choose to be slow interpreters in exchange for better dynamic features.

We don't have to organize the P discussion up front. We can discuss it case by case whenever a performance argument arises, and record the conclusion as a rule in P.
This kind of passive working style may save some work.

Comments?

Best regards.


Nala Ginrut

unread,
Mar 20, 2022, 9:43:36 PMMar 20