nREPL.Next


Chas Emerick

Nov 17, 2011, 5:09:59 PM
to cloju...@googlegroups.com
I'm opening up a design page around what I'm calling nREPL.Next:

> nREPL has been around for a bit over a year now, with moderate success in attaining its objective to provide a tool-agnostic networked Clojure REPL implementation. Current events are such that now may be a good time to apply some of the lessons learned over that period to maximize nREPL’s applicability and reach.

https://github.com/clojure/tools.nrepl/wiki/nREPL.Next

I have enumerated the key pain points I am aware of from building and working with nREPL, along with other issues described to me by people who might have worked with nREPL but could not, for various reasons. I'd like to address all such problems, and have included a strawman proposal that describes one approach to addressing the most pressing issues.

I would like feedback, suggestions, and counterproposals — in particular from past, present, and future REPL client and server implementors, and from leads and contributors of Clojure tools, plugins, IDEs, and so on.

Thank you in advance,

- Chas

Meikel Brandmeyer

Nov 17, 2011, 5:45:00 PM
to cloju...@googlegroups.com
Hi Chas,

I will look into this in detail, hopefully this weekend.

One short note which I can tell immediately: without “done”, nrepl doesn't fly for me. I need a marker that the evaluation of the whole message is – well – done. Vim forks out a client and waits for it to finish. But how does the client know that it is finished? If there are multiple forms in the message there might be multiple “value” answers. So the client needs a dedicated “I'm done with the message” answer. Then the client may terminate with a result. Output which happens afterwards could be cached and delivered on the next connect to that repl. As long as the client runs, Vim is frozen for the user.

I see now why you want to use agents. I think agents are a perfect fit. I'll have to play a little with this idea.

Meikel

Chas Emerick

Nov 17, 2011, 6:59:06 PM
to cloju...@googlegroups.com

On Nov 17, 2011, at 5:45 PM, Meikel Brandmeyer wrote:

> One short note which I can tell immediately: without “done” nrepl doesn't fly for me. I need a marker that the evaluation of the whole message is – well – done. Vim forks out a client and waits for it to finish. But how does the client know that it is finished? If you have multiple forms in the message there might be multiple “value” answers. So the client needs a dedicated “I'm done with the message answer.” Then the client may terminate with a result. Output which happens afterwards could be cached and be delivered on the next connect to that repl. As long as the client runs, Vim is frozen for the user.

The whole concept of "done" came about prior to binding conveyance and the potential for reliably redirecting output sent to *out* and *err* back to the connected client. If you start a future that prints to *out* N minutes later, it's impossible for nREPL to know when that future is finished.

An event indicating that all messages related to the same-thread evaluation of the provided code have been sent can be delivered reliably, and would match up with what "done" meant at the beginning, when binding conveyance wasn't around.

Caching *out*/*err* to be picked up later sounds like a big can of worms, but I suppose it's something that must be addressed if nREPL is to be accessed via transient connections to remote hosts (i.e. HTTP).

- Chas

Kevin Downey

Nov 17, 2011, 7:06:23 PM
to cloju...@googlegroups.com

possible something like

client: I want a set of io descriptors (stdin,stdout,stderr) called X, id is 4
server: ok for 4
client: evaluate form "(doto (+ 1 3) prn)" with id 1 and io descriptors X
server: response for 1 is 4
server: stdout for X is 4

client: write "1" to stdin of X, id is 10
server: ok for 10
client: evaluate form "(read)" with id 32 and io descriptors X
server: response for 32 is 1


> - Chas

--
And what is good, Phaedrus,
And what is not good—
Need we ask anyone to tell us these things?

Chas Emerick

Nov 17, 2011, 9:40:59 PM
to cloju...@googlegroups.com
On Nov 17, 2011, at 7:06 PM, Kevin Downey wrote:

>> Caching *out*/*err* to be picked up later sounds like a big can of worms, but I suppose it's something that must be addressed if nREPL is to be accessed via transient connections to remote hosts (i.e. HTTP).
>
> possible something like
>
> client: I want a set of io descriptors (stdin,stdout,stderr) called X, id is 4
> server: ok for 4
> client: evaluate form "(doto (+ 1 3) prn)" with id 1 and io descriptors X
> server: response for 1 is 4
> server: stdout for X is 4
>
> client: write "1" to stdin of X, id is 10
> server: ok for 10
> client: evaluate form "(read)" with id 32 and io descriptors X
> server: response for 32 is 1

Sure — though things get trickier with *in* when the client's evaluated form is pulling from it. That's when we get into territory with the server prompting for a write to *in*. A solved problem with swank (at least to some degree), but getting that right may be a delicate thing (esp. if the same semantics should surface over other transports).

I think correct interaction semantics actually fall right out of handling client sessions via agents. The trickier bit may be retention policy, how to set it, how to query it, what to do after a client connects after 100MB of (retained?) junk has been written to *out*, etc.
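For illustration, one rough shape retention could take is a bounded buffer that drops the oldest output once a size limit is exceeded. This is only a sketch with hypothetical names and a hypothetical drop-oldest policy, not a concrete proposal:

(defn output-buffer []
  (atom [clojure.lang.PersistentQueue/EMPTY 0]))   ; [chunks, retained char count]

(defn append! [buf limit ^String s]
  (swap! buf (fn [[q size]]
               ;; drop the oldest chunks until we're back under the limit
               (loop [q (conj q s), size (+ size (count s))]
                 (if (> size limit)
                   (recur (pop q) (- size (count (peek q))))
                   [q size])))))

(defn drain! [buf]
  ;; not atomic w.r.t. concurrent appends; good enough for a sketch
  (let [[q _] @buf]
    (reset! buf [clojure.lang.PersistentQueue/EMPTY 0])
    (apply str q)))

A reconnecting client would then receive whatever (drain! buf) returns as accumulated output, possibly along with a flag indicating that older output was discarded.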

- Chas

Meikel Brandmeyer

Nov 18, 2011, 12:33:04 AM
to cloju...@googlegroups.com
Hi,

On 18.11.2011 at 00:59, Chas Emerick wrote:

> The whole concept of "done" came about prior to binding conveyance and the potential for reliably redirecting output sent to *out* and *err* back to the connected client. If you start a future that prints to *out* N minutes later, it's impossible for nREPL to know when that future is finished.

Yes. At the moment this output gets lost for vimclojure. But that's life. I can't wait for the future if it is, for example, supposed to kick off a server thread or is something long-running. Then my Vim would be frozen until the end of time. Currently things work like this:

client: connects to server
client: {:request-id 5}
server: {:id 5 :status :ok}
client: {:id 5 :code "(future-call do-something)" }
server: {:id 5 :value "#<future bla :pending>" }
server: {:id 5 :status :done}
client: disconnects from server
server: discards output if 5 was a transient repl, but could cache output in case it is a persistent one.
(optionally):
client: connects to server
client: {:request-id 5}
server: {:id 5 :status :ok :stdout "Did something."}

Something like that. Discarding the output is sad, but life is hard.

Just as nrepl doesn't know what the future does, the client doesn't know what the user code does. Think for example:

client: {:id 5 :code "(def x 1) (def y 2) (def z 3)"}
server: {:id 5 :value "#'user/x"}
server: {:id 5 :value "#'user/y"}

How does the client know to wait longer for #'user/z? And afterwards, how does it know that it has to disconnect after the #'user/z?

> An event to indicate that all messages related to the same-thread evaluation of the provided code can be delivered reliably, and would match up with what "done" meant at the beginning when binding conveyance wasn't around.

It still does. “done” has nothing to do with input or output.

> Caching *out*/*err* to be picked up later sounds like a big can of worms, but I suppose it's something that must be addressed if nREPL is to be accessed via transient connections to remote hosts (i.e. HTTP).

It's even worse: you also have to care for System/out and System/err, because Java code can print to them directly—bypassing *out* and *err*.

Meikel

Laurent PETIT

Nov 18, 2011, 1:19:23 AM
to cloju...@googlegroups.com
Hi,

2011/11/17 Meikel Brandmeyer <m...@kotka.de>
So using agents will imply that a user entering new requests in his REPL while another is still being processed in the background (long computation, etc.) will result in several sessions being created, each one "derived" from the other? What if the user does set! some things in one or the other? For all set!s to take effect, I imagine the client REPL would then have to explicitly "kill" all the sessions created under the hood, and just keep the one corresponding to the last sent command.

Either I didn't understand the proposal, or there's something weird about it. As if the notions of "session" and "asynchronous request" were conflated to "deal with" the implementation detail of the server using an agent, "working around that" by creating new sessions (so as not to make the session operations sequential) ...

For example in a servlet container, an HttpSession can be accessed in parallel by several browser requests, ...
 


Meikel Brandmeyer

Nov 18, 2011, 2:20:34 AM
to cloju...@googlegroups.com
Salut Laurent,

it is important to distinguish the different layers.

1. Connection sessions

A client connects to the server by some means (socket, message queue, in-process, whatever). Via this connection data is transferred. What data is irrelevant.

2. Repl sessions

This comprises the state of a repl as you would see it by starting one in a console. You get all the thread bindings like *warn-on-reflection* etc. and goodies like *1, *2, *3 and *e. The code sent by the client is evaluated in the context of such state. The interesting thing is: by assigning an id to the repl session you can run multiple such repl sessions in one process. When a client sends some code for evaluation, it also specifies the id of the repl session in whose context the evaluation should happen. Things like dynamic bindings and *1 etc. are independent between the repl sessions. However, executing a “(def x 5)” makes this x available to all sessions. So things are independent, but not isolated.

Now things get interesting: you can have different relationships between connection and repl sessions — 1:1, n:1, 1:m, n:m. You can multiplex multiple repl interactions in independent repl sessions over one connection session. You can talk to the same repl session from different connections. You can connect, say something to the repl, disconnect, reconnect, say something to the repl, ...

Upon connection an agent is created holding the repl state. A map in an atom would maintain the repl session id → agent mapping. Sending code to this repl session would basically send an action to the agent. The agent does the evaluation, sends output via some provided i/o handles, and signals the finish of the evaluation (“done”, really! this is important!). The result of the action is naturally the dynamic state of the bindings after evaluation.

From the usage scenarios, the only interesting ones here are the n:* ones. Many connections talk to the same repl session. And here the agent gives natural coordination.

1. Requests

The requests are served by the agent in the order they arrive. The state is always consistent. Commands see everything previous commands have done. Long-running computations block the next commands until they are finished. !!! BUT !!! Only in this specific repl session! Other repl sessions in the same server happily accept further input.

2. Forking

If you want to fork a repl session, that is perfectly possible! Just deref the agent (consistent snapshot of repl state!), create a new agent and assign an id to it. Et voila: two independent repl sessions with initially the same state.

I jotted down a very rough sketch of how this could look: http://paste.pocoo.org/show/509605/
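In case that paste is unreachable, here is a minimal sketch in the same spirit (all names are illustrative, and i/o redirection plus the "done" response are reduced to comments):

(def sessions (atom {}))   ; repl session id -> agent holding thread bindings

(defn new-session!
  ([] (new-session! {#'*ns* (the-ns 'user)
                     #'*warn-on-reflection* false
                     #'*print-length* nil}))
  ([bindings]
   (let [id (str (java.util.UUID/randomUUID))]
     (swap! sessions assoc id (agent bindings))
     id)))

(defn fork-session!
  ;; deref gives a consistent snapshot; the new agent starts from it
  [id]
  (new-session! @(get @sessions id)))

(defn evaluate!
  ;; actions run in arrival order; the agent's new value is the dynamic
  ;; state after evaluation, and this is also where "done" would be sent
  [id code]
  (send-off (get @sessions id)
            (fn [bindings]
              (with-bindings bindings
                (doseq [form (read-string (str "[" code "]"))]
                  (prn (eval form)))    ; stand-in for sending :value responses
                (get-thread-bindings)))))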

Does that clarify things?

Meikel

Laurent PETIT

Nov 18, 2011, 4:54:40 AM
to cloju...@googlegroups.com


2011/11/18 Meikel Brandmeyer <m...@kotka.de>
Hello Meikel,

this indeed clarifies things. It is what I had in mind after reading the whole thing, after all. I was just wondering if we need all this (still thinking about it, I'm more and more leaning towards the proposal, but not totally there yet). There is a possibility to maintain consistency by just using an atom to hold the session state. Derefing the atom gives a consistent snapshot of the session state for starting a new request on the same repl session. Once the request's work is done, the atom's state is updated with compare-and-set!. If another request has finished before this one, the compare-and-set! fails, and the state is dropped (optimistic locking).
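In code, that optimistic-locking idea might look roughly like this (hypothetical names, a single form per request, output handling omitted):

(defn eval-optimistic! [session code]          ; session is an atom of bindings
  (let [snapshot @session
        [value bindings] (with-bindings snapshot
                           [(eval (read-string code)) (get-thread-bindings)])]
    ;; publish the resulting dynamic state only if nobody changed it meanwhile;
    ;; otherwise the state change is dropped, as described above
    (compare-and-set! session snapshot bindings)
    value))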

This is more or less the way HttpSessions are handled in servlet containers, and probably why I am thinking in terms of this way of doing things. (But using atoms would maintain better consistency of the overall state, of course).

The question is: would the "forking" you describe be made transparent to the user, or would it be an explicit action from the user? Currently in CCW, this is transparent. If it remains so, then I don't know what final "state" to present the user with when all sessions have finished: the initial state or the final state?


Meikel Brandmeyer

Nov 18, 2011, 5:48:48 AM
to cloju...@googlegroups.com
Salut Laurent,


On Friday, November 18, 2011 10:54:40 UTC+1, lpetit wrote:

    this indeed clarifies things. It is what I had in mind after reading the whole thing, after all. I was just wondering if we need all this (still thinking about it, I'm more and more leaning towards the proposal, but not totally there yet).

Eh, yes. I need this. When working on Clojure code I do hot code evaluation in parallel to (sometimes more than one) repl sessions without interfering with their *1, etc. I want each repl session to be independent, e.g. “being” in different namespaces. So the ability to run several repl sessions, including one-shots for hot code eval or doc retrieval etc., is required.

Also, the repl sessions have to be non-transient; that means they must outlive any given client connection, because I have to terminate the connection between commands. (This of course doesn't matter for one-shot sessions.)

I really have some strong requirements about what a repl server has to be able to do in order to integrate with VimClojure, due to some serious limitations of plain Vim. So I'll state again what I already said for the original nRepl: I need all the features I describe to connect VimClojure with an nRepl server. However, I will not force any of this onto nRepl. If the requirements are too strong to result in a generally usable solution, I will roll my own for VimClojure for the greater good of the other nRepl consumers.

Nevertheless, I hope we get a solution addressing everybody's needs, including VimClojure's.


    There is possibility to maintain consistency by just using an atom for holding the session state. Derefing the atom gives a consistent snapshot of the session state for starting a new request on the same repl session. Once the request's work is done, the atom's state is compareAndSet. If another request has finished before this one, compareAndSet fails, and the state is dropped (optimistic locking).

I prefer the behaviour of agents. When two commands arrive they are executed in order, one after the other. This is normal behaviour of every shell I know. Be it clojure, factor, zsh, whatever...


    The question is: would the "forking" you describe be made transparent to the user, or would it be an explicit action from the user ? Currently in CCW, this is transparent. If it remains so, then I don't know what final "state" to present the user with when all sessions have finished: the initial state, the final state ?

Forking would obviously be an explicit user action. A fork request would clone a given repl session, and henceforth both will be independent of each other, though not isolated.

Meikel

Chas Emerick

Nov 18, 2011, 8:49:44 AM
to cloju...@googlegroups.com

On Nov 18, 2011, at 12:33 AM, Meikel Brandmeyer wrote:

>> An event to indicate that all messages related to the same-thread evaluation of the provided code can be delivered reliably, and would match up with what "done" meant at the beginning when binding conveyance wasn't around.
>
> It still does. “done” has nothing to do with input or output.

Well, it did until 1.3 provided binding conveyance — in 1.2, things sent to *out* via a future just dropped into the process's stdout and weren't returned to the client. A better word than "done" would be nice, to imply only that all responses containing evaluated values have been sent.

>> Caching *out*/*err* to be picked up later sounds like a big can of worms, but I suppose it's something that must be addressed if nREPL is to be accessed via transient connections to remote hosts (i.e. HTTP).
>
> It's even worse: you also have to care for System/out and System/err, because Java code can print to them directly—bypassing *out* and *err*.

System/out and /err are a separate, less pleasant topic.

- Chas

Chas Emerick

Nov 18, 2011, 9:10:57 AM
to cloju...@googlegroups.com

On Nov 18, 2011, at 1:19 AM, Laurent PETIT wrote:

Either I didn't understand the proposal, either there's something weird about it. As if the notion of "session" and "asynchronous request" were conflated to "deal with" the implementation detail of the server using an agent and "working around that" by creating new sessions (to not make the session operations sequential) ...

Quite right.  Using agents does mix up the retention of sessions and evaluation environments. 

This leaves us with three discrete things, not two:

1. Messages ("evaluate this")
2. Sessions (the dynamic context within which a message may be evaluated)
3. Evaluation queues (an [optional?] point of synchronization wherein multiple messages may be guaranteed to be evaluated sequentially within the context of a single session)

Providing access to all three concepts would allow e.g. ccw to retain its current semantics if we so chose (send messages whenever you like, often against the same session; they'll be evaluated concurrently, and the state of the session is accepted to be a race condition), and allow those that want console-like semantics to opt into them (send messages whenever you like, against the same session, but evaluate them sequentially on queue XYZ).

Laurent, as you suggest, atoms seem perfectly appropriate for #2; agents may be appropriate for #3.
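To make that split concrete, a rough sketch (hypothetical names; transport, responses, and error handling elided):

;; #2: a session is an atom of dynamic bindings
(defn new-session []
  (atom {#'*ns* (the-ns 'user), #'*print-length* nil}))

(defn- run [session code]
  (let [[value bindings] (with-bindings @session
                           [(eval (read-string code)) (get-thread-bindings)])]
    (reset! session bindings)    ; last-one-in wins when evaluations overlap
    value))

;; #3: an evaluation queue is an agent used purely as a point of serialization
(def queues (atom {}))           ; queue id -> agent

(defn handle [session {:keys [code queue]}]
  (if queue
    (let [q (get (swap! queues #(if (% queue) % (assoc % queue (agent nil)))) queue)]
      (send-off q (fn [_] (run session code))))       ; sequential per queue
    (future (run session code))))                     ; concurrent otherwise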

The alternative would be to say that each session's evaluations _should_ be performed sequentially, that last-one-in semantics for updating dynamic context are an unnecessary complication, and you should identify N sessions (perhaps all clones of a common parent if you so wish) to support N concurrent evaluations.

I note that the most widespread notion of "session" (HTTP) does not take this position, but perhaps it is important enough to support so that clients can offload ordering of evaluations to the server.

On Nov 18, 2011, at 5:48 AM, Meikel Brandmeyer wrote:

I prefer the behaviour of agents. When two commands arrive they are executed in order, one after the other. This is normal behaviour of every shell I know. Be it clojure, factor, zsh, whatever...

Granted, but if we only wanted a shell, we'd be using telnet. :-)

IIRC, doesn't vim functionally serialize all of your interactions with external processes anyway?

Tangentially, I would also expect "forking" sessions to be an explicit operation.  Perhaps a better name for that operation would be "clone", since "fork" has some inapplicable process-related implications.

I clearly have some work to do on the strawman. :-)

Thanks,

- Chas

Meikel Brandmeyer

Nov 18, 2011, 9:17:14 AM
to cloju...@googlegroups.com
Hi,

how does nrepl guarantee to send "done" only after the future has finished?

"code"
"(do (future (Thread/sleep 5000) (println "Bar")) nil)"

The future call returns immediately, is hidden from nrepl, and prints something after some arbitrary time. I don't think that nrepl can handle this, hence "done" in this sense does not work. Sending "done" after all code forms have been evaluated, on the other hand, is always possible.

Different name: "eval done"?

Meikel

Chas Emerick

Nov 18, 2011, 10:00:34 AM
to cloju...@googlegroups.com
Right, it doesn't — never did, really.

"done" was a bad name from the start.  Given 1.2 binding semantics, "done" was intended to mean that all messages that *could* be delivered related to a prior request had been delivered.  Once binding conveyance came along, no such proclamation could be made.

- Chas


Meikel Brandmeyer

Nov 18, 2011, 10:06:23 AM
to cloju...@googlegroups.com
Hi,

maybe we should enrich the wiki page with some use cases. I will try to collect some from my experience over the weekend.

Meikel

Chas Emerick

Nov 21, 2011, 10:46:10 PM
to cloju...@googlegroups.com
I and others have updated the wiki page with some significant edits based on discussion here and elsewhere, including:

* a fleshed out discussion of bencode
* a new wire protocol suggestion (tagged netstrings)
* a rethinking of the session/evaluation queue notion as outlined in the exchange from last week

https://github.com/clojure/tools.nrepl/wiki/nREPL.Next

On the wire protocol front, I am currently leaning towards bencode, and am nearly ready to relegate my original netstring-only mashup to the bin.

Review and comments are greatly appreciated, especially from:

* Emacs / SLIME / swank hackers
* Existing Clojure tool implementors / contributors
* ClojureScript browser-repl hackers
* likely implementors of non-Clojure/Java nREPL clients (esp. in languages w/ "unique" constraints)

You know who you are. ;-)

We're not in "forever hold your peace" territory (yet), but if you're going to speak, now's a good time.

Thanks,

- Chas

Kevin Downey

Nov 22, 2011, 12:17:04 AM
to cloju...@googlegroups.com
I am having trouble finding a comparison of the old (custom?)
encoding vs. netstrings vs. bencode.

Why is each better than the last?



Meikel Brandmeyer

Nov 22, 2011, 5:16:21 AM
to cloju...@googlegroups.com
Hi,

how do you support an interactive repl in a server without a "done" marker and where concurrent commands may trash the repl state?

Meikel

Meikel Brandmeyer

Nov 22, 2011, 5:32:03 AM
to cloju...@googlegroups.com
Hi,

as for bencode: the number encoding is a pain (read: "special case"), because you again have to live with unknown buffer sizes. How does enriched repl feedback fit in here?

Meikel

Meikel Brandmeyer

Nov 22, 2011, 5:35:03 AM
to cloju...@googlegroups.com
Ah. Never mind the number encoding. Still asleep...

The question of enriched repl feedback still stands, though.

Meikel

Chas Emerick

Nov 22, 2011, 5:36:49 AM
to cloju...@googlegroups.com
There's nothing like a comparison grid in the wiki, but the wiki page text does describe relative pros and cons.

In summary: The failings of the current protocol are described in detail in the initial Problems section. The custom netstrings-based protocol resolves all of those issues, but invents its own treatment for expressing message values that are e.g. a list/vector of values. bencode formalizes this; if we used it, any bencode implementation (of which there are many for all sorts of languages, a benefit all its own) would be able to read and write nREPL messages without modification.
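For a sense of scale, a simplified bencode encoder is on the order of the following (a sketch that builds a string rather than writing raw bytes, which a real implementation would do):

(defn bencode [x]
  (cond
    (integer? x)    (str "i" x "e")
    (string? x)     (str (count (.getBytes ^String x "UTF-8")) ":" x)
    (map? x)        (str "d"
                         (apply str (mapcat (fn [[k v]] [(bencode (name k)) (bencode v)])
                                            (sort-by (comp name key) x)))
                         "e")
    (sequential? x) (str "l" (apply str (map bencode x)) "e")))

;; (bencode {:op "eval" :code "(+ 1 2)" :id 1})
;; => "d4:code7:(+ 1 2)2:idi1e2:op4:evale"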

The tagged netstring notion is new to me, so I am less certain of how it sits vs. bencode. But, its promise would be that we could extend it to support inline identification of encodings / media types. It is a very new spec though (i.e. very few implementations relative to bencode), and I worry about some aspects of it (incl. type indicators being at the *end* of values, which seems like it would be a complicating factor).

Hope that helps,

- Chas

Chas Emerick

Nov 22, 2011, 5:43:49 AM
to cloju...@googlegroups.com

On Nov 22, 2011, at 5:35 AM, Meikel Brandmeyer wrote:

> Ah. Nevermind the number encoding. Still asleep...
>
> The question of enriched repl feedback still stands, though.

I'm not sure I understood the question. :-)

If you're asking about encoding of nontextual REPL results — you're right that there's nothing in bencode that helps us with that. Encoding / media type of each non-string slot in a response would need to either be transmitted in a separate slot or be identified based on the key's name (e.g. tools that support it [and indicate that via their requests' :accept value(s)] would know what to do with :jpeg-data or :overtone-data).

- Chas

Chas Emerick

Nov 22, 2011, 5:46:03 AM
to cloju...@googlegroups.com

On Nov 22, 2011, at 5:16 AM, Meikel Brandmeyer wrote:

> Hi,
>
> how do you support an interactive repl in a server without "done" marker and where concurrent commands may trash the repl state?

I removed the indication that "done" should be eliminated as a status last night. I still would like to have a better name for that concept, though.

Did you review the new section in the wiki on this ("Serializing evaluations") where the notion of an evaluation queue is described with examples? This was motivated by your and Laurent's comments earlier.

- Chas

Meikel Brandmeyer

Nov 22, 2011, 7:39:55 AM
to cloju...@googlegroups.com
Hi,

the question is: do we restrict ourselves to what bencode delivers, or do we extend bencode with a custom extension for custom data? The problem is that if you want to allow, for example, lists of jpeg images, you are in trouble if you don't have protocol support.

Another nicety of more complete protocol support for custom data is that the other side can produce arbitrary Clojure code. At the moment I have to construct strings which are again read by Clojure. Together with all the escape hell this implies, this is quite painful. However, with protocol support I could emit Clojure code programmatically without requiring a print or read implementation.

printf("(macroexpand-1 '%)", someCodeSnippet) vs. BencodeExtended([Symbol("macroexpand-1"), [Symbol("quote"), someCodeSnippet]])

While the first looks less convoluted the second one would be much more pleasant to work with.

Meikel

Meikel Brandmeyer

Nov 22, 2011, 7:51:38 AM
to cloju...@googlegroups.com
Hi,

yes, I read the new section. And I still don't understand it. I can agree that there is a race condition about the order in which the commands are executed when several commands arrive in parallel for the same session. But I can't understand why this should result in a corrupted repl state. Clojure always provides a consistent view of the world, but here we throw this idea overboard.

{:id "1" :session "A" :code (set! *warn-on-reflection* true)}
{:id "2" :session "A" :code (set! *print-length* 5)}

Why should *warn-on-reflection* being true or not depend on the order of arrival of these commands, given they arrive in parallel? If my understanding is correct, then this would be a serious flaw. If not, you are about to reinvent agents, since you then serialise the execution of the commands. A simple (swap! repl-states merge ...) doesn't do it.

Meikel

Chas Emerick

Nov 22, 2011, 8:54:42 AM
to cloju...@googlegroups.com
On Nov 22, 2011, at 7:39 AM, Meikel Brandmeyer wrote:

> the question is: do we restrict ourselves to what bencode delivers or do we extend bencode with a custom extension for custom data. The problem is, that if you want to allow for example lists of jpeg images, you are in trouble if you don't have protocol support.

A slot whose value is a list of jpeg images is not a problem in e.g. bencode (or in the netstring-only approach I originally included in the strawman as the only detailed option). It's just that the data / media type for data in that slot isn't co-located, and must be matched up by the client.

> Another nicety of more complete protocol support for custom data, is that the other side can produce arbitrary clojure code. At the moment I have to construct strings which are again read by Clojure. Together with all the escape hell this implies this is quite painful. However with protocol support I could emit programmatically Clojure code without requireing a print or read implementation.
>
> printf("(macroexpand-1 '%)", someCodeSnippet) vs. BencodeExtended([Symbol("macroexpand-1"), [Symbol("quote"), someCodeSnippet]])
>
> While the first looks less convoluted the second one would be much more pleasant to work with.

Doesn't this lead to the need to implement a Clojure reader and printer that just uses some binary protocol instead of text? I don't see how that simplifies things at any level.

What is the escape hell you need to deal with? Assuming `someCodeSnippet` is defined by the user (e.g. by selecting a form or typing one in), what do you need to do beyond escape quotes?

- Chas

Chas Emerick

Nov 22, 2011, 9:14:11 AM
to cloju...@googlegroups.com

You're right that a simple swap! doesn't ensure serial handling of messages, but it's not at all clear that such serial handling should be the default. (It certainly is not in nREPL currently.) Thus my attempt to disentangle the notion of REPL sessions (retention of dynamic scope across messages and connections) and the semantics of message ordering (undefined, unless explicitly placed onto an 'evaluation queue').

I certainly don't want to reinvent agents; in fact, I would likely use agents to implement those queues.

I'm confused as well, since you seem to agree that a race condition exists w.r.t. the order of message handling, but seem to disagree that this necessarily entails that we cannot make any guarantees about a session's state at any particular time. Re: the example you provide, _after all messages are processed_, we can guarantee that *warn-on-reflection* is true and *print-length* is 5, but we can't make any guarantees about session state before that point.

More concretely, here's two messages:

{:id "3" :session "A" :code (set! *print-length* 5)}
{:id "4" :session "A" :code (range 10)}

What will the printed result of message 4 be? Either "(0 1 2 3 4 5 6 7 8 9)" or "(0 1 2 3 4 ...)", depending on whether or not message 3 has been processed and the dynamic scope at the end of its evaluation set back into the atom holding A's session state. Alternatively, given a notion of evaluation queues:

{:id "5" :session "A" :queue "X" :code (set! *print-length* 5)}
{:id "6" :session "A" :queue "X" :code (range 10)}

…would guarantee that messages are evaluated and their effects on dynamic state are recognized and captured in the order that messages are received. Thus, the printed result of message 6 would always be "(0 1 2 3 4 ...)".

Does that make more sense?

- Chas

Meikel Brandmeyer

Nov 22, 2011, 1:18:49 PM
to cloju...@googlegroups.com
Hi,

On 22.11.2011 at 15:14, Chas Emerick wrote:

> I'm confused as well, since you seem to agree that a race condition exists w.r.t. the order of message handling, but seem to disagree that this necessarily entails that we cannot therefore make any guarantees about the state of a session's state at any particular time. Re: the example you provide, _after all messages are processed_, we can guarantee that *warn-on-reflection* is true and *print-length* is 5, but we can't make any guarantees about session state before that point.

How do you make this guarantee? My point is that you can't without serializing the access to the repl state.

> Does that make more sense?

I understand the race conditions. What you say makes perfect sense. However, it's still my understanding that the queue is *not* optional. And by using agents you basically get that for free.

Meikel

Meikel Brandmeyer

Nov 22, 2011, 2:16:55 PM
to cloju...@googlegroups.com
Hi,

On 22.11.2011 at 14:54, Chas Emerick wrote:

> A slot whose value is a list of jpeg images is not a problem in e.g. bencode (or in the netstring-only approach I originally included in the strawman as the only detailed option). It's just that the data / media type for data in that slot isn't co-located, and must be matched up by the client.

Then you have to traverse the data structure and line up the data conversion accordingly. To arbitrary depth.

I'm not afraid to take tnetstrings or bencode as a basis and extend things where necessary. Isn't that always the argument of s-expression proponents? “It has sets and stuff. JSON is so limited.” I prefer a protocol which supports jpeg data directly, that is, one where the peer on the other side may decide what to do immediately upon encountering jpeg data, instead of parsing a complex data structure, parsing a similarly complex type-information structure, and then traversing the whole thing to convert the data.

But maybe this is just overengineering.

Meikel

Chas Emerick

Nov 22, 2011, 3:12:04 PM
to cloju...@googlegroups.com
On Nov 22, 2011, at 1:18 PM, Meikel Brandmeyer wrote:

>> A slot whose value is a list of jpeg images is not a problem in e.g. bencode (or in the netstring-only approach I originally included in the strawman as the only detailed option). It's just that the data / media type for data in that slot isn't co-located, and must be matched up by the client.
>
> Then you have to traverse the data structure and line up the data conversion accordingly. To arbitrary depth.

Two things:

1. I wouldn't expect there to be any "data conversion". AFAICT, we're only talking about strings, and binary data that has some type (think MIME types, though whether we actually want to buy into that is a whole separate conversation). So, regardless of wire protocol, the tags / media type indicators are nothing more than hints to the client as to the nature of the data in slot X.

2. I don't think messages of arbitrary complexity are needed or desirable, if only because that poses serious issues in e.g. building a "Transport" protocol that can gracefully accommodate sockets, HTTP, JMX, et al. without significant mismatch. I added a note to this effect at the end of the bencode section earlier.
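For what it's worth, the kind of minimal transport surface I have in mind is on the order of the following (purely a sketch, nothing settled, names illustrative):

(defprotocol Transport
  (send-msg [this msg] "Send one flat map of message slots to the peer.")
  (recv-msg [this]     "Block until the next flat map of slots arrives, or return nil on close."))

Keeping messages to flat maps of simple slots is what would let implementations over sockets, HTTP, or JMX stay comparably simple.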

>> I'm confused as well, since you seem to agree that a race condition exists w.r.t. the order of message handling, but seem to disagree that this necessarily entails that we cannot therefore make any guarantees about the state of a session's state at any particular time. Re: the example you provide, _after all messages are processed_, we can guarantee that *warn-on-reflection* is true and *print-length* is 5, but we can't make any guarantees about session state before that point.
>
> How do you make this guarantee? My point is that you can't without serializing the access to the repl state.

If there's two messages, one set!'s *print-length* to 5, and both messages have been processed, then the relevant session state will reflect that. Maybe a third message's effect of set!'ing *print-length* to 10 will be swapped in the next millisecond, but that doesn't mean that the change to 5 doesn't occur.

>> Does that make more sense?
>
> I understand the race conditions. What you say makes perfect sense. However, it's still my understanding that the queue is *not* optional. And by using agents you basically get that for free.

Note that nREPL's current implementation has no notion of evaluation queues. All messages are processed as they arrive, but their respective code's runtime can cause them to finish in a different order, thus having undefined order of effect upon the "session" (which is connection-bound as you know).

My questions are:

1. Why should using a queue be mandatory?

2. If a queue is mandatory, should it *be* the session as I originally described it in the strawman proposal (therefore making it impossible to evaluate some code with the "same" dynamic state as a currently-executing evaluation), and why?

3. How well (or not) do queue semantics map onto other transports and REPL environment contexts that we know of as likely endpoints?

Thanks,

- Chas

Meikel Brandmeyer

Nov 22, 2011, 4:39:23 PM
to cloju...@googlegroups.com
Hi,

On 22.11.2011 at 21:12, Chas Emerick wrote:

> If there's two messages, one set!'s *print-length* to 5, and both messages have been processed, then the relevant session state will reflect that. Maybe a third message's effect of set!'ing *print-length* to 10 will be swapped in the next millisecond, but that doesn't mean that the change to 5 doesn't occur.

How so?

We start with the state {:print-length 10 :warn-on-reflection false}. Two messages arrive. Their futures take the repl state from the atom. One completes and writes {:print-length 5 :warn-on-reflection false} back to the atom. The other completes and writes {:print-length 10 :warn-on-reflection true} to the atom. BOOM. Now either the print-length change is dropped. Or the warn-on-reflection is dropped. But there is no easy way to obtain {:print-length 5 :warn-on-reflection true}. You'd have to do some kind of three-way merge. How do you handle this without some form of synchronization between the messages?

A different example?

{:*1 nil :*2 nil :*3 nil}
{:code (foo)} arrives and is kicked off. foo counts the atoms in the universe (read: takes ages to complete).
{:code (bar)} arrives and is kicked off.
(bar) completes => {:*1 42 :*2 nil :*3 nil}
(foo) completes => {:*1 124742671847564936218634565 :*2 nil :*3 nil}

How do you deduce that the result should now be {:*1 124742671847564936218634565 :*2 42 :*3 nil}?
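The same problem, reduced to a few lines (hypothetical code, whole-state write-back via reset!):

(def repl-state (atom {:print-length 10 :warn-on-reflection false}))

(defn eval-msg! [f]
  (let [snapshot @repl-state]          ; both messages take the same snapshot
    (reset! repl-state (f snapshot)))) ; ...and write the whole state back

(future (eval-msg! #(assoc % :print-length 5)))
(future (eval-msg! #(assoc % :warn-on-reflection true)))
;; depending on timing, one of the two changes is silently overwritten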

> My questions are:
>
> 1. Why should using a queue be mandatory?

Maybe a queue is not mandatory. But then you have to solve the above problem in some other way.

> 2. If a queue is mandatory, should it *be* the session as I originally described it in the strawman proposal (therefore making it impossible to evaluate some code with the "same" dynamic state as a currently-executing evaluation), and why?

A given repl session can have only one execution running at a time. A repl session is an identity which has a state. If you want to execute some other code in the same dynamic state as the already running (and “blocked”) session, you have to clone that state into a new repl session, which is a new identity with its own state.

In general repl interactions have side effects, so you can't simply replay when you notice that the state has changed in the meantime. The only Clojure reference type which handles side effects is the agent. And with the agent comes serialization of messages.

> 3. How well (or not) do queue semantics map onto other transports and REPL environment contexts that we know of as likely endpoints?

I don't know. I have no experience with message queues, UDP, HTTP or other stuff.

Meikel

Justin Balthrop

Nov 22, 2011, 5:32:23 PM
to cloju...@googlegroups.com

I agree with Meikel. Agents are the simplest way to solve this problem, and most Clojurians already have some understanding of how they work. Inventing something new to solve it would be adding unnecessary complexity, IMO.

As for how to allow two asynchronous commands to use the same session context, fork/clone is a perfectly acceptable solution:

{:id "3" :session "B" :clone "A" :code (set! *print-length* 5)}

We could also allow for one-off clones that do not keep the cloned session around with something like this:

{:id "3" :session "B" :clone "A" :close true :code (set! *print-length* 5)}

Or we could even allow unnamed temporary sessions and automatically close them:

{:id "3" :clone "A" :code (set! *print-length* 5)}

-Justin

Meikel Brandmeyer

Nov 22, 2011, 5:47:15 PM
to cloju...@googlegroups.com
Hi,

On 22.11.2011 at 23:32, Justin Balthrop wrote:

> I agree with Meikel. Agents are the simplest way to solve this problem, and most Clojurians already have some understanding of how they work. Inventing something new to solve it would be adding unnecessary complexity, IMO.
>
> As for how to allow two asynchronous commands to use the same session context, fork/clone is a perfectly acceptable solution:
>
> {:id "3" :session "B" :clone "A" :code (set! *print-length* 5)}
>
> We could also allow for one-off clones that do not keep the cloned session around with something like this:
>
> {:id "3" :session "B" :clone "A" :close true :code (set! *print-length* 5)}
>
> Or we could even allow unnamed temporary sessions and automatically close them:
>
> {:id "3" :clone "A" :code (set! *print-length* 5)}

Just for the record, this is how VimClojure works.

1. You request the start of a repl session. The server returns an id.
2. You execute some code in the repl by providing the id.
3. Repeat 2. as necessary.
4. You stop the repl by providing the id and using a special command.

Should a second connection try to connect to the same repl during step 2, it is rejected. I use an atom to store the repl state between connections.

By not providing a repl id upon connection, you get a one-shot repl, just to eval a defn or look up some completion. This repl session is released automatically after closing the connection.

Meikel

Kevin Downey

Nov 22, 2011, 6:48:56 PM
to cloju...@googlegroups.com
On Tue, Nov 22, 2011 at 2:36 AM, Chas Emerick <ceme...@snowtide.com> wrote:
> There's nothing like a comparison grid in the wiki, but the wiki page text does describe relative pros and cons.
>
> In summary: The failings of the current protocol are described in detail in the initial Problems section. The custom netstrings-based protocol resolves all of those issues, but invents its own treatment for expressing message values that are e.g. a list/vector of values.  bencode formalizes this; if we used it, any bencode implementation (of which there are many for all sorts of languages, a benefit all its own) would be able to read and write nREPL messages without modification.
>
> The tagged netstring notion is new to me, so I am less certain of its sits vs. bencode.  But, its promise would be that we could extend it to support inline identification of encodings / media types.  It is a very new spec though (i.e. very few implementations relative to bencode), and I worry about some aspects of it (incl. type indicators being at the *end* of values, which seems like it would be a complicating factor).
>
> Hope that helps,
>
> - Chas

two problems with the wire protocol in the wiki are:

* the current nREPL protocol is unnecessarily line-based, making
implementations challenging in contexts wherein processing lines of
text is not a sort of well-supported abstraction

* local APIs sometimes make working with fixed-length or
defined-length messages easier and/or more efficient

netstrings and bencoding both seem to be aimed at the former; the
latter is not addressed.

there is an implication that they do address the latter:
"the addition of the prefixed cumulative message length (making the
entire message a netstring) as discussed earlier so as to benefit
those that need to allocate read buffers efficiently. "

but netstrings are prefixed by ascii numerals, so you are back to reading an
unknown number of bytes to figure out how many more bytes you need to
read.

there is a note:
"It has been suggested that message-size prefixes be padded to a fixed
length (e.g. 0000043 instead of 43). This is in conflict with the
specification of netstrings (and bencode, discussed later). Is there
any value in adopting such a fixed-length prefix?"

it is really hard to see the benefit of netstrings or bencoding.

the ideal seems to be some subset of clojure sexprs (maybe just lists,
numbers, and strings) prefixed with a byte count.

0x00001D:(("code" "(+ 1 2)") ("id" 1))

similar to what slime does.

parsing sexps is easy, you don't have line issues, don't have ugly
letter delimiters, you don't need to prefix every element with a byte
count. you can parse this for free in any lisp.
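for example, clojure reads the payload of the message above directly:

(read-string "((\"code\" \"(+ 1 2)\") (\"id\" 1))")
;; => (("code" "(+ 1 2)") ("id" 1))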

ensime for scala communicates via sexps with slime no problem even
though scala is not a lisp.

nrepl seems to be bending over backwards trying to avoid using sexps. why?

using a repl protocol to shovel around binary data seems like a bad
idea. if you want to shovel around binary data use the repl protocol
to negotiate a more suitable transport.

Hugo Duncan

Nov 22, 2011, 7:12:16 PM
to cloju...@googlegroups.com
Kevin Downey <red...@gmail.com> writes:

> using a repl protocol to shovel around binary data seems like a bad
> idea. if you want to shovel around binary data use the repl protocol
> to negotiate a more suitable transport.

Maybe something on the lines of presentations could be of use:
http://common-lisp.net/project/slime/doc/html/Presentations.html

Chas Emerick

Nov 22, 2011, 10:16:25 PM
to cloju...@googlegroups.com
The previous-value example is better for illustrating the dynamics involved. I'd not put a name to it, but the reconciliation would be a three-way merge — and you're right that it would not be sufficient.

Thank you for your input and your patience. :-)

- Chas

Chas Emerick

Nov 22, 2011, 10:48:20 PM
to cloju...@googlegroups.com

None of the options in the current strawman use line delimiters of any kind; the linebreaks on the wiki page are there for clarity for human readers only.

sexps are convenient for lisps, but a key objective is to make it as straightforward as possible for non-Clojure and non-lisp clients to be written. Not all Clojure tools are written using a lisp, and I'd rather not put up barriers for non-Clojure/non-lisp tools that might benefit from interacting or supporting interaction with Clojure REPLs. Having existing wire protocol implementations of e.g. bencode for a dozen different languages is easier than implicitly asking every potential client implementor to write a reader, etc.

In relative terms, elegance (or whatever the counter to "ugly" is) isn't an objective IMO.

Re: prefixed byte counts, what drives the desire for that? My understanding from your prior comments (earlier this year in the thread linked at the top of the wiki page) was that it was driven by efficiency concerns. Given that bencode does not do this, yet is the basis of bittorrent, that doesn't seem like something we need to worry about.

> nrepl seems to be bending over backwards trying to avoid using sexps. why?
>
> using a repl protocol to shovel around binary data seems like a bad
> idea. if you want to shovel around binary data use the repl protocol
> to negotiate a more suitable transport.

Assuming we do need to shovel around binary data, then we need a wire protocol capable of doing so efficiently; base64 coding of data (even using the most efficient implementation available) can be painfully slow.

"Negotiating more suitable transport" gets _very_ complicated if the REPL server is anywhere other than localhost. Port privileges, firewall policy, authentication, and another (set of) implementation hurdles for REPL clients are not minor issues.

- Chas

Meikel Brandmeyer

Nov 23, 2011, 1:43:19 AM
to cloju...@googlegroups.com
Hi,


On Wednesday, November 23, 2011 04:48:20 UTC+1, Chas Emerick wrote:

Re: prefixed byte counts, what drives the desire for that?  My understanding from your prior comments (earlier this year in the thread linked at the top of the wiki page) was that it was driven by efficiency concerns.  Given that bencode does not do this, yet is the basis of bittorrent, that doesn't seem like something we need to worry about.

Prefixed byte counts are a big relief for languages like C, where you have to manually allocate any buffers. There you can simply say "give me a buffer of size x" and then shove data into it. Without a byte count, you have to find a unique terminator (which most likely implies some sort of escaping). And you have to watch your buffer, reallocate it, copy over the data, etc. Much of this is already hidden in libraries, but nevertheless there's always a chance to make a mistake while juggling the different buffers.
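For comparison, with a length prefix the reading side can be as trivial as something like this (a Clojure sketch over a DataInputStream, assuming well-formed input):

(defn read-netstring [^java.io.DataInputStream in]
  (let [len (loop [n 0]
              (let [c (.read in)]
                (if (= c (int \:))
                  n
                  (recur (+ (* 10 n) (- c (int \0)))))))  ; ascii digits up to ':'
        buf (byte-array len)]
    (.readFully in buf)   ; exact-size buffer: no guessing, no reallocation
    (.read in)            ; consume the trailing ","
    (String. buf "UTF-8")))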

In bencode this happens implicitly. When you encounter a "d" you create a map and simply keep adding stuff to it. So the map is basically the buffer. Here the byte count is not required.

The counterexample in bencode is the integers. They don't have a byte count, and you have to invent a new reading strategy compared to strings or collections. Bleh.

A short side note on why this protocol stuff is interesting: I rewrote the Factor client from old-nREPL to netstrings in about 15 minutes. I doubt that I could write a lisp reader in Factor in 15 minutes.

Sincerely
Meikel

Kevin Downey

Nov 23, 2011, 1:52:07 AM
to cloju...@googlegroups.com

http://jamesnvc.blogspot.com/2008/05/factor-lisp-it-doesnt-get-much-better.html

suggests that factor comes with some nice parsing tools. what makes
you think parsing a limited subset of sexps (lists, strings, maybe
integers) would be difficult?


Meikel Brandmeyer

Nov 23, 2011, 1:59:52 AM
to cloju...@googlegroups.com
Hi,


On Wednesday, November 23, 2011 07:52:07 UTC+1, Kevin Downey wrote:

what makes
you think parsing a limited subset of sexps (lists, strings, maybe
integers) would be difficult?

Probably the fact that I'm completely new to Factor and don't know all the ins and outs of its ecosystem. Nevertheless I was able to rewrite a program in 15 minutes without some sufficiently smart library. That speaks for the protocol.

Meikel
 

Kevin Downey

Nov 23, 2011, 2:02:46 AM
to cloju...@googlegroups.com

does it speak for the protocol? is writing up new clients the case we
want to optimize for?

http://www.bluishcoder.co.nz/sexp.factor


Kevin Downey

Nov 23, 2011, 2:11:34 AM
to cloju...@googlegroups.com
On Tue, Nov 22, 2011 at 11:02 PM, Kevin Downey <red...@gmail.com> wrote:
> On Tue, Nov 22, 2011 at 10:59 PM, Meikel Brandmeyer <m...@kotka.de> wrote:
>> Hi,
>>
>> On Wednesday, November 23, 2011 07:52:07 UTC+1, Kevin Downey wrote:
>>>
>>> what makes
>>> you think parsing a limited subset of sexps (lists, strings, maybe
>>> integers) would be difficult?
>>
>> Probably the fact the I'm completely new factor and don't know all
>> ins-and-outs of its ecosystem. Nevertheless I was able to rewrite a program
>> in 15 minutes without some sufficiently smart library. That speaks for the
>> protocol.
>
> does it speak for the protocol? is writing up new clients the case we
> want to optimize for?
>
> http://www.bluishcoder.co.nz/sexp.factor

what I mean is, sure you can code up a client in 15 minutes now, great
so now you have a client. when are you going to write a client again?
what is having to spend 30 minutes (a very padded, overly conservative
estimate) writing a client for a wire protocol with a lifetime that
should be measured in years?

do we really want to pick a protocol because it happens to be easy for
people who don't know language X to write a parser in language X for
it?

Chas Emerick

Nov 23, 2011, 7:20:32 AM
to cloju...@googlegroups.com

On Nov 23, 2011, at 2:11 AM, Kevin Downey wrote:

> do we really want to pick a protocol because it happens to be easy for
> people who don't know language X to write a parser in language X for
> it?

No, we want to pick a protocol because it satisfies (or, doesn't preclude satisfying) all identified objectives with a minimum of unpleasant tradeoffs.

> http://jamesnvc.blogspot.com/2008/05/factor-lisp-it-doesnt-get-much-better.html
>
> suggests that factor comes with some nice parsing tools. what makes
> you think parsing a limited subset of sexps (lists, strings, maybe
> integers) would be difficult?


None of this stuff is technically difficult. But we shouldn't use sexps just because they warm the cockles of our hearts; i.e. "uses parens" isn't a characteristic that solves any problems for anyone.

- Chas

Justin Balthrop

Nov 23, 2011, 3:40:23 PM
to cloju...@googlegroups.com
I have to agree with Kevin that writing a parser for the limited set of s-expressions he proposes would be pretty simple in any language. But interestingly, parsing internal strings is more difficult for s-expressions than in bencode, because they are just delimited by an unescaped terminating ". Of course you can allocate a single buffer for the entire message, but bencode allows you to allocate smaller buffers.

Regardless, the main downside I see with s-expressions is the lack of support for binary data. And it seems binary data support has become one of the requirements of the protocol. We could consider extending s-expressions to support inline binary data, as in [1], but then existing s-expression readers would be useless.

1. http://sexpr.sourceforge.net; the small, fast s-expression library. The actual format used for binary data is only described in the source:
http://propirate.net/oracle/nle/repo/sexpr_1.0.0/src/sexp.h

