Maybe we need to link proposed ideas to problems, to ensure we have solutions to everything.
I think the layout is great, but wanted to make sure we have viable solutions to each problem.
--
You received this message because you are subscribed to the Google Groups "sbt-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sbt-dev+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/sbt-dev/CAA3p8zCqYC2KnqkNrubKJ8eedYXa_fc20jb0ijgnMDhpAJy%3DzQ%40mail.gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.
On Wed, 9 Oct 2013 14:37:26 -0400
Josh Suereth <joshua....@gmail.com> wrote:
> One question on the locking suggestion:
>
> Do clients need to lock when attempting to read the current port? Should
> they just always read in the active.properties file and look for a port or
> issue a request to the launcher to go launch the server? I assume that's
> what is implied, but wanted to clarify.

I didn't think through it fully, mainly just to give you guys an idea of what you can expect from the facility. However, I'd say treat it as a synchronized {} block where you read/write the file (active.properties?) knowing you have exclusive access (assuming everyone else uses the locking facility, of course). So, yes: always lock when reading/writing active.properties or whatever file contains the coordination information. In my pseudo-code, firing up the server was also done under the lock. This avoids having another client read a port for a server that isn't ready yet, or things like that.

> We still need to have clients *and* server poll the active.properties
> randomly (or upon network failure?) to ensure we only have one instance.
> I don't think this changes that too much....

I don't see why. Can you explain?

Havoc
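The "always lock around the coordination file" discipline described above could be sketched like this, using java.nio file locks. This is a hedged illustration, not any of the proposals: the file name and the "port" key are assumptions made up for this example.

```java
// Sketch of reading active.properties under an exclusive file lock, so no
// other process rewrites the file (or spawns a server) mid-read. The file
// name and "port" key are invented for illustration.
import java.io.RandomAccessFile;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.util.Properties;

public class ActivePropertiesLock {
    public static int readPortLocked(String path) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) { // blocks until exclusive access
            Properties props = new Properties();
            props.load(Channels.newInputStream(channel));
            return Integer.parseInt(props.getProperty("port", "-1"));
        } // lock released when the channel closes
    }
}
```

Writing the file, and starting the server, would happen inside the same locked region, matching the "synchronized {} block" analogy.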
On Tue, 15 Oct 2013 13:13:32 -0400
Josh Suereth <joshua....@gmail.com> wrote:
> So, I created a new (and not as fleshed out as hoped) proposal for
> leveraging the sbt launcher to do server startup:
> https://github.com/sbt/sbt/wiki/Client-server-discovery-lifecycle#proposal-3-a-combination-of-the-above
>
> The idea here is that clients who need to start a server just fork the
> sbt-launcher (detached) and it'll notify them where the current server is
> (or become the current server). Also, we should allow servers to open
> multiple ports and notify clients which ones are in use via
> active.properties.
>
> In any case, I normally think through things via discussion, so I'm posting
> here to see if anyone can help think through the issues and whether or not
> it's a good idea.

I agree with the listed assumptions, except that I don't know why "network failure" is included. This should all be over the local loopback. That might technically be "network", but it isn't what you think of when you hear "network failure".

Some questions:
1. Why does the client care about open ports? It should just be given an exclusive port to talk to the server on.
2. Who consumes ServerMain, and who implements it?
3. Is it necessary to have a shutdown service? It will be necessary to handle a bad shutdown anyway; does a shutdown service gain anything?
On Tue, Oct 15, 2013 at 1:13 PM, Josh Suereth <joshua....@gmail.com> wrote:
> So, I created a new (and not as fleshed out as hoped) proposal for leveraging
> the sbt launcher to do server startup:
> https://github.com/sbt/sbt/wiki/Client-server-discovery-lifecycle#proposal-3-a-combination-of-the-above
>

Cool, thoughts while reading:

* I think network failure = something is broken (i.e. 127.0.0.1 is
not really "network" in the usual sense; the only possible errors, given
both processes are alive and bug-free, should be a broken OS or maybe
out-of-memory)
* in "isAlive(ServerPort[] ports)", what if one port is alive and one
isn't? What does multiple ports mean in general, anyway? That the
client has a choice?
* s/ServerPort/ServerAddress/ ? ... or: how about java.net.URI ?
* ShutdownService.shutdownServer, I think, is a handle for the server
to tell the launcher it wants to shut down; a possibly clearer name
might be `ServerLauncher.notifyShuttingDown`, or something?
* will the launcher try to kill off the server if the lock file goes
away or no longer refers to the current server? how does it kill the
server if so?
* I suspect deleting the lockfile just causes trouble; if the ping
works reliably, it shouldn't be needed, and it might hurt
* if we write out the pid in the lockfile, users could benefit from
knowing what to kill or being able to implement a "kill sbt now"
utility. In fact, the file could have helpful doc comments or whatever.
* it might be nice if sbt 0.13 and sbt.next shared a library used to
implement ServerMain, because there's going to be a lot of code for
the server spawn, http listening, and pinging, which is logically
separate from the actual requests we'd send over the protocol. Sort of
a "container" vs. the sbt-version-specific "app".
Havoc
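The pid-in-the-lockfile suggestion above could look something like this. Everything here (the file name, the property keys, the comment text) is invented for illustration; only the idea of recording the pid with a helpful doc comment comes from the thread.

```java
// Illustrative sketch of writing the server's pid (and port) into the
// lock file, so a user or a "kill sbt now" utility knows what to kill.
// File name and keys are invented for this example.
import java.io.FileWriter;
import java.io.PrintWriter;

public class LockFilePid {
    public static void writeLockInfo(String path, long pid, int port) throws Exception {
        try (PrintWriter out = new PrintWriter(new FileWriter(path))) {
            out.println("# sbt server coordination file.");
            out.println("# If the process below is dead, it is safe to delete this file.");
            out.println("pid=" + pid);
            out.println("port=" + port);
        }
    }
}
```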
2. The server listens on a single port for new client requests.
3. These requests go through the server-launcher, so there is no need to handle diverse client connection types here.
4. When a request is received, the server starts up a new, random, unique port for the client.
5. The request can indicate old/new protocol or whatever.
6. There is no need to record the client ports in app, since they are not used again by the server-launching-service, right?
However, it occurs to me that I'm thinking in terms of a local socket based communication and you might be thinking REST-type communication. Is that right?
The keep alive part is unclear to me, although see the comment about local socket v. REST.
> > 2. Who consumes ServerMain and who implements it?
> >
>
> An artifact (like org.scala-sbt:sbt) would implement. It knows how to ping
> for keep-alive, and how to bind the necessary ports it wants. I'd like to
> use this for activator as well, where we only want to start up one-instance
> per-machine, but it would have similar requirements.
>
> The consumer is the sbt-launcher, who is only responsible for ensuring that
> only one server is running at a time.
> > 3. Is it necessary to have a shutdown service? It will be necessary to
> > handle a bad shutdown- does a shutdown service gain anything?
> >
> >
> No. It's just an optimisation for the next startup. You can drop it
> entirely and the design remains unchanged.

My guess is this is a minor gain: you probably save attempting to open a socket, but that should fail quickly. My vote is to not worry about this unless it demonstrates an actual improvement in practice.

-Mark
On Tue, Oct 15, 2013 at 2:27 PM, Josh Suereth <joshua....@gmail.com> wrote:
>> * in "isAlive(ServerPort[] ports)" what if one port is alive and one
>> isn't? what does multiple ports mean in general, anyway, that the
>> client has a choice?
>
>
> I think it's up to the client to decide....
>

I guess a client could decide which address to use, but the launcher has to decide what "alive" means (all addresses work vs. any address works), right?

> Can we invent random "protocols" for java.net.URI? Or is it just URL that's
> odd?

scala> new java.net.URI("mymadeupproto://whatevs")
res0: java.net.URI = mymadeupproto://whatevs
>> * will the launcher try to kill off the server if the lock file goes
>> away or no longer refers to the current server? how does it kill the
>> server if so?
>
>
> Hmm, probably. And killing probably => System.exit (so you need shutdown
> hooks?)

Oh, I forgot that there's no need to fork off the server, so yeah, just exit. Shutdown hooks are basically always wrong, since a design that requires a clean exit is wrong.

>> * I suspect deleting the lockfile just causes trouble, if the ping
>> works reliably it shouldn't be needed - and it might hurt
>
>
> Why isn't locking the file needed? How would it hurt?

It would hurt if we accidentally deleted someone else's file, though you're right that this does mean locking would be broken.

Havoc
> > 4. When a request is received, the server starts up a new, random, unique
> > port for the client.
> >
> >
> ?

launcherService to server: I have a request for a client to use https, protocol v1.3
server: listens on a new port N
server to launcherService: I'm listening on port N
launcherService to client: server is listening on port N for you

> > 5. The request can indicate old/new protocol or whatever.
> >
> >
> True, but this just means you embed versioning in your protocol.

Why? I still don't see why multiple ports need to be listened on for unsolicited connections.

> > 6. There is no need to record the client ports in app, since they are not
> > used again by the server-launching-service, right?
> >
> >
> Right, the only ports we expose here are those we are actively listening on
> for connections. I'm not keeping UDP-style out of the mix for now, since
> we're on one machine.
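The single-port point above (TCP negotiates per-client connections from one listening port, so nothing extra needs to be listened on) can be demonstrated with plain java.net; nothing here is sbt-specific:

```java
// One TCP listening port is enough: accept() gives each connecting client
// its own socket pair, so per-client listening ports are unnecessary.
import java.net.ServerSocket;
import java.net.Socket;

public class SinglePortDemo {
    public static boolean distinctClientSockets() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // OS picks a free port
            int port = server.getLocalPort();
            // Two clients dial the same port; the server accepts two
            // independent connections, each with its own remote endpoint.
            try (Socket c1 = new Socket("127.0.0.1", port);
                 Socket c2 = new Socket("127.0.0.1", port);
                 Socket s1 = server.accept();
                 Socket s2 = server.accept()) {
                return s1.getPort() != s2.getPort();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(distinctClientSockets());
    }
}
```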
On Tue, Oct 15, 2013 at 2:42 PM, Mark Harrah <dmha...@gmail.com> wrote:
>
> launcherService to server: I have a request for a client to use https, protocol v1.3
> server: listens on a new port N
> server to launcherService: I'm listening on port N
> launcherService to client: server is listening on port N for you
>

I guess what we're specifying in this scenario isn't *really* a protocol in the http sense, but the entire "client-server contract." For example, v1.3 might mean that the http server has paths /ping and /socket that do such-and-such.
I could also imagine that a "client-server contract" might imply
multiple ports or protocols or even some kind of usage of the
filesystem. It's the totality of expectations that client and server
have for one another.
There are three places I can think of for feature/contract negotiation:
- selecting the server jar to run (used for 0.13 vs. .next perhaps);
this is determined by the project's build properties presumably (?)
- selecting the client-server contract used to talk to that server
(as in above, e.g. http with such-and-such paths); one server process
might support multiple client-server contracts
- within a contract, negotiating protocol details: for example after
the WebSocket is live, messages could be exchanged to configure what
we'll send over it. We could also do things here like ask "what's your
sbt version?" or "do you have the run task?" or whatever.
It could be like this (translate to Java as needed):

case class ContractId(name: String, version: Int) // better name?

public interface ServerMain {
  public java.net.URI start(AppConfiguration configuration, ContractId contract);
  public boolean isAlive(java.net.URI uri);
}
(Or can the ContractId thing be part of AppConfiguration?)
It's perhaps mixing issues to perform client-server contract selection
via URI selection. That is, a URI represents a concrete instance
that's running that we can connect to. One server may have N URIs. But
a contract represents a conceptual kind of thing we might want to
connect to, which would have a URI only when concretely instantiated.
A client would want to pick the client-server contract it has in mind,
but it doesn't give a crap about the URI other than it needs to obtain
one. URIs can be created on demand (potentially even per-client),
while available contracts are a fixed property of a particular server
implementation.
I'm not sure. I think a lot of this thread boils down to terminology
confusion and such.
Mark, maybe one different implication of what you're proposing above
would be to move the "server discovery" file and aliveness checking
inside the server rather than outside it. So I *think* you might be
saying let's just have:
public interface ServerMain {
  public java.net.URI getOrStart(AppConfiguration configuration, ContractId contract);
}
with the look up existing server file / isAlive handling *inside*
getOrStart ? The advantage of this is that a server implementation
could support multiple contracts but only start up URIs associated
with those contracts on demand when they are actually wanted? The
server would also have the option to generate a fresh URI each time.
Another question, is the "server singleton-ification mechanism" (the
lock file, basically) part of the client-server contract, or is it a
separately-identified "protocol"? When do we want servers to be able
to coexist vs. exclude each other? I would think that all servers
which are basically sbt build servers, and not some other kind of
thing, should mutually exclude each other.
<snip/>
> > >
> > > Maybe you should just code the simplest thing that could work and we
> > > can go from there... ;-)
> >
> > Ok, I wrote my version, which accomplishes:
> >
> > client: I want the application defined by boot.properties to run in cwd
> > and use version X of protocol N
> > server-service: the application is running and listening on port P
> >
> >
> > https://github.com/sbt/sbt/wiki/Client-server-discovery-lifecycle#sample-code
> >
> >
>
> Ok, so some notes here:
>
> (1) The "locking"/ensuring that we only have one server at a time is done
> via a separate JAR/separate process. The server process itself will never
> touch the lock file.

Yes.

> (2) We spin up servers via the regular sbt launching mechanism.

Yes. You could just as well write to the stdin of the launched server and it could write a line with the port back out. It is a protocol to the extent that Map[String,String] => Int is a protocol.
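The stdin/stdout handshake suggested here could be sketched as follows. The request string ("contract=v1") and the class/method names are invented; only the shape (write a small request to the child's stdin, read the port back from its stdout) comes from the thread.

```java
// Sketch of a launcher-to-server handshake over the forked process's
// stdin/stdout: send a request line, read the chosen port back.
// The "contract=v1" request format is invented for this example.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;

public class StdioHandshake {
    public static int launchAndReadPort(ProcessBuilder builder) throws Exception {
        Process server = builder.start();
        try (PrintWriter toServer = new PrintWriter(server.getOutputStream(), true);
             BufferedReader fromServer =
                     new BufferedReader(new InputStreamReader(server.getInputStream()))) {
            toServer.println("contract=v1");                  // the Map[String,String] side
            return Integer.parseInt(fromServer.readLine().trim()); // the Int side
        } finally {
            server.waitFor();
        }
    }
}
```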
> (3) There is some base-level protocol from the server-service to any
> launched server (this is the sticking point to me).
In order to determine if the server is running, you have to know something about the protocol. This is why your proposal has the isAlive method. This means the launcher has to update/retrieve/load the server classes in a new process every time a client wants to connect, so it can ask isAlive. If you are ok with an additional ~1 s startup time for each connecting client, that might be fine. I don't think it is fine for the command line client, which is already too slow to start up.
I don't think it makes sense to make the protocol flexible in your proposal: specify HTTP and some path /ping for isAlive as Havoc suggested and there is no need for the full launcher process.
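The fixed HTTP liveness check being argued for could be as small as this. Only the /ping path comes from the thread; the timeouts and everything else are illustrative assumptions.

```java
// Sketch of a fixed HTTP "/ping" liveness check: a plain GET with a short
// timeout, requiring no server classes (and no launcher process) on the
// client side. Only the /ping path is from the discussion.
import java.net.HttpURLConnection;
import java.net.URL;

public class PingCheck {
    public static boolean isAlive(int port) {
        try {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://127.0.0.1:" + port + "/ping").openConnection();
            conn.setConnectTimeout(500); // fail fast when nothing is listening
            conn.setReadTimeout(500);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false; // connection refused => no live server
        }
    }
}
```

A connection-refused error returns immediately, which is the "that should fail quickly" property mentioned earlier in the thread.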
> (4) You're creating a new port for each client. We don't really need that
> unless we're on old UDP connections. TCP/HTTP negotiate new ports
> automatically from a listening port. So you can do the above with just a
> single open port.

You could use a single port if you are ok moving negotiation into the client/server protocol. You still have to specify a base-level protocol for negotiating the real protocol+version, or you specify that everyone uses the same REST API and does .../v1/... Either way, that negotiation is something that becomes a public API (between server+client) and can't change if you want things to work across versions (just restating what Havoc has already said, I guess).
> So, what I think I may agree on (and actually can make things simpler):
>
> 1. The "server locator" process needs to launch the server in a fork, and
> should die after it reports the open port.
> 2. It seems, perhaps, that the server should not touch the lock file
> directly.

Yes.
> Combined with Havoc's API
>
> trait ServerMain {
>   def start(config: AppConfiguration): java.net.URI
> }

Why is a URI necessary? It seems to me that the only unknown is the port. The host is always local and the protocol is fixed for a particular server.
> We can have the launcher automatically dump STDERR/STDOUT into log files,
> and specifically write this URI out to the "Server locator".

Log files are a possibility, although you lose the relative ordering of the output. If that could be important, the server should handle it. I assume you want to do it in the launcher to capture any output from the launcher itself?

> I'll update my proposal accordingly, to reflect us converging on a design.

-Mark
On Wed, 16 Oct 2013 12:45:21 -0400
Josh Suereth <joshua....@gmail.com> wrote:
<snip/>
>
> The "properties file + cwd" as the only things you have when connecting,
> and how you don't want to read the boot configuration except when starting
> a server. I'm not sure why that's a restriction we have in place. Is
> startup speed the concern? It seems rather inhibiting. Especially if I
> want to have different boot.properties refer to the same server.
>
> E.g. We're planning to use this mechanism to launch activator as well, and
> ensure only one instance is running of the UI. Then when you launch in a
> given directory, we can redirect you to the same server, but opening the
> app you wish. The idea there is the "lock" file specification would be
> something like ~/.activator/active.properties, rather than in the cwd.
>
> This is why I see propertiesFile + cwd as too restrictive in identifying a
> server. You really want server.module + server.artifact (and maybe
> server.mainClass) as a hash denoting what type of server.... *or* you want
> to pass in where the lock file should be generated (vs. where the CWD is)
> when starting a server....
I see your point about the lock file. I have no problem passing where the lock file should be instead of it being computed.
I also see your point about boot.properties, but I think there is more to it. For the boot.properties file, you can't just look at module+artifact+mainClass. You have to check that all of the other settings are the same. If they aren't, what do you do other than say "incompatible boot.properties files"? If you ignore the difference, the user wonders why their repositories aren't being respected. (Starting up separate servers wouldn't make sense either, but I don't think anyone is proposing that.)

But how do you get activator+IDE+command line to use the same boot.properties? I think that is the real problem, whether or not the File is exactly the same.
On Wed, 16 Oct 2013 11:52:27 -0400
Josh Suereth <joshua....@gmail.com> wrote:
<snip/>
> > In order to determine if the server is running, you have to know something
> > about the protocol. This is why your proposal has the isAlive method.
> > This means the launcher has to update/retrieve/load the server classes in
> > a new process every time a client wants to connect so it can ask isAlive.
> > If you are ok with an additional ~1 s startup time for each connecting
> > client, that might be fine. I don't think it is fine for the command line
> > client, which is already too slow to start up.
> >
> > I don't think it makes sense to make the protocol flexible in your
> > proposal: specify HTTP and some path /ping for isAlive as Havoc suggested
> > and there is no need for the full launcher process.
> >
> >
> I actually think it does. Perhaps we can agree to disagree here. IF we
> have to consolidate, HTTP seems the right thing to consolidate on, with
> known resources.
This is related to the discussion of the bootstrap protocol/port. If you make the bootstrap flexible, you need to negotiate it- this opens up the negotiation issues again.
> When it comes to startup time, I agree that it's not ideal to reload the
> classes. However, I don't see how this would slow down startup any further
> than it already is.

Not sure what you mean here. The launcher+application just for isAlive is going to be ~1 s, and that's not really going to go down easily. The code to ping/fork is specialized and is a lot easier to optimize if necessary.
> IF we lock down to an HTTP protocol with known ping
> resource, that's fine. Otherwise, I think the hit is worth it. That way
> the service can make more intelligent estimates as to whether or not it is
> down.

(I wouldn't propose dropping isAlive in favor of flakiness.)
> You'd also want to avoid having too many classes loaded in the "isAlive"
> method. Not ideal, but I personally like it better than the alternative.
> I'd be willing to drop it, if it's a sticking point.

Class loading guarantees from the JVM are only about correctness, not performance, so I don't really want to be worrying about whether my entry point, which is in the same codebase as the rest of my application, will load those other classes when isAlive is called.