You're right, Jenkins is highly tied to its remoting agent. We have workarounds for some calls (like launching a process), but generally speaking we need this agent running on the slave.
Yoann and I have developed an alternate approach based on a set of containers (aka a "pod") where one of them comes with a JVM and slave.jar and handles the remoting stuff, and another one hosts the build command. They share the network namespace and the workspace as a volume. More containers can be added to this set, for example to run a Selenium browser, without needing the xvnc hack during the build.
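To make the "pod" idea concrete, here is a rough sketch of what the plugin does under the hood with the plain docker CLI. Image names (`my-jvm-slave-image`, `my-build-image`) and the workspace path are hypothetical; this requires a running Docker daemon and is not the plugin's actual code:

```shell
# 1. Start the remoting container, which owns the network namespace
#    and the workspace volume for the whole set.
docker run -d --name remoting \
    -v /var/jenkins/workspace:/workspace \
    my-jvm-slave-image java -jar /slave.jar

# 2. Run the build container, sharing the remoting container's
#    network namespace and volumes.
docker run -d --name build \
    --net container:remoting \
    --volumes-from remoting \
    my-build-image

# 3. Optionally attach extra services to the same set, e.g. a Selenium
#    browser reachable on localhost from the build container.
docker run -d --name selenium \
    --net container:remoting \
    selenium/standalone-firefox
```

Because all containers share one network namespace, the build can talk to the Selenium browser on `localhost` with no extra wiring.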
We also use plain `docker run` stdin/stdout as the channel between master and slave. There is no need for sshd in the docker image, nor for a callback JNLP URL, which would require Jenkins to be reachable from the slave. Removing this Launcher complexity makes it trivial to run a docker container as a slave.
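In other words, the master launches something like the following, and the container's stdin/stdout become the remoting channel directly. The image name and slave.jar path are hypothetical; a Docker daemon is assumed:

```shell
# The master spawns this process; remoting then runs over the
# process's stdin/stdout, so no inbound connectivity to the slave
# (sshd) or from the slave (JNLP callback) is needed.
docker run -i --rm my-jvm-slave-image \
    java -jar /usr/share/jenkins/slave.jar
```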
We have also considered a possible optimisation: when Jenkins starts, create a docker container from a base image with a JVM, inject slave.jar (docker cp, as you suggested) as well as the Jenkins jars into the remoting cache, then commit the image. This image could then be used for all builds; it would perfectly match the Jenkins installation, and as a result remoting would start immediately without the need for class exchange. This is just an idea, not a requirement, but something we have in mind for the future.
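As a sketch, that warm-image step could look like this with the docker CLI. The image names and the jenkins_home paths are assumptions for illustration, not what the plugin would actually ship:

```shell
# Start a throwaway container from a base image that already has a JVM.
docker run -d --name seed my-jvm-base-image sleep infinity

# Inject slave.jar and the master's remoting jar cache (assumed paths).
docker cp /var/jenkins_home/war/WEB-INF/slave.jar seed:/slave.jar
docker cp /var/jenkins_home/cache/jars seed:/remoting-cache

# Commit the result as the image reused for all subsequent builds,
# then discard the seed container.
docker commit seed jenkins-prewarmed-slave
docker rm -f seed
```

Since the committed image matches this exact Jenkins installation, remoting would find all classes already cached and skip the class-exchange phase on startup.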
The One-Shot logic has been designed for this exact scenario. It was initially mixed into docker-slaves, but as it could benefit other plugins (Kubernetes, Amazon ECS, maybe Mesos as well) it made sense to extract it. It's feature complete, but the implementation details would need some polish and some new hooks into jenkins-core. It is usable today: you can wait for a future release for a "cleaner" implementation, but the API is well defined.