Hi Greg,
On 09.11.2017 at 22:13, Gregory M. Kurtzer wrote:
> Hi Oliver,
>
> On Thu, Nov 9, 2017 at 10:43 AM, 'Oliver Freyermuth' via singularity <singu...@lbl.gov> wrote:
>
> To add to this: another possibility you may want to consider, especially since you mention a "cluster", is using something like HTCondor.
>
> With HTCondor's current Singularity support, some bind-mount hacks, and an sshd installed inside the container, this is already possible now (if Singularity is running setuid root).
>
>
> While it is possible (in theory), we have to be very much aware of how we use our SetUID abilities, and we purposefully drop all capabilities when we exec to a process within the container. The only way someone could actually run an ssh daemon inside the container is to run the container as root.
We are already letting HTCondor run an sshd as user inside our Singularity containers.
So our users can run
condor_submit -interactive
on a desktop machine, and end up inside a Singularity container on a worker node.
This naturally means privilege separation is disabled, and sshd can of course only let in that single user - but that is exactly how it's meant to be (in the HTCondor world), and it matches how HTCondor operates when running bare metal.
The main reason we still need SetUID capabilities is the permissions, owners, and groups on the devpts filesystem inside the container, which sshd is very picky about. In a user namespace, there's no easy fix (short of patching sshd).
HTCondor takes care to set up such a temporary sshd with one-time credentials for each user who wants to attach to a running job they own. Sadly, it currently does that in a *separate* container from the original job, which is a bit useless,
but it at least makes interactive jobs possible at all.
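For context, the worker-node side of such a setup is driven by a few condor_config knobs. A rough sketch follows - the knob names are as documented in the HTCondor manual, but the image expression and bind paths here are purely illustrative:

```conf
# condor_config fragment on the execute node (sketch; paths illustrative)
SINGULARITY = /usr/bin/singularity
# Containerize a job whenever its submit file sets +SingularityImage
SINGULARITY_JOB = !isUndefined(TARGET.SingularityImage)
SINGULARITY_IMAGE_EXPR = TARGET.SingularityImage
# Host directories made visible inside the container (illustrative)
SINGULARITY_BIND_EXPR = "/home /data"
```

With a configuration along these lines, `condor_submit -interactive` lands the user's session inside the container chosen by the submit file.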
>
>
> Then you can start an interactive job, and end up in a fresh container on the remote machine (as if ssh was used).
> Behind the scenes, sshd is actually used - the nice thing is that they have implemented some magic to make this work
> even if the cluster compute nodes are behind NAT in a private network (via a connection-broker machine).
>
> For file transfer, HTCondor offers integrated techniques which are usually used for non-interactive jobs.
>
> My hope is that the HTCondor people will at some point rework interactive jobs so that the sshd can run outside the container and simply use unprivileged nsenter to enter it.
>
>
> You could do that but `nsenter` or rather the system call `setns()` implies a running container instance. With Singularity, you don't need that, and you can just use the Singularity shell idea I mentioned earlier, or the (very simple and clean) solution that Jeff mentioned. But... If you want to instead join an existing set of namespaces, Singularity supports that with instances, however I'm not sure a running instance is required.
I can't use the Singularity shell idea for our setup without doing one of two things:
- Grant the users SSH access to our worker nodes. I don't want to do that - they are in a private network behind a NAT gateway. The only access users get is via the reverse connections initiated by HTCondor.
This also ensures HTCondor is completely aware of anything user-related running on those machines.
- Rewriting the logic of HTCondor. Currently, it always (no matter whether bare metal or container) fires up a dummy job when an interactive job is requested (it just runs a "sleep 180" job).
Then the "ssh-to-job" approach is used, as if one were attaching to a "real" running job. The idea is to enter the exact same environment as the job.
For this, either nsenter or instances would work. I suggested both on the HTCondor list a while ago; sadly, there's still no reply...
But of course I see many other applications for the Singularity shell in other environments, so I welcome the idea ;-).
>
>
> Then, we could also use that with user namespaces. For Singularity, this also requires https://github.com/singularityware/singularity/pull/934 to go in.
>
>
> We can already use the user namespace, without SetUID, for this (if you are not using file-based images). But you are right, Cedric's PR there is highly advantageous.
With nsenter, there's a huge problem without Cedric's PR: when you use nsenter to attach to the container, you end up in the same mount namespace, but with / being the root of the host.
With Cedric's PR, this is resolved (thanks to its use of pivot_root).
That's why I would say that at least the nsenter approach requires this PR to be merged to be useful.
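For completeness, a small sketch of what the nsenter route looks like at the shell level. The job PID here is hypothetical, and unprivileged joining additionally requires the target process to live in a user namespace owned by the caller:

```shell
# A process's namespaces are exposed under /proc/<pid>/ns/ and are
# joined via setns(), which is what nsenter wraps.
readlink /proc/$$/ns/mnt    # prints an identifier of the current mount namespace

# Joining a running job's container (JOB_PID is hypothetical):
#   nsenter --target "$JOB_PID" --user --mount --pid --uts --ipc -- /bin/sh
# Without pivot_root in the container setup, "/" inside the joined
# mount namespace is still the host root - the problem described above.
```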
>
>
>
> If you're not into setting up a workload management system such as HTCondor, related work includes "ch-ssh", which is offered as part of Charliecloud.
> However, that only provides transparent ssh login, not file transfer.
>
>
> Yes, ch-ssh may offer similar functionality to what I was describing (with a Singularity login shell), but I think that Jeff's idea is much cleaner.
>
> Great ideas!
Thanks!
Maybe another question, since I have not tested it yet: do Singularity's instances also work when Singularity is not setuid root?
Cheers,
Oliver
>
> Greg
>
>
>
>
>
> Cheers,
> Oliver
>
> On 09.11.2017 at 17:05, Gregory M. Kurtzer wrote:
> > I have considered this too, and I prototyped it using a custom Singularity login shell program. Not sure if it worked for contained file transfers, but might be worth coming back to!
> >
> > On Thu, Nov 9, 2017 at 6:37 AM, Paul Hopkins <paul.lonnkv...@gmail.com> wrote:
> >
> > Is there a way of accessing a Singularity container transparently via ssh shell, remote ssh commands, and scp file transfer? If possible, I would like users to access a cluster and be instantly transported into the Singularity container. Steven Brandt posted some code recently, and I have some hacky code that allows remote command execution, but I am stuck on scp file transfer:
> >
> > if [ -z "$SINGULARITY_NAME" ]; then
> >     if [[ "$-" == *i* ]]; then
> >         exec singularity shell -s /bin/bash el7.img
> >     else
> >         COMMAND="$(ps -o args= $$ | cut -c8- )" # Cut "/bin/bash -c" off
> >         exec singularity exec el7.img ${COMMAND}
> >     fi
> > fi
> >
> > File transfer really only affects things that are in the container, not bound directories like /home or /data, but I wonder if anybody else has figured this out yet? Furthermore, is this even sane - are other people considering this? If so, could Singularity handle all the logic about what to do itself, or does it need some boilerplate wrapper?
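One way to cover the scp case in such a wrapper is to route on the command sshd hands over, rather than parsing ps output. A sketch, assuming the wrapper is installed as an sshd ForceCommand so that the client's command arrives in SSH_ORIGINAL_COMMAND (empty for an interactive login), with el7.img as in the snippet above:

```shell
#!/bin/sh
# Sketch: decide how to enter the container based on the command sshd
# passes along. An interactive login has no original command; scp shows
# up as a remote command such as "scp -t /some/dir".
route() {
    if [ -z "$1" ]; then
        echo "singularity shell el7.img"       # interactive login
    else
        echo "singularity exec el7.img $1"     # remote command, incl. scp
    fi
}

# A real ForceCommand wrapper would `exec` the result instead of echoing:
route ""               # -> singularity shell el7.img
route "scp -t /data"   # -> singularity exec el7.img scp -t /data
```

Since scp on the remote side is just another command (`scp -t`/`scp -f`), the exec branch handles file transfer and remote commands alike.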
> >
> > Thanks,
> >
> > Paul
> >
> >
> > --
> > Paul Hopkins
> > Computational Infrastructure Scientist
> > Cardiff University
> >
> >
> > Hopk...@cardiff.ac.uk
> > Office: +44 (0) 29 225 10043
> >
> > --
> > You received this message because you are subscribed to the Google Groups "singularity" group.
> > To unsubscribe from this group and stop receiving emails from it, send an email to singularity...@lbl.gov.
> >
> >
> >
> >
> > --
> > Gregory M. Kurtzer
> > CEO, SingularityWare, LLC.
> > Senior Architect, RStor
> >
>
>
>
>
>
> --
> Gregory M. Kurtzer
> CEO, SingularityWare, LLC.
> Senior Architect, RStor
>