Access remote Singularity container transparently


Paul Hopkins

Nov 9, 2017, 9:38:14 AM
to singu...@lbl.gov
Is there a way of accessing a Singularity container transparently via ssh shell, remote ssh commands, and scp file transfer? If possible, I would like users to access a cluster and be instantly transported into the Singularity container. Steven Brandt posted some code recently, and I have some hacky code that allows remote command execution, but I am stuck on scp file transfer:

if [ -z "$SINGULARITY_NAME" ]; then
    if [[ "$-" == *i* ]]; then
        exec singularity shell -s /bin/bash el7.img
    else
        COMMAND="$(ps -o args= $$ | cut -c8-)"   # strip the leading "bash -c "
        exec singularity exec el7.img ${COMMAND} # unquoted on purpose, to word-split into arguments
    fi
fi

File transfer really only affects things that are inside the container, not bound directories like /home or /data, but I wonder if anybody else has figured this out yet. Furthermore, is this even sane - are other people considering this? If so, could Singularity handle all the logic itself, or does it need some boilerplate wrapper?
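For reference, one way the scp case could be covered is a small dispatcher used as the login shell: sshd runs the remote side of scp as `$SHELL -c 'scp -t <dir>'`, so forwarding the `-c` string into the container handles file transfer too. This is only a sketch - the function name and image path are made up, and it is untested against a real sshd:

```shell
# Hypothetical login-shell dispatcher. Interactive logins get
# "singularity shell"; remote commands and scp (whose server side sshd
# invokes as "$SHELL -c 'scp -t <dir>'") get "singularity exec".
container_login() {
    local image=$1
    shift
    if [ "$1" = "-c" ] && [ -n "$2" ]; then
        # Non-interactive: forward the whole command string, including
        # scp's server-side "scp -t <dir>", into the container.
        exec singularity exec "$image" /bin/sh -c "$2"
    else
        # Interactive login: drop straight into a shell in the container.
        exec singularity shell -s /bin/bash "$image"
    fi
}
```

A real deployment would end the script with `container_login /path/to/el7.img "$@"` and install it as the user's shell via chsh.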

Thanks,

Paul


-- 
Paul Hopkins
Computational Infrastructure Scientist
Cardiff University

Office: +44 (0) 29 225 10043

Gregory M. Kurtzer

Nov 9, 2017, 11:05:19 AM
to singu...@lbl.gov
I have considered this too, and I prototyped it using a custom Singularity login shell program. I'm not sure it handled contained file transfers, but it might be worth coming back to!

--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity+unsubscribe@lbl.gov.



--
Gregory M. Kurtzer
CEO, SingularityWare, LLC.
Senior Architect, RStor

Oliver Freyermuth

Nov 9, 2017, 1:43:47 PM
to singu...@lbl.gov
To add to this: another possibility you may consider, especially since you are talking about a "cluster", is using something like HTCondor.

With HTCondor's current Singularity support, some bind-mount hacks, and an sshd installed inside the container, this is already possible today (if Singularity is running setuid root). You can then start an interactive job and end up in a fresh container on the remote machine, just as if ssh had been used. Behind the scenes, sshd actually is used - the nice thing is that HTCondor has the machinery to make this work even when the cluster's compute nodes sit behind NAT in a private network (by using a connection-broker machine).

For file transfer, HTCondor offers integrated techniques which are usually used for non-interactive jobs.

My hope is that the HTCondor people will at some point rework interactive jobs so that sshd can run outside the container and simply use unprivileged nsenter to enter it. Then we could also use this with user namespaces. For Singularity, this also requires https://github.com/singularityware/singularity/pull/934 to go in.
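Sketched concretely (illustrative only - `$CONTAINER_PID` stands for the PID of a process already running inside the container, and whether this works unprivileged depends on user-namespace support in the kernel):

```shell
# sshd stays on the host; after authentication, the session process is
# placed into the job's existing namespaces instead of a fresh container.
nsenter --target "$CONTAINER_PID" --mount --uts --ipc --pid --user /bin/bash -l
```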

If you're not into setting up a workload management system such as HTCondor, related work includes "ch-ssh", which is offered as part of Charliecloud. However, that covers only transparent ssh login, not file transfers.

Cheers,
Oliver

On 09.11.2017 at 17:05, Gregory M. Kurtzer wrote:
> I have considered this too, and I prototyped it using a custom Singularity login shell program. Not sure if it worked for contained file transfers, but might be worth coming back to!

Jeff Kriske

Nov 9, 2017, 3:56:19 PM
to singularity
Placing something like

command="exec singularity exec /centos $SSH_ORIGINAL_COMMAND"

inside ~/.ssh/authorized_keys, right before your key, will do this. It allows ssh, scp, passing other commands, etc.
This is connecting to an Ubuntu machine:

$ ssh 10.0.0.100 cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

$ scp wallpaper.jpg 10.0.0.100:/home/jeff/
wallpaper.jpg                                                 100%   42KB  19.7MB/s   00:00
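One refinement worth considering: for a plain interactive login, `$SSH_ORIGINAL_COMMAND` is empty, so an explicit fallback to `singularity shell` makes that case unambiguous. A sketch of the full authorized_keys line - untested, with the key material elided and the inner quotes backslash-escaped as the authorized_keys format requires:

```shell
# ~/.ssh/authorized_keys -- a single line, wrapped here for readability.
# Every use of this key (interactive ssh, remote command, scp) lands in
# the /centos container.
command="if [ -n \"$SSH_ORIGINAL_COMMAND\" ]; then exec singularity exec /centos /bin/sh -c \"$SSH_ORIGINAL_COMMAND\"; else exec singularity shell /centos; fi" ssh-rsa AAAA... user@host
```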

Gregory M. Kurtzer

Nov 9, 2017, 4:13:20 PM
to singu...@lbl.gov
Hi Oliver,

On Thu, Nov 9, 2017 at 10:43 AM, 'Oliver Freyermuth' via singularity <singu...@lbl.gov> wrote:
> To add on this: Another possibility you may consider, especially if you talk about a "cluster", is using something like HTCondor.
>
> With HTCondor's current Singularity support, some bind-mount hacks, and an sshd installed inside the container, this is already possible now (if Singularity is running setuid root).

While it is possible (in theory), we have to be very mindful of how we use our SetUID abilities, and we purposefully drop all capabilities when we exec a process within the container. The only way someone could actually run an ssh daemon inside the container is to run the container as root.
 
> Then you can start an interactive job, and end up in a fresh container on the remote machine (as if ssh was used).
> Behind the scenes, sshd is actually used - the nice thing is they have magic implemented to make that work
> even if the cluster compute nodes are behind a NAT in a private network (by using a connection broker machine).
>
> For file transfer, HTCondor offers integrated techniques which are usually used for non-interactive jobs.
>
> My hope is the HTCondor people will at some point rework interactive jobs so the sshd can run outside the container and just use unprivileged nsenter to enter the container.

You could do that, but `nsenter` - or rather the underlying `setns()` system call - implies a running container instance. With Singularity you don't need that: you can just use the Singularity login-shell idea I mentioned earlier, or the (very simple and clean) solution that Jeff mentioned. But if you instead want to join an existing set of namespaces, Singularity supports that with instances - however, I'm not sure a running instance is required.
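For completeness, joining namespaces via instances looks roughly like this in 2.4 (a sketch of the instance syntax from memory; image and instance names are made up, and this is untested):

```shell
# Start a named background instance from an image, then attach further
# processes to the same set of namespaces via the instance:// URI.
singularity instance.start el7.img myjob
singularity shell instance://myjob        # a shell inside the running instance
singularity exec instance://myjob ps -ef  # another process in the same namespaces
singularity instance.stop myjob
```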
 
> Then, we could also use that with user namespaces. For Singularity, this also requires https://github.com/singularityware/singularity/pull/934 to go in.

We can use the user namespace, without SetUID, for this today (as long as you are not using file-based images). But you are right, Cedric's PR there is highly advantageous.
 

> If you're not into setting up a workload management system such as HTCondor, related work includes "ch-ssh" which is offered as part of Charliecloud.
> However, that's only for transparent ssh login, not file transfers.

Yes, ch-ssh may offer similar functionality to what I was describing (with a Singularity login shell), but I think that Jeff's idea is much cleaner.

Great ideas!

Greg


 


v

Nov 9, 2017, 4:30:24 PM
to singu...@lbl.gov
Comment from the peanut gallery...

This is the greatest idea!! It will be akin to "clusterjob" (http://clusterjob.org/documentation/book.html), which didn't quite hit the mark because it imposed very specific requirements on the executables and how they were formatted.

Can't wait to see some of these examples in action!




--
Vanessa Villamia Sochat
Stanford University '16

Oliver Freyermuth

Nov 9, 2017, 5:57:05 PM
to singu...@lbl.gov, Gregory M. Kurtzer
Hi Greg,

On 09.11.2017 at 22:13, Gregory M. Kurtzer wrote:
> Hi Oliver,
>
> On Thu, Nov 9, 2017 at 10:43 AM, 'Oliver Freyermuth' via singularity <singu...@lbl.gov> wrote:
>
> To add on this: Another possibility you may consider, especially if you talk about a "cluster", is using something like HTCondor.
>
> With HTCondor's current Singularity support, some bind-mount hacks, and an sshd installed inside the container, this is already possible now (if Singularity is running setuid root).
>
>
> While it is possible (in theory), we have to be very much aware of how we use our SetUID abilities, and we purposefully drop all capabilities when we exec to a process within the container. The only way someone could actually run an ssh daemon inside the container is to run the container as root.
We are already letting HTCondor run an sshd as the user inside our Singularity containers, so our users can run

condor_submit -interactive

on a desktop machine and end up inside a Singularity container on a worker node.

This naturally means privilege separation is disabled, and sshd can of course only let that single user in - but that is exactly how it's meant to work in the HTCondor world, and how HTCondor also operates when running bare metal. The main reason we still need SetUID capabilities is the permissions, owners, and groups on the devpts filesystem in the container, which sshd is very picky about. In a user namespace there's no easy fix (apart from patching sshd).

HTCondor takes care to set up such a temporary sshd, with one-time credentials, for each user who wants to attach to a running job they own. Sadly, it currently does that in a *separate* container from the original job, which is a bit useless, but it at least allows interactive jobs to run at all.
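For anyone wanting to reproduce this, the user-facing side really is just that one command; the worker-node side is a few condor_config knobs. The following is only a sketch from memory of HTCondor's Singularity support - knob names and the image path are assumptions, so check them against your local documentation:

```shell
# condor_config fragment on the worker nodes (hypothetical image path):
#   SINGULARITY            = /usr/bin/singularity
#   SINGULARITY_JOB        = true
#   SINGULARITY_IMAGE_EXPR = "/cvmfs/containers.example.org/el7.img"
#   SINGULARITY_BIND_EXPR  = "/home /data"

# User side, from a desktop machine:
condor_submit -interactive
```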

>  
>
> Then you can start an interactive job, and end up in a fresh container on the remote machine (as if ssh was used).
> Behind the scenes, sshd is actually used - the nice thing is they have magic implemented to make that work
> even if the cluster compute nodes are behind a NAT in a private network (by using a connection broker machine).
>
> For file transfer, HTCondor offers integrated techniques which are usually used for non-interactive jobs.
>
> My hope is the HTCondor people will at some point rework interactive jobs so the sshd can run outside the container and just use unprivileged nsenter to enter the container.
>
>
> You could do that but `nsenter` or rather the system call `setns()` implies a running container instance. With Singularity, you don't need that, and you can just use the Singularity shell idea I mentioned earlier, or the (very simple and clean) solution that Jeff mentioned. But... If you want to instead join an existing set of namespaces, Singularity supports that with instances, however I'm not sure a running instance is required.
I can't use the Singularity shell idea for our setup without doing one of two things:
- Granting users SSH access to our worker nodes. I don't want to do that - they are in a private network behind a NAT gateway, and the only access users get is via the reverse connections initiated by HTCondor. This also ensures HTCondor is fully aware of anything user-related running on those machines.
- Rewriting the logic of HTCondor. Currently, whenever an interactive job is requested, it always fires up a dummy job (it just runs "sleep 180"), no matter whether bare metal or container. Then the "ssh-to-job" approach is used, as if one were attaching to a "real" running job. The idea is to enter the exact same environment as the job.

For this, either nsenter or instances would be fully workable approaches. I suggested both on the HTCondor list a while ago; sadly, there's still no reply...

But of course I see many other applications for the Singularity shell in other environments, so I welcome the idea ;-).
>  
>
> Then, we could also use that with user namespaces. For Singularity, this also requires https://github.com/singularityware/singularity/pull/934 to go in.
>
>
> We can use the user namespace, without SetUID, now for this (if you are not using file based images). But you are right, Cedric's PR there is highly advantageous.
With nsenter there is a huge problem without Cedric's PR: when using nsenter to attach to the container, you end up in the same mount namespace, but with / being the root of the host. With Cedric's PR this is resolved (thanks to the use of pivot_root). That's why I would say that at least the nsenter approach requires this PR to be merged in order to be useful.
>  
>
>
> If you're not into setting up a workload management system such as HTCondor, related work includes "ch-ssh" which is offered as part of Charliecloud.
> However, that's only for transparent ssh login, not file transfers.
>
>
> Yes, ch-ssh may offer a similar functionally to what I was describing (with a Singularity login shell), but I think that Jeff's idea is much cleaner.
>
> Great ideas!
Thanks!

One more question, since I haven't tested it yet - do Singularity's instances also work when Singularity is not setuid root?

Cheers,
Oliver

Gregory M. Kurtzer

Nov 9, 2017, 7:15:39 PM
to Oliver Freyermuth, singu...@lbl.gov
On Thu, Nov 9, 2017 at 2:56 PM, Oliver Freyermuth <o.frey...@googlemail.com> wrote:
> We are already letting HTCondor run an sshd as user inside our Singularity containers.
> So our users can run
> condor_submit -interactive
> on a desktop machine, and end up inside a Singularity container on a worker node.
>
> This naturally means privilege separation is disabled, and sshd can of course only let the single user in - but this is exactly the way it's meant to be (in the HTCondor world) and the way HTCondor is also using when running bare-metal.
> The main issues why we still need SetUID-capabilities are related to the permissions, owners and groups on the devpts filesystem in the container, which sshd is very picky about. In a user namespace, there's no easy fix (apart from patching sshd).
>
> HTCondor takes care to set up such a temporary sshd with one-time credentials for each user who wants to attach to an owned, running job. Sadly, it does that in a *separate* container to the original job right now, which is a bit useless,
> but at least allows to run interactive jobs at all.

I am going to ask Brian Bockelman to step in here, as he is our resident HTCondor expert.
 

>  
>
>     Then you can start an interactive job, and end up in a fresh container on the remote machine (as if ssh was used).
>     Behind the scenes, sshd is actually used - the nice thing is they have magic implemented to make that work
>     even if the cluster compute nodes are behind a NAT in a private network (by using a connection broker machine).
>
>     For file transfer, HTCondor offers integrated techniques which are usually used for non-interactive jobs.
>
>     My hope is the HTCondor people will at some point rework interactive jobs so the sshd can run outside the container and just use unprivileged nsenter to enter the container.
>
>
> You could do that but `nsenter` or rather the system call `setns()` implies a running container instance. With Singularity, you don't need that, and you can just use the Singularity shell idea I mentioned earlier, or the (very simple and clean) solution that Jeff mentioned. But... If you want to instead join an existing set of namespaces, Singularity supports that with instances, however I'm not sure a running instance is required.
> I can't use the Singularity shell idea for our setup without doing one of two things:
> - Grant the users SSH access to our worker nodes. I don't want to do that - they are in a private network behind a NAT gateway. The only access users get is via the reverse connections initiated by HTCondor.
>   This also ensures HTCondor is completely aware of anything user-related running on those machines.
> - Rewriting the logic of HTCondor. Currently, it always (no matter whether bare metal or container) fires up a dummy job in case an interactive job is requested (it just runs a "sleep 180" job).
>   Then, the "ssh-to-job" approach is used, as if one would attach to a "real" running job. The idea is to enter the exact same environment of the job.
>
> For this, of course either nsenter or instances would be fully working ideas. I suggested both on the HTCondor list a while ago, sadly there's still no reply...
>
> But of course, I see many other applications in other environments for the Singularity shell, so I greet the idea ;-).

Gotcha, that makes perfect sense.

 
>  
>
>     Then, we could also use that with user namespaces. For Singularity, this also requires https://github.com/singularityware/singularity/pull/934 <https://github.com/singularityware/singularity/pull/934> to go in.
>
>
> We can use the user namespace, without SetUID, now for this (if you are not using file based images). But you are right, Cedric's PR there is highly advantageous.
> With nsenter, there's a huge problem without Cedric's PR in: When using nsenter to attach the container, you end up in the same mount namespace, but with / being the root of the host.
> With Cedric's PR, this is resolved (due to the use of pivot_root).
> That's why I would say that at least the "nsenter" approach requires this PR to be in to be useful.

Understood. Cedric's patch will be included as soon as possible, but it will not be part of the 2.4.1 release, which will be announced as alpha just in time for Supercomputing.
 
>  
>
>
>     If you're not into setting up a workload management system such as HTCondor, related work includes "ch-ssh" which is offered as part of Charliecloud.
>     However, that's only for transparent ssh login, not file transfers.
>
>
> Yes, ch-ssh may offer a similar functionally to what I was describing (with a Singularity login shell), but I think that Jeff's idea is much cleaner.
>
> Great ideas!
> Thanks!
>
> Maybe another question, just since I did not test it yet - do Singularity's instances work also when Singularity is not setuid root?

Assuming you are referring to using the kernel user namespace: I have not tried it, and Michael Bauer has been the primary developer on that project. I don't see why it would not work, but I must defer to Michael.

Greg

 

Oliver Freyermuth

Nov 9, 2017, 7:23:04 PM
to Gregory M. Kurtzer, singu...@lbl.gov
Hi Greg,

On 10.11.2017 at 01:15, Gregory M. Kurtzer wrote:
>
> I am going to ask Brian Bockelman to step in here, as he is our resident HTCondor expert.
Thanks! You may as well direct him to the thread on the HTCondor-users mailing list:
https://www-auth.cs.wisc.edu/lists/htcondor-users/2017-October/msg00114.shtml
It includes all the context and suggestions.

>  
>
>
> >  
> >
> >     Then you can start an interactive job, and end up in a fresh container on the remote machine (as if ssh was used).
> >     Behind the scenes, sshd is actually used - the nice thing is they have magic implemented to make that work
> >     even if the cluster compute nodes are behind a NAT in a private network (by using a connection broker machine).
> >
> >     For file transfer, HTCondor offers integrated techniques which are usually used for non-interactive jobs.
> >
> >     My hope is the HTCondor people will at some point rework interactive jobs so the sshd can run outside the container and just use unprivileged nsenter to enter the container.
> >
> >
> > You could do that but `nsenter` or rather the system call `setns()` implies a running container instance. With Singularity, you don't need that, and you can just use the Singularity shell idea I mentioned earlier, or the (very simple and clean) solution that Jeff mentioned. But... If you want to instead join an existing set of namespaces, Singularity supports that with instances, however I'm not sure a running instance is required.
> I can't use the Singularity shell idea for our setup without doing one of two things:
> - Granting the users SSH access to our worker nodes. I don't want to do that: they are in a private network behind a NAT gateway, and the only access users get is via the reverse connections initiated by HTCondor.
>   This also ensures HTCondor is completely aware of anything user-related running on those machines.
> - Rewriting the logic of HTCondor. Currently, whether on bare metal or in a container, it always fires up a dummy job when an interactive job is requested (it just runs a "sleep 180" job).
>   Then the "ssh-to-job" approach is used, as if one were attaching to a "real" running job. The idea is to enter the exact same environment as the job.
>
> For this, of course, either nsenter or instances would work fully. I suggested both on the HTCondor list a while ago; sadly, there's still no reply...
>
> But of course, I see many other applications in other environments for the Singularity shell, so I welcome the idea ;-).
>
>
> Gotcha, that makes perfect sense.
>
>  
>
> >  
> >
> >     Then, we could also use that with user namespaces. For Singularity, this also requires https://github.com/singularityware/singularity/pull/934 <https://github.com/singularityware/singularity/pull/934> <https://github.com/singularityware/singularity/pull/934 <https://github.com/singularityware/singularity/pull/934>> to go in.
> >
> >
> > We can already use the user namespace, without SetUID, for this (as long as you are not using file-based images). But you are right, Cedric's PR there is highly advantageous.
> With nsenter, there's a huge problem without Cedric's PR: when using nsenter to attach to the container, you end up in the same mount namespace, but with / still being the root of the host.
> With Cedric's PR, this is resolved (thanks to the use of pivot_root).
> That's why I would say that at least the "nsenter" approach requires this PR to be merged to be useful.
>
>
> Understood. Cedric's patch will be included as soon as possible, but it will not be part of the 2.4.1 release, which will be announced as an alpha just in time for Supercomputing.
Understood. The patch is certainly really important, but it is also rather large, so I totally understand that review will take a bit ;-).
>  
>
> >  
> >
> >
> >     If you're not into setting up a workload management system such as HTCondor, related work includes "ch-ssh" which is offered as part of Charliecloud.
> >     However, that's only for transparent ssh login, not file transfers.
> >
> >
> > Yes, ch-ssh may offer similar functionality to what I was describing (with a Singularity login shell), but I think that Jeff's idea is much cleaner.
> >
> > Great ideas!
> Thanks!
>
> Maybe another question, since I have not tested it yet - do Singularity's instances also work when Singularity is not setuid root?
>
>
> Assuming you are referring to using the kernel user namespace: I have not tried it, and Michael Bauer has been the primary developer on that project. I don't see why it would not work, but I must defer to Michael.
Thanks, I'm looking forward to that - in our setup with containers on CVMFS, we don't need overlayfs, so user namespaces would certainly be preferred (if HTCondor learns to handle them well).
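For anyone wanting to experiment with that combination, here is a minimal dry-run sketch. All names are placeholders: the image path, the `SINGULARITY_IMAGE` variable, and the assumption that your Singularity build supports `--userns` with a directory (sandbox) image rather than a file-based one.

```shell
#!/bin/bash
# Sketch (untested): run a container through the kernel user namespace
# instead of the setuid starter. The image path and the SINGULARITY_IMAGE
# variable are illustrative placeholders; --userns requires a Singularity
# build with user namespace support and a sandbox (directory) image.

IMAGE="${SINGULARITY_IMAGE:-/cvmfs/some.repo/containers/el7}"

build_cmd() {
    # Unprivileged user namespaces must also be enabled in the kernel.
    echo "singularity exec --userns ${IMAGE} $*"
}

# Dry run: print the command that would be executed instead of running it,
# so this sketch is safe to try on a machine without Singularity installed.
build_cmd /bin/bash -l
```

With a real sandbox image on CVMFS you would replace the `echo` with an `exec` of the assembled command.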


--
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
Tel.: +49 228 73 2367
Fax: +49 228 73 7869
--

Paul Hopkins

Nov 14, 2017, 9:39:29 AM
to singu...@lbl.gov, Gregory M. Kurtzer
Hi all,

Thanks for the suggestions. Actually, what I posted does work correctly; I think what killed scp file transfers was the debug messages I had added.

I was also interested in what Oliver was suggesting, as we do use HTCondor, but I had not considered starting remote interactive jobs. I do plan to add something so that all HTCondor jobs run using the default image, in line with the Singularity login shell idea. So I will be following this closely.
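For reference, a sketch of what that could look like in the startd configuration, assuming HTCondor's built-in Singularity support (8.6+); the singularity path and the image path are placeholders:

```
# condor_config fragment (sketch, untested)
SINGULARITY = /usr/bin/singularity
# Put every job on this startd into a container:
SINGULARITY_JOB = true
# Default image for all jobs. This is a ClassAd expression, so it could
# also be written to fall back to a user-requested image if one is set:
SINGULARITY_IMAGE_EXPR = "/cvmfs/containers/el7.img"
```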

Thanks,

Paul


Oliver Schulz

Jan 1, 2020, 6:23:15 PM
to singularity
I've also had some success using something like this in "~/.ssh/config":

```
Host somehost-with-mycontainer
  ProxyCommand ssh -q somehost singularity exec /path/to/mycontainer.sqsh /usr/sbin/sshd -i -o "UsePAM=no"
```

Needs sshd installed in the container and SSH keys set up for password-less login. Then

```
ssh somehost-with-mycontainer
```

will ssh into a shell in the container. One problem is that the environment scripts in "/.singularity.d/env" are not sourced automatically.
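One possible workaround (an untested sketch): have the ssh client run a remote command that sources those files first. `RemoteCommand` needs OpenSSH 7.6+ on the client side, and the glob assumes the usual layout of "/.singularity.d/env":

```
Host somehost-with-mycontainer
  ProxyCommand ssh -q somehost singularity exec /path/to/mycontainer.sqsh /usr/sbin/sshd -i -o "UsePAM=no"
  RequestTTY yes
  RemoteCommand bash -c 'for f in /.singularity.d/env/*.sh; do . "$f"; done; exec bash -i'
```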