$HOME mounting in a container-modification session


Chih-Song Kuo

unread,
Feb 24, 2017, 5:13:14 AM2/24/17
to singularity
Hello,

If I understood correctly, modifying a container's OS filesystem always has to be done by a privileged user, be it root or by means of sudo.

What I noticed is that when I launch Singularity with sudo, my actual $HOME (/home/$USER) is not bind-mounted; /root is mounted instead. Is this the expected behavior? Since one usually keeps data in /home/$USER, retrieving it requires manually binding one's own home with the -B flag, which is cumbersome and counter-intuitive. (-H /home/<user> did not work; I got "Home directory ownership incorrect: /home/<user>".) Is that a problem for you as well?

Furthermore, on my workstation /home is an NFS share (I think this is pretty common, isn't it?). When I am granted sudo rights and allowed to launch Singularity with sudo to bind-mount my home with the -B option, I found that I can also bind-mount any other user's home. Since I am elevated to root inside the container, I can destroy any of my colleagues' data on the NFS share. Is this a known issue, or did I do something wrong?
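For illustration, this is the kind of session I mean (the image name and the colleague's username are made up; flags as in the version I tested):

```shell
# I have sudo rights for the singularity command on a host whose /home
# is an NFS share. Nothing prevents me from binding a colleague's home:
sudo singularity shell -B /home/colleague:/mnt/colleague centos7.img
# Inside the container I am uid 0, so unless the NFS export squashes
# root, I could run e.g. `rm -rf /mnt/colleague/*` against their data.
```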

Chih-Song

Robin Goldstone

unread,
Feb 28, 2017, 9:33:03 PM2/28/17
to singularity
I would say whoever gave you sudo privs to run singularity on a system with a shared NFS home directory is the one who did something wrong.  Because as you noted, at that point you can mount anyone's home directory and destroy their data.

This is, IMO, one of the challenges with singularity.  Users need to build their containers on a system where they have root, and unless they are sysadmins, users shouldn't be given root on systems with shared home directories (NFS, GPFS, Lustre, etc.)

Chihsong

unread,
Mar 1, 2017, 5:29:52 AM3/1/17
to singu...@lbl.gov
Well, on production systems it is normal for some users to be given sudo privs to run a few commands like dmidecode, a cache-flushing script, or some GPFS commands (many can only be run by root). In such cases there is no worry that these commands could destroy the NFS share. But in the Singularity case, being given sudo privs for the singularity command is almost the same as being given the root password.

Hmmm, big challenge!

Chih-Song

--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity+unsubscribe@lbl.gov.

Thomas Maier

unread,
Mar 1, 2017, 6:14:34 AM3/1/17
to singularity
Hi,

to be honest, I don't find it counter-intuitive that Singularity does not automatically bind your home directory when executed with root privileges (Singularity sees you as root in this case, not as user xyz). Also, with NFS-mounted home folders, root is normally not allowed to write to user home directories anyway (well, at least this is the case on the systems I work with), so it wouldn't really make sense to bind them into the Singularity image automatically.
I don't want to make any judgements about the system you're working on (since I don't know it), but I have to second Robin's comment that, generally speaking, giving users these kinds of rights is really dangerous. Also, I don't understand your comment "... being given sudo privs to the singularity command is almost the same as being given the root password". Singularity gives you the same privileges inside the container that you have when executing it. So if you run Singularity with sudo rights (which, in my opinion, you should only do when making changes to the container environment itself), you have to act with the same amount of care as when doing anything else with sudo.

Cheers,

Thomas

David Godlove

unread,
Mar 1, 2017, 7:10:18 AM3/1/17
to singu...@lbl.gov
I think what Chihsong is alluding to is the fact that sudo can be configured to give users fine-grained permissions.  Strictly speaking, sudo != root (or sudo <= root).  It is possible for a sysadmin (and not necessarily a bad strategy) to add users to the sudoers group but restrict what they can actually do with sudo:

http://www.techrepublic.com/article/limiting-root-access-with-sudo-part-1/

Apparently, that is what Chihsong's sys admin has done.  But it sounds like singularity is not respecting those fine grained permissions.  It sees sudo and thinks "root" so once Chihsong is in a container they experience a de facto privilege escalation.  Is this correct?  Maybe because the container has an /etc/sudoers file that differs from the host?  If so, my $0.02 is that it's a bug in Singularity.  But I don't really see how to fix it...  <Points out a problem with no suggestion for solution and then runs and hides.>
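For example, a drop-in sudoers fragment like this one (usernames and paths hypothetical) is what I mean by fine-grained permissions:

```
# /etc/sudoers.d/hpc-tools -- hypothetical example
# chihsong may run exactly these commands as root, and nothing else:
chihsong ALL=(root) /usr/sbin/dmidecode
chihsong ALL=(root) /usr/local/bin/singularity
# The catch: the second line is effectively unrestricted root, because
# singularity can bind-mount filesystems and hand out a uid-0 shell.
```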


Tru Huynh

unread,
Mar 1, 2017, 7:26:05 AM3/1/17
to singu...@lbl.gov
Hi,

On Wed, Mar 01, 2017 at 07:09:35AM -0500, David Godlove wrote:
> I think what Chihsong is alluding to is the fact that sudo can be
> configured to give users fine grained permissions. Strictly speaking, sudo
> != root (or sudo <= root). It is possible as a sys admin (and not
> necessarily a bad strategy) to add users to the sudoers group, but to
> restrict what they can actually do with sudo.
>
> http://www.techrepublic.com/article/limiting-root-access-with-sudo-part-1/
>
> Apparently, that is what Chihsong's sys admin has done. But it sounds like
> singularity is not respecting those fine grained permissions. It sees sudo
> and thinks "root" so once Chihsong is in a container they experience a de
> facto privilege escalation. Is this correct? Maybe because the container
> has an /etc/sudoers file that differs from the host? If so, my $0.02 is
> that it's a bug in Singularity. But I don't really see how to fix it...
> <Points out a problem with no suggestion for solution and then runs and
> hides.>


The admin granting sudo just needs to be aware that
"sudo singularity" == "sudo bash" in terms of privileges on the running
host; not much different from allowing users to "sudo docker ..."

Building host: user's laptop or computer where he/she is root.
Running hosts: no sudo singularity, except for the usual admins.
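i.e. nothing stops the caller from something like (syntax from memory):

```shell
# both are root on the running host, not just "root inside an image":
sudo singularity exec -B /:/mnt container.img /bin/sh  # host rootfs at /mnt
sudo singularity shell --writable container.img        # uid-0 shell
```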

ymmv

cheers

Tru
--
Dr Tru Huynh | http://www.pasteur.fr/research/bis
mailto:t...@pasteur.fr | tel/fax +33 1 45 68 87 37/19
Institut Pasteur, 25-28 rue du Docteur Roux, 75724 Paris CEDEX 15 France

Chihsong

unread,
Mar 1, 2017, 8:26:56 AM3/1/17
to singu...@lbl.gov
Thomas: Well, I would say whether Singularity's home-binding behavior in a sudo session is counter-intuitive is more of a personal matter. I just want to point out that a user normally keeps source code in their own home (/home/$USER) even when they have root access. When one tries to install things into a container, one expects to install from home as well (for example by issuing make install), not from /root.

David: You got my point! BTW, I work for a system vendor and am actually both a user and a system administrator. We have plenty of clusters for internal use, and there we implement fine-grained sudo permissions. In the past I worked at a few German and Japanese universities, where I was given a workstation and remote access to a few really large HPC clusters.

Tru: That "sudo singularity" == "sudo bash" is correct and needs to be made clear to everyone (at least I was not aware of it in the beginning). Under your assumption that a user is root on her own workstation where an image is built and modified, the way Singularity behaves is all valid. I do want to point out that this assumption is pretty strong: many people simply do not get root access on their workstations, either because the workstations share some resources (NFS, printers, ...) or because the IT department worries that a user might change some core settings and cause trouble -- even just locally -- that the IT helpdesk would have to solve. This was why I thought about fine-grained sudo, which is used frequently in these situations. However, the discussion here has concluded that even such an approach would still be inappropriate.

Chih-Song



Robin Goldstone

unread,
Mar 1, 2017, 12:07:03 PM3/1/17
to singularity
Wow, I didn't realize this would generate so much discussion.  I probably should not have posted such a flippant response, sorry. I have read all of the replies and basically Tru has it right.  Yes, sudo provides more fine-grained permissions than just giving someone root but whoever is granting sudo privileges needs to understand what the particular command does.  In this case, sudo singularity lets you, among other things, bind-mount any file system into the container and then gives you a shell with uid 0 in the container.  If the filesystem is a shared NFS mount that is not set up to squash root then you will have the ability to do "root stuff" in that file system from inside the container.

This brings me to what I see as the biggest challenge in our environment (HPC facility).  If we want to let users run singularity containers on our systems, they need a place where they are root so they can build the container.  And we don't give users root on our systems for a myriad of reasons, one being shared filesystems.  So they need to find some place else to build their container.  Some of them have access to a Linux desktop where they have root, others have to get more creative.  And if they build it somewhere else, they won't have access to their home directory, which probably contains stuff they need to build their application.

The other part of this problem is portability: if you want a container that you can give to other people to run, you can't make any assumptions about their home directory path.  So you need to make sure that any applications you build in your container are ultimately not dependent on anything in your home dir.  In our environment, the mount point we use for home dirs (/g) does not exist in the container, so the bind mount fails.  I could certainly create that mount point in my container, but if I give it to someone else with a different home directory path, all bets are off as to whether their home dir will mount or not.

Gregory M. Kurtzer

unread,
Mar 1, 2017, 12:18:38 PM3/1/17
to singularity
On Wed, Mar 1, 2017 at 9:07 AM, Robin Goldstone <golds...@llnl.gov> wrote:
Wow, I didn't realize this would generate so much discussion.  I probably should not have posted such a flippant response, sorry. I have read all of the replies and basically Tru has it right.  Yes, sudo provides more fine-grained permissions than just giving someone root but whoever is granting sudo privileges needs to understand what the particular command does.  In this case, sudo singularity lets you, among other things, bind-mount any file system into the container and then gives you a shell with uid 0 in the container.  If the filesystem is a shared NFS mount that is not set up to squash root then you will have the ability to do "root stuff" in that file system from inside the container. 

This brings me to what I see as the biggest challenge in our environment (HPC facility).  If we want to let users run singularity containers on our systems, they need a place where they are root so they can build the container.  And we don't give users root on our systems for a myriad of reasons, one being shared filesystems.  So they need to find some place else to build their container.  Some of them have access to a Linux desktop where they have root, others have to get more creative.  And if they build it somewhere else, they won't have access to their home directory, which probably contains stuff they need to build their application.

I have some plans, fixes and ideas for this:

1. In the newest development work we are doing, a user can create an image and import into it as non-root. That makes this possible without being root: `singularity create tensorflow.img; singularity import tensorflow.img docker://tensorflow:latest`. Note: this works only for importing, not for bootstrapping.

2. Singularity Hub (and/or Docker Hub) can be used to build images today. Singularity Hub (shub) integrates with one's GitHub repository and, using continuous integration, will generate a new container that you can then reference via `singularity shell shub://....`.

3. I have a vision for a build service (that has yet to be built). The build service would integrate directly with Singularity and allow someone to remotely build a container using the generalized bootstrap syntax we use now, for example: `singularity bootstrap-remote container.img file.def`. That command would send file.def to a build server and wait until the build server was done, at which point it would download the resultant container.img. For all practical purposes, the build process could have been local.
 


The other part of this problem is portability: if you want a container that you can give to other people to run, you can't make any assumptions about their home directory path.  So you need to make sure that any applications you build in your container are ultimately not dependent on anything in your home dir.  In our environment, the mount point we use for home dirs (/g) does not exist in the container, so the bind mount fails.  I could certainly create that mount point in my container, but if I give it to someone else with a different home directory path, all bets are off as to whether their home dir will mount or not.

This is what the OverlayFS feature is supposed to mitigate. In 2.2 it was not enabled by default (as it tickled a bug in RHEL 7's kernel), but that bug has been fixed, and it is now enabled in the development versions. If you want to test it, search for 'overlay' in singularity.conf and enable it.
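i.e. in singularity.conf (option name as in the current development tree; double check against the comments in your copy):

```
# singularity.conf: with overlay enabled, bind points that do not exist
# inside the image (like /g) can be created on the fly at run time
enable overlay = yes
```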

BTW, you are 100% correct about the practice of making portable containers. I usually recommend that people build entirely from a bootstrap definition (recipe) for this reason, but even then there are some best practices which must be followed! And before they can be followed.... they should be written! haha

Greg






--
Gregory M. Kurtzer
HPC Systems Architect and Technology Developer
Lawrence Berkeley National Laboratory HPCS
University of California Berkeley Research IT
Singularity Linux Containers (http://singularity.lbl.gov/)
Warewulf Cluster Management (http://warewulf.lbl.gov/)

Stefan Kombrink

unread,
Mar 1, 2017, 3:06:11 PM3/1/17
to singularity
This thread is indeed highly interesting to me, since it reassures me about a lot of what I experience while trying to establish a containerized HPC container-building environment.
I see a strong demand for a safe way for a privileged admin group to create and maintain Singularity containers as non-root, ideally even as unprivileged users.

I recently tested Shifter for that purpose too, and found a few nice things it can do, especially the image gateway service, which is accessible to users, and the fact that pre/post conversion trigger scripts can be executed at user level when importing Docker containers.
But all in all it is not feature-complete, and I prefer Singularity, which seems more mature.

Hence, a container-building service tightly coupled to the Singularity runtime would be great!
I tested remote solutions such as Docker Hub and Singularity Hub, and both have the problem that local data/software is not directly accessible and must therefore be transferred beforehand.
Also, requiring a Singularity/Docker installation on my local PC in order to create and maintain customized containers is, IMO, too much to ask if we want to convince our user base to switch from established non-container workflows to containerized ones.

Greets & thanks, Stefan

Todd Gamblin

unread,
Mar 1, 2017, 6:12:15 PM3/1/17
to singularity
Greg:


On Wednesday, March 1, 2017 at 9:18:38 AM UTC-8, Gregory M. Kurtzer wrote:

On Wed, Mar 1, 2017 at 9:07 AM, Robin Goldstone <golds...@llnl.gov> wrote:
This brings me to what I see as the biggest challenge in our environment (HPC facility).  If we want to let users run singularity containers on our systems, they need a place where they are root so they can build the container.  And we don't give users root on our systems for a myriad of reasons, one being shared filesystems.  So they need to find some place else to build their container.  Some of them have access to a Linux desktop where they have root, others have to get more creative.  And if they build it somewhere else, they won't have access to their home directory, which probably contains stuff they need to build their application.

I have some plans, fixes and ideas for this:

1. In the newest development work we are doing, a user can create an image and import into it as non-root. That makes this possible without being root: `singularity create tensorflow.img; singularity import tensorflow.img docker://tensorflow:latest`. Note: this works only for importing, not for bootstrapping.

2. Singularity Hub (and/or Docker Hub) can be used to build images today. Singularity Hub (shub) integrates with one's GitHub repository and, using continuous integration, will generate a new container that you can then reference via `singularity shell shub://....`.

3. I have a vision for a build service (that has yet to be built). The build service would integrate directly with Singularity and allow someone to remotely build a container using the generalized bootstrap syntax we use now, for example: `singularity bootstrap-remote container.img file.def`. That command would send file.def to a build server and wait until the build server was done, at which point it would download the resultant container.img. For all practical purposes, the build process could have been local.

Those all sound exciting!

I've got another use case to bounce off of you that I think will address a lot of peoples' concerns about, "containerized HPC container build environments", as Stefan put it above.

First some points:

1. I think most container users want to be root so they can do things like install packages using the system package manager.  That is great, but it's hard when singularity sudo == bash sudo, as mentioned above.  

2. I suspect that most of our users don't necessarily need more system packages.  They need to build their custom HPC packages and snapshot *those* to send to someone else.  They could do that in their home directory.

3. If you had a user space package manager (like Spack, https://spack.io) that could install all your HPC dependencies in your home directory, you could really easily build your dependencies AND your application in your home directory.

Current issues:

1. Home directories have different mount points across different systems.
2. Users have different uids across systems.

Both of these make it hard to build in your home directory and then share what you did with someone else.

What if you had a well-known path, like /home/mysingularity, that was *always* owned by the user running the container?  So if I launch a container as tgamblin, that mount point shows up in the container and is owned by tgamblin.  If I build something there as tgamblin, it'll stay in the container, and when I give it to user rgoldstone, she launches the container and the /home/mysingularity directory is owned by rgoldstone.  Now I have a location where I can build the same way I'm used to, even without root, and I can easily share that mount with other users.

This would allow our users to build containers the same way they currently do on our clusters, without them having to understand any fancy filesystem mounts, overlay semantics, or external build farms.  Coupled with Spack (an entirely user-space package manager), they could also easily build their dependencies in the "mysingularity" directory, without worrying about portability concerns that arise from differently named home directories on different systems.
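Sketched as a session (everything here is hypothetical, since the feature doesn't exist yet):

```shell
# user1 builds into the well-known, always-user-owned path...
singularity shell --writable stack.img
#   Singularity stack.img> make install prefix=/home/mysingularity
# ...then hands stack.img to user2, who sees the same tree as herself:
singularity shell stack.img
#   Singularity stack.img> ls -ld /home/mysingularity   # owner: user2
```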

Does that make sense, and would it be an easy tweak?  Or am I missing something?

-Todd


Gregory M. Kurtzer

unread,
Mar 1, 2017, 6:28:18 PM3/1/17
to singularity
Hi Todd,

Comments inline

I think that is a great idea! If I am understanding correctly, could an example implementation look like this:

$ singularity shell --writable ~/container.img
Singularity container.img> echo $PATH
/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/applications/bin
Singularity container.img> cd ~/git/awesomeness
Singularity container.img> ./configure --prefix=/applications && make && make install

If so, that would be pretty straightforward to implement.

Thanks!
 

Gamblin, Todd

unread,
Mar 1, 2017, 7:02:45 PM3/1/17
to singu...@lbl.gov
Greg:

Comments below.

On Mar 1, 2017, at 3:28 PM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:

This would allow our users to build containers the same way they currently do on our clusters, without them having to understand any fancy filesystem mounts, overlay semantics, or external build farms.  Coupled with Spack (an entirely user-space package manager), they could also easily build their dependencies in the "mysingularity" directory, without worrying about portability concerns that arise from differently named home directories on different systems.

Does that make sense, and would it be an easy tweak?  Or am I missing something?

I think that is a great idea! If I am understanding correctly, could an example implementation look like this:

$ singularity shell --writable ~/container.img
Singularity container.img> echo $PATH
/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/applications/bin
Singularity container.img> cd ~/git/awesomeness
Singularity container.img> ./configure --prefix=/applications && make && make install

If so, that would be pretty straightforward to implement.

Yes!  If /applications is always owned by <whoever launched the container>, then I think this is exactly my use case!

-Todd







Gregory M. Kurtzer

unread,
Mar 1, 2017, 7:34:32 PM3/1/17
to singularity
Hi Todd,

So I am thinking about this... If invoking Singularity changes the ownership of the directory `/applications` to the calling user, and the calling user (let's assume UID 1234) installs files into that path, would it be considered tolerable that, after the container exits, those files would always be owned by UID 1234?

I ask because there are some things I am rather OCD on, and file ownerships and correct permissions are one of them.

Going off on a related tangent... I think the best way to address this is actually for people never to modify containers once they have been bootstrapped. The --writable option is a double-edged sword because it breaks reproducibility. If we can find a way for non-root users to bootstrap, and then integrate something like Spack or EasyBuild into the bootstrap recipe, I think that would be the ideal way to go, but I'm not sure exactly what that integration layer would look like. Thoughts?
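As a sketch of what I mean, the bootstrap definition could drive the user-space package manager directly (the BootStrap/%post syntax is the current recipe format; the Spack lines are only a guess at the integration):

```
BootStrap: docker
From: centos:7

%post
    yum -y install git gcc gcc-c++ make python
    # hypothetical integration layer: let the recipe, not a --writable
    # shell session, do the user-space builds reproducibly
    git clone https://github.com/llnl/spack.git /opt/spack
    /opt/spack/bin/spack install zlib
```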

Thanks!


Goldstone, Robin J.

unread,
Mar 1, 2017, 8:02:30 PM3/1/17
to singu...@lbl.gov

Re: not being able to modify containers once bootstrapped, I can tell you that, at least in my early experience, it took a lot of trial and error to get everything the way I wanted it in my container.  My approach was to bootstrap a minimal container, then shell in and manually muck around with things until I had everything working, then stuff the final recipe into the def file once I had figured it all out.

 

If the only way I could build my container was by iteratively modifying the def file and rebuilding the container, I am afraid I would have to shoot myself.  :(


Goldstone, Robin J.

unread,
Mar 1, 2017, 8:12:08 PM3/1/17
to singu...@lbl.gov

With regard to /applications, I like the idea of having a directory that always inherits file ownership from the calling user. But how would you implement that?  If it requires a recursive chown, that seems like it would add a lot of overhead to container startup.  As for leaving the file ownership as-is when the container exits, I agree that seems a bit weird, though I'm not sure it is a security issue.   One solution could be to change the ownership to nobody whenever the container shuts down.  But again, if that requires chown -R, it could add significant overhead.
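One way to dodge the chown entirely, using only the bind machinery that exists today, might be to bind a per-user host directory over the well-known path, so ownership comes for free from the host filesystem (wrapper sketch; paths made up):

```shell
# /applications inside the container == a directory the calling user
# already owns outside, so no recursive chown at startup or shutdown
mkdir -p "$HOME/.singularity/applications"
singularity shell -B "$HOME/.singularity/applications:/applications" container.img
```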

 


Gamblin, Todd

unread,
Mar 1, 2017, 8:48:44 PM3/1/17
to singu...@lbl.gov
Greg:

On Mar 1, 2017, at 4:34 PM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:

So I am thinking about this... If invoking Singularity changes the ownership of the directory `/applications` to the calling user, and the calling user (let's assume UID 1234) installs files into that path, would it be considered tolerable that, after the container exits, those files would always be owned by UID 1234?

No.  Basically I’m trying to enable users to package user-space things, which is what they currently do on our systems.  An app developer (say, user1) would build up a software stack in their home directory, snapshot it, and then try to share it with other users.  I want user2 to be able to log in and see exactly what user1 built, but as user2.

Basically I want it to be like I called in some build expert, they sat at my terminal and built stuff for me, and then they handed the terminal back to me.  This is why I want /applications to be owned by <whoever launched the container>.

If /applications is always owned by 1234, and I share that with someone running on a system where there is no user 1234, then the person using the container wouldn’t be able to write to /applications.

I ask because there are some things I am rather OCD on, and file ownerships and correct permissions are one of them.

I can understand that.  But I think that most container systems are not well suited to HPC environments because they don’t containerize user space (where HPC people work).  They containerize system space.  In system space there is a fixed set of known user ids (root, etc.) that it makes sense to freeze in an absolute sense.  You’re snapshotting what the vendor did.  The vendor is always someone other than the user.  But in this use case the “user” could be lots of different people, and I want to hand my environment off to them and have them use it as themselves.

I don’t think this violates reproducibility — it allows me to reproduce user-space things for different users.

Going off on a related tangent... I think the best way to address this is actually for people never to modify containers once they have been bootstrapped. The --writable option is a double-edged sword because it breaks reproducibility. If we can find a way for non-root users to bootstrap, and then integrate something like Spack or EasyBuild into the bootstrap recipe, I think that would be the ideal way to go, but I'm not sure exactly what that integration layer would look like. Thoughts?

I agree in principle.  I think people should shoot to make a bootstrap recipe.  But like Robin said, I think you want to be able to make the thing writable so that people can iterate in the container environment, THEN make a recipe.  Or iterate on a build, THEN snapshot it.

Put differently, I want interactive use so I can debug my setup before I snapshot it for production use.  That’s especially important if the container environment is some OS I’m not used to, or some environment I’ve never tried.

-Todd




Robert Schmidt

unread,
Mar 1, 2017, 9:08:23 PM3/1/17
to singu...@lbl.gov
On Wed, Mar 1, 2017 at 8:48 PM Gamblin, Todd <gamb...@llnl.gov> wrote:

No.  Basically I’m trying to enable users to package user-space things, which is what they currently do on our systems.  An app developer (say, user1) would build up a software stack in their home directory, snapshot it, and then try to share it with other users.  I want user2 to be able to log in and see exactly what user1 built, but as user2.


Yeah, I can see this being an important feature.

Essentially, having a mounted directory that is "special": writable and always owned by the running user.

The way I work around this is by packaging my builds using EasyBuild and FPM, but a more general solution could be very handy. I think it would be especially handy if it were exportable in some way, so that it could be transferred separately from the main image (which would then just be the OS plus some interesting deps).

Bennet Fauber

unread,
Mar 1, 2017, 10:13:17 PM3/1/17
to singu...@lbl.gov
Greg,

> I agree in principle. I think people should shoot to make a bootstrap
> recipe. But like Robin said, I think you want to be able to make the thing
> writable so that people can iterate in the container environment, THEN make
> a recipe. Or iterate on a build, THEN snapshot it.
>
> Put differently, I want interactive use so I can debug my setup before I
> snapshot it for production use. That’s especially important if the
> container environment is some OS I’m not used to, or some environment I’ve
> never tried.

I agree that container recipes are a laudable goal, but they are an
end-product and not a good mechanism for figuring out what needs to go
in the container.

Our scenario was to copy the desired application into the
container, then figure out which libraries and subsidiary programs
were actually needed. I think Robin is spot on -- a recipe-only
workflow would lead to a frustrating, time-consuming 'development'
cycle and would discourage use more than encourage it.

I think there's a very good balance now between being able to use all
the existing Docker stuff and being able to make a very lean container
from scratch without ever having to touch Docker.

-- bennet

Gregory M. Kurtzer

unread,
Mar 1, 2017, 11:07:50 PM3/1/17
to singularity
Haha, ok, point taken, but I should have clarified. Debugging and testing through a recipe would of course be masochistic, but once things have been worked out, recapitulating them within a definition file (recipe) is good practice and very valuable. Metaphorically, I see it kind of like doing local system administration vs. config management. :)




--

Gregory M. Kurtzer

HPC Systems Architect and Technology Developer

Lawrence Berkeley National Laboratory HPCS
University of California Berkeley Research IT
Singularity Linux Containers (http://singularity.lbl.gov/)

Warewulf Cluster Management (http://warewulf.lbl.gov/)


Gregory M. Kurtzer

unread,
Mar 1, 2017, 11:12:30 PM3/1/17
to singularity
On Wed, Mar 1, 2017 at 5:12 PM, Goldstone, Robin J. <golds...@llnl.gov> wrote:

With regard to /applications, I like the idea of having a directory that always inherits file ownership from the calling user. But how would you implement that?  If it requires a recursive chown, that seems like it would add a lot of overhead to container startup.  In terms of leaving the file ownership as-is when the container exits, I agree that seems a bit weird, though I'm not sure it is a security issue.  One solution could be to change the ownership to nobody any time the container shuts down.  But again, if that requires chown -R it could add significant overhead.


Yes, this was exactly what I was getting at regarding UIDs. Even if the directory itself gets chown'ed to the calling user, I would prefer not to recurse through the directory and change everything to root. And/or, when a different user mounts the container with --writable, changing it all back to them. This is why I like the idea of doing it during bootstrap, when the calling user is by definition root.

The other option is to just make `/applications` within the container 0777, not pay any attention to the resulting UIDs, and just ignore the entire UID mashup in that directory. As I mentioned, the OCD part of my personality hates this, but perhaps I can distract myself if others feel this is acceptable. Or better yet... perhaps someone has a suggestion that I don't see.

Thanks!
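[Editor's note: to put rough numbers on the recursive-chown concern, here is a Python sketch. This is an illustration, not anything Singularity actually does: `chown -R` costs one syscall per inode in the tree, while chowning only the top-level directory is a single syscall. Chowning files to our own uid/gid is a permitted no-op for unprivileged users, so the sketch is safe to run.]

```python
import os
import tempfile

def recursive_chown(top, uid, gid):
    """What `chown -R` amounts to: one chown syscall per inode in the tree."""
    touched = 0
    for dirpath, dirnames, filenames in os.walk(top):
        os.lchown(dirpath, uid, gid)          # the directory itself
        touched += 1
        for name in filenames:
            os.lchown(os.path.join(dirpath, name), uid, gid)
            touched += 1
    return touched

# Build a small stand-in for an /applications tree: 10 packages x 100 files.
root = tempfile.mkdtemp()
for d in range(10):
    pkg = os.path.join(root, "pkg%d" % d)
    os.mkdir(pkg)
    for f in range(100):
        open(os.path.join(pkg, "file%d" % f), "w").close()

uid, gid = os.getuid(), os.getgid()           # chown to ourselves: a no-op
print(recursive_chown(root, uid, gid))        # 1011 inodes: 1 root + 10 dirs + 1000 files

# The non-recursive compromise touches exactly one inode, however big the tree:
os.lchown(root, uid, gid)
```

A real `/applications` full of compiler toolchains can easily hold hundreds of thousands of inodes, which is why doing this on every container startup (and again on shutdown) worries people.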


 


Gregory M. Kurtzer

unread,
Mar 1, 2017, 11:23:48 PM3/1/17
to singularity
On Wed, Mar 1, 2017 at 5:48 PM, Gamblin, Todd <gamb...@llnl.gov> wrote:
Greg:

On Mar 1, 2017, at 4:34 PM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:

So I am thinking about this... If the invocation of Singularity changes the ownership of the directory `/applications` to the calling user, and then the calling user (let's assume UID=1234) installs files into that path, when the container exits, would it be considered tolerable that the files would always be owned by UID 1234?

No.  Basically I’m trying to enable users to package user-space things, which is what they currently do on our systems.  An app developer (say, user1) would build up a software stack in their home directory, snapshot that, and then try to share it with other users.  I want user2 to be able to log in and see exactly what user1 built, but as user2.  

Unless I am misunderstanding, that would involve a recursive chown, something I would prefer to avoid if at all possible, as it kinda abuses the SUID privilege that Singularity has. It also would defy the principle of least astonishment... well, at least for me it would. Or perhaps I am confused. haha... If so, sorry, can you elaborate?
 

Basically I want it to be like I called in some build expert, they sat at my terminal and built stuff for me, and then they handed the terminal back to me.  This is why I want /applications to be owned by <whoever launched the container>.

I can do a non-recursive chown on just the directory `/applications` as a compromise.
 

If /applications is always owned by 1234, and I share that with someone running on a system where there is no user 1234, then the person using the container wouldn’t be able to write to /applications.

Yep. Except that they could write to the `/applications` directory itself, just not to any files that are already there (unless I recursively chown).
 

I ask because there are some things I am rather OCD on, and file ownerships and correct permissions are one of them.

I can understand that.  But I think that most containers are not well suited to HPC environments because they don’t containerize user space (where HPC people work).  They containerize system space.  In system space there is a fixed set of known user ids (root, etc.) that it makes sense to freeze in an absolute sense.  You’re snapshotting what the vendor did.  The vendor is always someone other than the user.  But for any use case, the “user” could be lots of different people, and I want to hand my environment off to them and have them use it as themselves.

I didn't consider the differentiation between user spaces (from the kernel perspective, anything not kernel space is user space), but there is value in doing that, as you pointed out. So we have the "system" space, which is the non-kernel components of the operating system. Then we have a user's environment (e.g. $HOME and scratch directories). In Singularity terms, $HOME and scratch are shared with the host, but (again, if I am following) you are suggesting another space that kind of sits between the two: user-controlled applications that exist within a container, maybe akin to a software module farm?
 

I don’t think this violates reproducibility — it allows me to reproduce user-space things for different users.

If the application space follows the container, it doesn't violate reproducibility of the container itself, but what about reproducibility of creating that container (e.g. for Singularity Hub, or a build service, or someone who wants to leverage all of your work but make some minor changes to the recipe)? For me, the golden nugget of reproducibility is twofold: one part is the container itself, and the other is the bootstrap definition file.
 

Going off on a related tangent... I think the best way to address this is actually for people never to modify containers once they have been bootstrapped. The --writable option is a double-edged sword because it breaks reproducibility, and if we can find a way for non-root users to bootstrap, and then integrate something like Spack or EasyBuild into the bootstrap recipe, I think that would be the ideal way to go, but I'm not sure exactly what that integration layer would look like. Thoughts?

I agree in principle.  I think people should shoot to make a bootstrap recipe.  But like Robin said, I think you want to be able to make the thing writable so that people can iterate in the container environment, THEN make a recipe.  Or iterate on a build, THEN snapshot it.

Put differently, I want interactive use so I can debug my setup before I snapshot it for production use.  That’s especially important if the container environment is some OS I’m not used to, or some environment I’ve never tried.

Yes, I agree, and apologies for my previous wording. Debugging and development is far easier when done interactively.

Thanks, great ideas you guys!

Greg




 


Stefan Kombrink

unread,
Mar 2, 2017, 2:08:26 AM3/2/17
to singularity
chown is destructive, especially recursively. Is a GID/UID mapping maybe an alternative?
Let's say the calling GID/UID is mapped to a well-defined SINGULARITY UID/GID which applies during shell/exec.
When storing user content in a container, it would be owned by the SINGULARITY UID/GID.
I usually do not want to preserve the original UID/GID, because then another user might have trouble accessing this data.
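[Editor's note: Stefan's proposal is close to what Linux user namespaces already express through `/proc/<pid>/uid_map`, where each entry maps a range of outside (host) ids onto inside (container) ids. A toy Python sketch of that translation follows; all ids below are made up for illustration.]

```python
NOBODY = 65534  # the kernel's overflow uid for unmapped ids

def map_uid(host_uid, uid_map):
    """Translate a host uid into a container uid.

    uid_map is a list of (inside_start, outside_start, count) entries,
    the same shape as the lines of /proc/<pid>/uid_map.
    """
    for inside, outside, count in uid_map:
        if outside <= host_uid < outside + count:
            return inside + (host_uid - outside)
    return NOBODY  # unmapped ids collapse to "nobody"

# Map whoever launched the container (host uid 31415, say) onto a
# well-defined in-container id, e.g. SINGULARITY uid 1000:
uid_map = [(1000, 31415, 1)]
print(map_uid(31415, uid_map))  # 1000: content this user writes is owned by 1000
print(map_uid(12345, uid_map))  # 65534: any other host uid appears as "nobody"
```

Under such a scheme, content stored in the container is always owned by the fixed in-container id, regardless of which host uid launched it, which is exactly the property Stefan is after.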


Tru Huynh

unread,
Mar 2, 2017, 3:22:27 AM3/2/17
to 'Stefan Kombrink' via singularity
Hi,

On Wed, Mar 01, 2017 at 11:08:26PM -0800, 'Stefan Kombrink' via singularity wrote:
> chown is destructive, especially recursively. Is a GID / UID mapping maybe
> an alternative?
I don't really like the chown idea either.
> Let's say I can map the calling GID/UID to a well-defined SINGULARITY
> UID/GID which apply during shelling/execution?
> When storing user content in a container they'd be owned by SINGULARITY
> UID/GID
+1 for the mapping
- can we have that in an on-the-side application container? i.e. not
inside the OS container?
- a configurable (in singularity.conf or on the command line) mount point with uid/gid mapping
and a permissions override (i.e. execute/read only) if the uid/gid do
not match between producer and end-user

i.e. singularity exec --application my_c7.img --override-uid=root:tru centos7.img

centos7.img + overlay of my_c7.img + permissions inside my_c7.img for
user root mapped to tru?

use cases/advantages:
- allows point-release QA without the disk-space penalty
- an easy "fix" for multi-end-user uid/gid mismatches (laptop, HPC
centers, cloud instances, ...)
- a separate application layer for non-OS-provided software (EB/Spack/...)
- compatibility testing across major releases within the same
distribution
...
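[Editor's note: Tru's "permissions override if the uid/gid don't match" could be as simple as masking the write bits whenever the consumer isn't the producer. A tiny sketch of that policy; the function name and the example uids are made up.]

```python
import stat

def effective_mode(mode, producer_uid, consumer_uid):
    """Permission override on uid mismatch: the producer keeps full access,
    anyone else sees the content read/execute only (write bits masked)."""
    if producer_uid == consumer_uid:
        return mode
    return mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH)  # mode & ~0o222

# tru (uid 1000) built the overlay; tru keeps rw, everyone else drops to r-only:
print(oct(effective_mode(0o664, 1000, 1000)))  # 0o664
print(oct(effective_mode(0o664, 1000, 1001)))  # 0o444
```

Applied at mount time, this would let an application overlay travel between users and systems without any chown at all: ownership stays as the producer left it, and mismatched consumers simply lose write access.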

Cheers

Gamblin, Todd

unread,
Mar 2, 2017, 3:31:50 AM3/2/17
to singu...@lbl.gov
On Mar 1, 2017, at 11:08 PM, 'Stefan Kombrink' via singularity <singu...@lbl.gov> wrote:

chown is destructive, especially recursively. Is a GID/UID mapping maybe an alternative?
Let's say the calling GID/UID is mapped to a well-defined SINGULARITY UID/GID which applies during shell/exec.
When storing user content in a container, it would be owned by the SINGULARITY UID/GID.
I usually do not want to preserve the original UID/GID, because then another user might have trouble accessing this data.

I was actually thinking of an implementation much like this.  chown would only cover /applications, or other hard-coded, special places known to singularity.  uid mapping would be more versatile, as I could imagine creating containers with many different directories chowned to the $USER account.  

For example, we might want to create directories for our standard LLNL user environment: /usr/tce, /usr/gapps, and /usr/global, then hand that over to a user so that they could reproduce an LLNL app deployment layout as a user, building in a container on our cluster.

In real life, those directories are owned by many different users on the real machine.  With uid mapping as Stefan suggests, admins could build arbitrary such environments as root, then allow users to build on them as $USER within a container.

Am Donnerstag, 2. März 2017 05:23:48 UTC+1 schrieb Gregory M. Kurtzer:
On Wed, Mar 1, 2017 at 5:48 PM, Gamblin, Todd <gamb...@llnl.gov> wrote:
Greg:
Basically I want it to be like I called in some build expert, they sat at my terminal and built stuff for me, and then they handed the terminal back to me.  This is why I want /applications to be owned by <whoever launched the container>.

I can do a non-recursive chown on just the directory `/applications` as a compromise.

I don’t think this would work.  I can think of testing and deployment use cases where I want to preserve the writability of the created directories. 

I ask because there are some things I am rather OCD on, and file ownerships and correct permissions are one of them.

I can understand that.  But I think that most containers are not well suited to HPC environments because they don’t containerize user space (where HPC people work).  They containerize system space.  In system space there is a fixed set of known user ids (root, etc.) that it makes sense to freeze in an absolute sense.  You’re snapshotting what the vendor did.  The vendor is always someone other than the user.  But for any use case, the “user” could be lots of different people, and I want to hand my environment off to them and have them use it as themselves.

I didn't consider the differentiation between user spaces (from the kernel perspective, anything not kernel space is user space), but there is value in doing that, as you pointed out. So we have the "system" space, which is the non-kernel components of the operating system. Then we have a user's environment (e.g. $HOME and scratch directories). In Singularity terms, $HOME and scratch are shared with the host, but (again, if I am following) you are suggesting another space that kind of sits between the two: user-controlled applications that exist within a container, maybe akin to a software module farm?

Yes, totally!  In many ways this is what differentiates the HPC usage model from what people do in the cloud.  The spaces you describe are determined by the roles at the HPC facility, and they correspond to different privilege levels.  “system” stuff is typically installed by facility staff using the system package manager, but HPC users don’t ever really use that.  Users work in $HOME and have to build their universe there, but I want a *reproducible* $HOME that I can share with other users and that abstracts the details of how $HOME is mounted on my system.

I’m not sure I 100% know what you mean by “software module farm” — but I think we are on the same page here.

Containers come from a world where the devops guys get to be root all the time and use the system package manager to set up containers.  root’s environment is the whole system, and it can set things up however it wants in /root or / or /usr.  Users get their home directory, but it’s named different things on different systems.

I don’t think this violates reproducibility — it allows me to reproduce user-space things for different users.

If the application space follows the container, it doesn't violate reproducibility of the container itself, but what about reproducibility of creating that container (e.g. for Singularity Hub, or a build service, or someone who wants to leverage all of your work but make some minor changes to the recipe)? For me, the golden nugget of reproducibility is twofold: one part is the container itself, and the other is the bootstrap definition file.

Well, this is why I want /mystuff to follow the user.  If it’s always owned by the user running the container, whoever that is, then the person leveraging the work can always pick up where I left off, because they’re basically dropping into my environment the way I left it.  And they can refer to /mystuff, /applications, or whatever the same way I would.  Think of it as reproducing a first-person environment, whereas root really reproduces a third-person environment.

Doing uid mapping for just the user running the container and making the running user always be a separate uid *within* the container would preserve the binary reproducibility  of the container, right?

Caveat to this: I haven’t thought through what this means for gid, only uid. I’m not sure whether the gid should follow the user the same way.

-Todd

Rémy Dernat

unread,
Mar 2, 2017, 6:32:49 AM3/2/17
to singu...@lbl.gov
Hi,
Interesting ideas! Especially the build farm.

Concerning /applications, I prefer Tru's idea, with a chown/chgrp modification from root via a singularity command, instead of a 0777 directory, even if the latter would be easier to set up at the beginning.
BTW, perhaps user software installations could also be thought of as a nested container, or handled with something like conda + virtualenv?

Remy
