--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity+unsubscribe@lbl.gov.
Wow, I didn't realize this would generate so much discussion. I probably should not have posted such a flippant response, sorry. I have read all of the replies and basically Tru has it right. Yes, sudo provides more fine-grained permissions than just giving someone root but whoever is granting sudo privileges needs to understand what the particular command does. In this case, sudo singularity lets you, among other things, bind-mount any file system into the container and then gives you a shell with uid 0 in the container. If the filesystem is a shared NFS mount that is not set up to squash root then you will have the ability to do "root stuff" in that file system from inside the container.
This brings me to what I see as the biggest challenge in our environment (HPC facility). If we want to let users run singularity containers on our systems, they need a place where they are root so they can build the container. And we don't give users root on our systems for a myriad of reasons, one being shared filesystems. So they need to find some place else to build their container. Some of them have access to a Linux desktop where they have root, others have to get more creative. And if they build it somewhere else, they won't have access to their home directory, which probably contains stuff they need to build their application.
The other part of this problem is that if you want a container to be portable, meaning a container that you can give to other people to run, you can't make any assumptions about their home directory path. So you need to make sure that any applications you build in your container are ultimately not dependent on anything in your home dir. In our environment, the mount point we use for home dirs (/g) does not exist in the container, so the bind mount fails. I could certainly create that mount point in my container, but if I give it to someone else with a different home directory path, all bets are off as to whether their home dir will mount or not.
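A toy sketch (not Singularity code, paths hypothetical) of the portability problem described above: bind-mounting the host home directory only works if the mount point already exists inside the container's root filesystem.

```python
import os

def home_mount_point_exists(container_root, host_home):
    """True if `host_home` has a matching directory under `container_root`,
    i.e. a bind mount of the host home dir would have somewhere to land."""
    return os.path.isdir(os.path.join(container_root, host_home.lstrip("/")))
```

For example, a container image built without a /g directory would fail this check for any home directory that lives under /g on the host.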
On Wed, Mar 1, 2017 at 9:07 AM, Robin Goldstone <golds...@llnl.gov> wrote:

This brings me to what I see as the biggest challenge in our environment (HPC facility). If we want to let users run singularity containers on our systems, they need a place where they are root so they can build the container. And we don't give users root on our systems for a myriad of reasons, one being shared filesystems. So they need to find some place else to build their container. Some of them have access to a Linux desktop where they have root, others have to get more creative. And if they build it somewhere else, they won't have access to their home directory, which probably contains stuff they need to build their application.

I have some plans, fixes and ideas for this:

1. In the newest development work we are doing, a user can create an image and import to that image as non-root. That makes this possible: `singularity create tensorflow.img; singularity import tensorflow.img docker://tensorflow:latest` without being root. Note: this does not work for bootstrapping, only importing.

2. Singularity Hub (and/or DockerHub) can be used to build images today. Singularity Hub (shub) integrates with one's GitHub repository and, using continuous integration, will generate a new container that you can then reference via `singularity shell shub://....`.

3. I have a vision for a build service (that has yet to be built). The build service would integrate directly with Singularity and allow someone to remotely build a container using the generalized bootstrap syntax we use now. For example: `singularity bootstrap-remote container.img file.def`. That command would send file.def to a build server and wait until the build server was done, at which point it would download the resulting container.img. For all practical purposes, the build process could have been local.
1. I think most container users want to be root so they can do things like install packages using the system package manager. That is great, but it's hard when singularity sudo == bash sudo, as mentioned above.

2. I suspect that most of our users don't necessarily need more system packages. They need to build their custom HPC packages and snapshot *those* to send to someone else. They could do that in their home directory.

3. If you had a user-space package manager (like Spack, https://spack.io) that could install all your HPC dependencies in your home directory, you could really easily build your dependencies AND your application in your home directory.
1. Home directories have different mount points across different systems.
2. Users have different uids across systems.
On Mar 1, 2017, at 3:28 PM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:
This would allow our users to build containers the same way they currently do on our clusters, without them having to understand any fancy filesystem mounts, overlay semantics, or external build farms. Coupled with Spack (an entirely user-space package manager) they could also easily build their dependencies in the "myuser" directory, without worrying about portability concerns that arise from differently named home directories on different systems.
Does that make sense, and would it be an easy tweak? Or am I missing something?
I think that is a great idea! If I am understanding correctly, could an example implementation look like this:
$ singularity shell --writable ~/container.img
Singularity container.img> echo $PATH
/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/applications/bin
Singularity container.img> cd ~/git/awesomeness
Singularity container.img> ./configure --prefix=/applications && make && make install
If so, that would be pretty straight forward to implement.
--
Thanks!
Gregory M. Kurtzer
HPC Systems Architect and Technology Developer
Lawrence Berkeley National Laboratory HPCS
University of California Berkeley Research IT
Singularity Linux Containers (http://singularity.lbl.gov/)
Warewulf Cluster Management (http://warewulf.lbl.gov/)
GitHub: https://github.com/gmkurtzer, Twitter: https://twitter.com/gmkurtzer
Greg:
Comments below.
Re: not being able to modify containers once bootstrapped, I can tell you that at least in my early experience, it took a lot of trial and error to get everything the way I wanted it in my container. My approach was to bootstrap a minimal container then shell in and manually muck around with things until I had everything working, then stuff the final recipe into the def file once I figured it all out.
If the only way I could build my container was by iteratively modifying the def file and rebuilding the container, I am afraid I would have to shoot myself.
With regard to /applications, I like the idea of having a directory that always inherits file ownership from the calling user. But how would you implement that? If it requires a recursive chown, that seems like it would add a lot of overhead to container startup. In terms of leaving the file ownership as-is when the container exits, I agree that seems a bit weird, though I'm not sure it is a security issue. One solution could be to change the ownership to nobody any time the container shuts down. But again, if that requires chown -R it could add significant overhead.
From: "Gregory M. Kurtzer" <gmku...@lbl.gov>
Reply-To: "singu...@lbl.gov" <singu...@lbl.gov>
Date: Wednesday, March 1, 2017 at 4:34 PM
To: singularity <singu...@lbl.gov>
Subject: Re: [Singularity] $HOME mounting in a container-modification session
Hi Todd,
So I am thinking about this... If the invocation of Singularity changes the ownership of the directory `/applications` to the calling user, and then the calling user (let's assume UID=1234) installs files into that path, when the container exits, would it be considered tolerable that the files would always be owned by UID 1234?
I ask because there are some things I am rather OCD on, and file ownerships and correct permissions are one of them.
Going off on a related tangent... I think the best way to address this is actually for people not ever to modify containers once they have been bootstrapped. The --writable option is a double-edged sword because it breaks reproducibility. If we can find a way for non-root users to bootstrap, and then integrate something like Spack or EasyBuild into the bootstrap recipe, I think that would be the ideal way to go, but I'm not sure exactly what that integration layer would look like. Thoughts?
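As a sketch of what that integration might look like, here is a hypothetical bootstrap definition fragment. The standard Bootstrap/From headers and %post section are real Singularity syntax, but the Spack steps inside %post are an assumption about how such an integration could work, not anything that exists today:

```
Bootstrap: docker
From: ubuntu:16.04

%post
    # Hypothetical integration: fetch a user-space package manager at
    # bootstrap time and let it build the HPC stack inside the image.
    apt-get update && apt-get install -y git build-essential python
    git clone https://github.com/spack/spack.git /opt/spack
    . /opt/spack/share/spack/setup-env.sh
    spack install zlib
```

The appeal is that the def file, not a hand-modified image, becomes the reproducible artifact.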
Greg:
On Mar 1, 2017, at 4:34 PM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:
So I am thinking about this... If the invocation of Singularity changes the ownership of the directory `/applications` to the calling user, and then the calling user (let's assume UID=1234) installs files into that path, when the container exits, would it be considered tolerable that the files would always be owned by UID 1234?
No. Basically I’m trying to enable users to package user-space things, which is what they currently do on our systems. An app developer (say, user1) would build up a software stack in their home directory, then snapshot that, and they try to share it with other users. I want user2 to be able to log in and see exactly what user1 built, but as user2.
Basically I want it to be like I called in some build expert, they sat at my terminal and built stuff for me, and then they handed the terminal back to me. This is why I want /applications to be owned by <whoever launched the container>.
If /applications is always owned by 1234, and I share that with someone running on a system where there is no user 1234, then the person using the container wouldn’t be able to write to /applications.
I ask because there are some things I am rather OCD on, and file ownerships and correct permissions are one of them.
I can understand that. But I think that most containers are not well suited to HPC environments because they don’t containerize user space (where HPC people work). They containerize system space. In system space there is a fixed set of known user ids (root, etc.) that it makes sense to freeze in an absolute sense. You’re snapshotting what the vendor did. The vendor is always someone other than the user. But for this use case, the “user” could be lots of different people, and I want to hand my environment off to them and have them use it as themselves.
I don’t think this violates reproducibility — it allows me to reproduce user-space things for different users.
Going off on a related tangent... I think the best way to address this is actually for people not ever to modify containers once they have been bootstrapped. The --writable option is a double-edged sword because it breaks reproducibility. If we can find a way for non-root users to bootstrap, and then integrate something like Spack or EasyBuild into the bootstrap recipe, I think that would be the ideal way to go, but I'm not sure exactly what that integration layer would look like. Thoughts?
I agree in principle. I think people should shoot to make a bootstrap recipe. But like Robin said, I think you want to be able to make the thing writable so that people can iterate in the container environment, THEN make a recipe. Or iterate on a build, THEN snapshot it.
Put differently, I want interactive use so I can debug my setup before I snapshot it for production use. That’s especially important if the container environment is some OS I’m not used to, or some environment I’ve never tried.
-Todd
On Mar 1, 2017, at 11:08 PM, 'Stefan Kombrink' via singularity <singu...@lbl.gov> wrote:
chown is destructive, especially recursively. Is a GID / UID mapping maybe an alternative?
Let's say I could map the calling GID/UID to a well-defined SINGULARITY UID/GID that applies during shelling/execution?
When storing user content in a container, it would then be owned by the SINGULARITY UID/GID.
I usually do not want to preserve the original UID/GID because then another user might have trouble accessing this data.
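A toy model (not Singularity code) of the mapping idea above: rather than chowning files, translate between the calling uid and a fixed, well-known "SINGULARITY" uid at the container boundary, much the way a Linux user-namespace /proc/<pid>/uid_map entry does. The uid value here is made up.

```python
SINGULARITY_UID = 1000  # hypothetical well-known in-container uid

def to_container_uid(host_uid, calling_uid):
    """Files owned by the caller appear as SINGULARITY_UID inside."""
    return SINGULARITY_UID if host_uid == calling_uid else host_uid

def to_host_uid(container_uid, calling_uid):
    """Inverse mapping: the in-container uid maps back to the caller."""
    return calling_uid if container_uid == SINGULARITY_UID else container_uid
```

With such a mapping, user2 (say uid 5678) would see the files user1 (uid 1234) wrote as their own, without any chown ever being run.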
On Thursday, March 2, 2017 at 05:23:48 UTC+1, Gregory M. Kurtzer wrote:
Basically I want it to be like I called in some build expert, they sat at my terminal and built stuff for me, and then they handed the terminal back to me. This is why I want /applications to be owned by <whoever launched the container>.
I can do a non-recursive chown on just the directory `/applications` as a compromise.
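A toy sketch (not Singularity code) contrasting the two approaches: a recursive chown touches every entry in the tree, which is the startup overhead Robin worries about, while the compromise touches exactly one inode.

```python
import os

def chown_recursive(path, uid, gid):
    """O(number of files): chown the directory and everything under it."""
    count = 1
    os.chown(path, uid, gid)
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            os.chown(os.path.join(root, name), uid, gid)
            count += 1
    return count  # number of chown() calls made

def chown_top_only(path, uid, gid):
    """O(1): chown only the mount point itself, e.g. /applications."""
    os.chown(path, uid, gid)
    return 1
```

(As an unprivileged user, chown to your own uid/gid is a permitted no-op, so the sketch can be exercised without root.)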
I ask because there are some things I am rather OCD on, and file ownerships and correct permissions are one of them.
I can understand that. But I think that most containers are not well suited to HPC environments because they don’t containerize user space (where HPC people work). They containerize system space. In system space there is a fixed set of known user ids (root, etc.) that it makes sense to freeze in an absolute sense. You’re snapshotting what the vendor did. The vendor is always someone other than the user. But for this use case, the “user” could be lots of different people, and I want to hand my environment off to them and have them use it as themselves.
I didn't consider the differentiation between user spaces (from the kernel perspective, anything not kernel space is user space), but there is value in doing that as you pointed out. So we have the "system" space which is the non-kernel components of the operating system. Then we have a user's environment (e.g. $HOME and scratch directories). In Singularity terms, $HOME and scratch is shared with the host, but (again, if I am following) you are suggesting another space that kind of sits between the two; user controlled applications that exist within a container, maybe akin to a software module farm?
I don’t think this violates reproducibility — it allows me to reproduce user-space things for different users.
If the application space follows the container, it doesn't violate reproducibility of the container itself, but what about reproducibility of creating that container (e.g. for Singularity Hub, or a build service, or someone who wants to leverage all of your work but make some minor changes to the recipe)? For me, the golden nugget of reproducibility is twofold: one is the container itself, and the other is the bootstrap definition file.