What are the main differences between Podman and Singularity?


Erik Sjölund
Oct 1, 2020, 12:44:36 PM
to singu...@lbl.gov
A few months ago I asked this question to the Podman mailing list:

"What are the main differences between Podman and Singularity? I think
in the academic world Singularity has become quite popular. The PhD
students in my work place build the SIF (Singularity Image Format)
file on their local computer and then copy it to the cluster with the
scp command and run it there. (In some research HPC compute clusters
they have installed Singularity)."

I got some replies in the Podman mailing list but I would be
interested to hear opinions here too (to get a more balanced view of
the topic).

Reference:
https://lists.podman.io/archives/list/pod...@lists.podman.io/thread/6VOBLOTWWNZFCVY4ESACKYACFZ4YHGDB/#6VOBLOTWWNZFCVY4ESACKYACFZ4YHGDB

Best regards,
Erik Sjölund

v
Oct 1, 2020, 2:05:41 PM
to singu...@lbl.gov
Hey Erik,

I think there are two reasons that Singularity still dominates in HPC. The first is obvious: Podman is more recent and sysadmins are (generally) more conservative, so if they have a working solution (Singularity) there isn't a huge incentive to try another technology. If Singularity serves as the container solution for HPC, then users aren't going to be pushing the admins to take any action. This is the current state of the world, a result of actual needs and culture... "if it ain't broken, don't fix it."

But now let's say that we have some group that isn't happy with Singularity, or let's just say that they like Singularity but they really love Podman. My thinking is that many large centers that use parallel filesystems are going to stop reading here (https://github.com/containers/podman/blob/master/troubleshooting.md#14-rootless-podman-build-fails-eperm-on-nfs). E.g.,

Other distributed file systems (for example: Lustre, Spectrum Scale, the General Parallel File System (GPFS)) are also not supported when running in rootless mode as these file systems do not understand user namespace.

I don't remember the details, but I do remember talking to a sysadmin who saw this as an issue. And since Singularity works, why invest the time when there are so many other things to do? But it's been a long time since that evaluation - I think if the Podman developers want to expand into HPC, they might reach out and have discussions with actual admins to get feedback. Most of us on the list are not admins and wouldn't be able to do any testing beyond local machines, which don't have these filesystems.
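
(For what it's worth, a quick way an admin could probe this on their own system - the Lustre path below is only an example:

podman unshare cat /proc/self/uid_map
podman --root /lustre/scratch/$USER/containers run --rm alpine true

The first command shows which subordinate UID ranges rootless Podman is mapping; the second keeps the container storage on the parallel filesystem, which is where the quoted limitation tends to show up.)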

Best,

Vanessa


Oliver Freyermuth
Oct 1, 2020, 2:40:02 PM
to singu...@lbl.gov, v
Dear Vanessa,

On 01.10.20 at 20:05, v wrote:
> Hey Erik,
>
> I think there are two reasons that Singularity still dominates for HPC. The first is obvious - the effect of Podman being more recent and sysadmins (generally) being more conservative means that if they have a working solution (Singularity) there isn't huge incentive to try another technology. If Singularity serves as a container for HPC, then users aren't going to be asking judiciously for the admins to take any action. This is the current state of the world, a result of actual needs and culture... "if it ain't broken, don't fix it."

I fully agree with this point (sysadmin here) ;-). We have been glancing at Podman since it first appeared, but for us the main blocker to evaluating it fully is that our community focuses (almost exclusively) on Singularity,
and there is a lack of good integration with Podman in other tools at the moment, for example HTCondor.

> But now let's say that we have some group that isn't happy with Singularity, or let's just say that they like Singularity but they really love Podman. My thinking is that many large centers that use parallel filesystems are going to stop reading here <https://github.com/containers/podman/blob/master/troubleshooting.md#14-rootless-podman-build-fails-eperm-on-nfs>. E.g.,
>
> Other distributed file systems (for example: Lustre, Spectrum Scale, the General Parallel File System (GPFS)) are also not supported when running in rootless mode as these file systems do not understand user namespace.
>
> I don't remember the details, but I do remember talking to a sysadmin, and he saw this being an issue. And since Singularity works, why invest the time, because there are so many other things to do? But it's been a long time since this evaluation - I think if the Podman developers are wanting to expand / get feedback from HPC centers, they might reach out and have discussions with actual admins to get feedback. (Most of us) on the list are not admins and wouldn't be able to do any kind of testing other than local machines, which don't have these filesystems.

I don't think this point is an issue at all, though: this is a limitation of unprivileged "podman build". If I understand correctly,
Singularity always requires sudo / root privileges when building, unless you use "--fakeroot", which is still comparatively recent.
I have not yet tested "singularity build --fakeroot" on cluster filesystems, but I would expect similar limitations there, and users will likely build their containers locally or on hubs in any case
(where there is either no cluster filesystem or where they are root ;-) ).
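
To make the workflow I have in mind concrete (file and host names are just placeholders):

sudo singularity build myimage.sif myimage.def    # or: singularity build --fakeroot myimage.sif myimage.def
scp myimage.sif user@cluster.example.org:
singularity exec myimage.sif cat /etc/os-release  # on the cluster, unprivileged

The build happens on a local disk where root or --fakeroot is available; only the finished SIF file ever lands on the cluster filesystem.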

Cheers,
Oliver


Rémy Dernat
Oct 6, 2020, 1:30:35 AM
to singu...@lbl.gov, v
Hi,

Thanks for this very interesting thread! Although I am also on the Podman mailing list (!), I did not see the other thread.

I fully agree with Vanessa and Oliver about our "conservative" behaviour. When it works, why try anything else? Podman is moving really fast (Singularity is moving fast too, but not as fast as Podman); for example, they added cgroups v2 support recently, and many older kernels are not compatible with it.

However, I am using Podman for "service"-style jobs. IMO, Singularity is more focused on being a job runtime, meaning the container has a finite lifetime (like udocker, Charliecloud, Shifter...), as opposed to a service, which should run indefinitely, as with Podman or Docker (or Kata Containers, gVisor, or whatever...). That being said, those original limitations are not so true any more. Singularity can indeed run service jobs (with the "instance" keyword); I am using it for a specific web service (with several instances (3), much as I would do with {docker,podman}-compose), but I would not use it in a k8s/swarm environment. When you need a full container-orchestration environment, I think it is better to switch to a fully OCI-compliant solution, which has many resources for a complete setup (k8s dashboard, Portainer, virtual network middleware, etc.). Maybe that is also possible with Singularity now.
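
For anyone curious, the "instance" workflow looks roughly like this (image and instance names are only illustrative):

singularity instance start --bind ./data:/data web.sif web1
singularity instance list
singularity exec instance://web1 ps aux
singularity instance stop web1

Each instance keeps running in the background until it is explicitly stopped, which is what makes the service use case workable.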

Best regards,
Rémy


Priedhorsky, Reid
Oct 6, 2020, 4:10:55 PM
to 'Priedhorsky, Reid' via singularity

> On Oct 1, 2020, at 12:05 PM, v <vso...@gmail.com> wrote:
>
> Other distributed file systems (for example: Lustre, Spectrum Scale, the General Parallel File System (GPFS)) are also not supported when running in rootless mode as these file systems do not understand user namespace.

I don’t think this is true for Lustre at least; we’ve extensively tested permissions enforcement in unprivileged user namespaces and Lustre did everything correctly, just like local filesystems, e.g. tmpfs and ext4. If anyone would like to look over our tests, they are on GitHub [1]. The key files IIRC are examples/chtest/fs_perms.py and test/make-perms-test.py.in. You would run them with bin/ch-test, which has a man page [2].

HTH,
Reid

[1]: https://github.com/hpc/charliecloud
[2]: https://hpc.github.io/charliecloud/command-usage.html#ch-test


he/his

Dave Dykstra
Oct 8, 2020, 2:07:34 PM
to 'Oliver Freyermuth' via singularity
I believe there are fundamental reasons why the grid/HTC community is
focused on Singularity and not Podman, not just that Singularity came
first. Currently Podman has some significant limitations that prevent
it from being usable in the way we want to use it in grid/HTC
environments, which I think is the main reason why HTCondor does not
and cannot support it at this point. The limitations include:

1. It requires the use of a couple of setuid/setgid programs (that is,
newuidmap and newgidmap) in order to do anything, and every user
has to be configured with an additional set of unique subordinate
user ids that they can use to simulate multiple users inside the
container (see the sketch of that configuration after this list).
Perhaps there will be tools one day to make managing this reasonably
easy, but for now this is a significant issue we can't expect system
administrators to accept. I was told by one of the Podman
developers that OCI requires having at least two user ids, one for
root and one for the unprivileged user. Singularity does not
require that.
2. It doesn't work well with the container in a read-only filesystem,
which means we can't easily distribute containers in CVMFS. I
think there were plans to support that sometime, but I haven't
seen it yet, and there are more issues with containers in CVMFS
(see reasons 3 & 4).
3. It requires the root filesystem to be writable, so it depends on
fuse-overlayfs. This is a performance impact for all file accesses
that I doubt will be acceptable to users when there's a solid
alternative (Singularity) that does not require this overhead.
4. It assumes that it manages the containers; it does not make it easy
to run a container from an arbitrary path. There's supposed to be
a way to do it, and a Podman developer tried to help me, but I ran
into an error. He thought it was likely due to the fact that I
wanted the process to start up as the unprivileged user rather than
as the fake root user, which it is supposed to support as an option
(--userns=keep-id). At that point, because of all the other issues,
I gave up.
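
(For reference, the per-user configuration mentioned in point 1 lives in /etc/subuid and /etc/subgid; the ranges below are only an example:

alice:100000:65536
bob:165536:65536

Each user needs a unique, non-overlapping block of subordinate IDs, which the setuid newuidmap/newgidmap helpers then map into the container's user namespace.)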

Podman is designed as a Docker replacement; it is not intended for the
same use cases that Singularity is designed for. It would take
significant effort to make it fit the HTC use case, where we want
special unprivileged users (pilot jobs) to run unprivileged but
isolated jobs on behalf of multiple other users.

I do not know for sure whether or not it would fit better into HPC use
cases. However, in my opinion one of the biggest reasons why
Singularity took off so fast in HPC centers was the fact that it
supports a monolithic container image file that it mounts as a loopback
filesystem. This moves all the metadata operations for the tons of
small files onto the compute node rather than the filesystem's metadata
server, which is a huge performance win. HPC admins demonstrated better
performance with Singularity than with unpacked files on the shared
filesystem for this reason. Podman does not support monolithic
container images; it only supports unpacked filesystems, and I think
this limitation alone will block it from being accepted by HPC centers.
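
(To make the contrast concrete, an illustrative SIF workflow with made-up names:

singularity pull myapp.sif docker://python:3.9-slim
singularity exec myapp.sif python3 --version

The whole image is a single file on the shared filesystem; the thousands of small files inside it are only ever handled by the kernel on the compute node that loopback-mounts it.)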

By the way, not coincidentally I think, CVMFS has the same key feature
as loopback-mounted container images of moving metadata operations to
the compute node and away from the file server. This is an important
reason (along with local caching) why it scales so well over long
distances.

Dave

David Trudgian
Oct 8, 2020, 3:50:52 PM
to singularity, reidpr
Hi Reid,
 
> > Other distributed file systems (for example: Lustre, Spectrum Scale, the General Parallel File System (GPFS)) are also not supported when running in rootless mode as these file systems do not understand user namespace.
>
> I don't think this is true for Lustre at least; we've extensively tested permissions enforcement in unprivileged user namespaces and Lustre did everything correctly, just like local filesystems, e.g. tmpfs and ext4. If anyone would like to look over our tests, they are on GitHub [1]. The key files IIRC are examples/chtest/fs_perms.py and test/make-perms-test.py.in. You would run them with bin/ch-test, which has a man page [2].

The case we see that doesn't work (on Lustre as well as on NFS, GPFS, etc.) is articulated nicely in this Red Hat article:

https://www.redhat.com/sysadmin/rootless-podman-nfs#:~:text=The%20NFS%20protocol%20has%20no,into%20the%20same%20user%20namespace.

Generally non-root things work in a container on these file systems when you are using a user namespace. However, when you become a 'fake root' user in the container through a subuid/subgid configuration (you are unpriv on the host still, but effectively root in the container), you cannot e.g. `chown` files in the container. That relies on knowledge about the subuid mappings and capabilities on the local host that the file server does not have. The upshot is that things like package installs as the 'fake' root in your user namespace container fail if the container is on a network FS, whereas they work with the container on a local filesystem.
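
A minimal way to see this for yourself (the path is just an example of an NFS/Lustre-backed home):

podman unshare touch /nfs/home/$USER/testfile
podman unshare chown 1:1 /nfs/home/$USER/testfile

The chown typically fails with an EPERM-style "Operation not permitted" error on the network filesystem, because the file server does not know that UID 1 in the user namespace corresponds to one of your subuids, while the same commands succeed on a local ext4 home.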

It would definitely be interesting, though, if you are testing this type of scenario and see it working, in case support has been added to some network filesystems somehow.

DT
 

Priedhorsky, Reid
Oct 8, 2020, 6:15:09 PM
to singularity

On Oct 8, 2020, at 1:50 PM, David Trudgian <dtr...@sylabs.io> wrote:

> Hi Reid,
>
> > > Other distributed file systems (for example: Lustre, Spectrum Scale, the General Parallel File System (GPFS)) are also not supported when running in rootless mode as these file systems do not understand user namespace.
> >
> > I don't think this is true for Lustre at least; we've extensively tested permissions enforcement in unprivileged user namespaces and Lustre did everything correctly, just like local filesystems, e.g. tmpfs and ext4. If anyone would like to look over our tests, they are on GitHub [1]. The key files IIRC are examples/chtest/fs_perms.py and test/make-perms-test.py.in. You would run them with bin/ch-test, which has a man page [2].
>
> The case we see that doesn't work (on Lustre as well as on NFS, GPFS, etc.) is articulated nicely in this Red Hat article:
>
> https://www.redhat.com/sysadmin/rootless-podman-nfs#:~:text=The%20NFS%20protocol%20has%20no,into%20the%20same%20user%20namespace.
>
> Generally non-root things work in a container on these file systems when you are using a user namespace. However, when you become a 'fake root' user in the container through a subuid/subgid configuration (you are unpriv on the host still, but effectively root in the container)

Aha, as soon as you add the subuid/subgid stuff, it is not fully unprivileged any more. You have to use a privileged process to set up the namespaces; newuidmap and newgidmap are setuid root, or possibly setcap. We have not tested this configuration, but it makes me nervous for a lot of reasons.

I was referring to fully unprivileged user namespaces, i.e., where the namespaces are set up by an unprivileged user. In this case you only get one UID and one GID inside the user namespace, which can be UID 0 and/or GID 0 if you want. In this configuration, permissions enforcement is fine on all the file systems we tested.
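
As a concrete example of what I mean by fully unprivileged (no setuid helpers involved), something along these lines with util-linux:

unshare --user --map-root-user sh -c 'id; cat /proc/self/uid_map'

reports uid=0(root) inside the namespace, but the uid_map shows that only your own single UID is mapped into it.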

HTH,
Reid

he/his

Erik Sjölund
Oct 11, 2021, 2:17:05 PM
to singu...@lbl.gov
Thanks for all the feedback! Sorry for the delayed answer.
Dave, the situation around points 1, 2, 3 and 4 may have changed since last year.
Podman has gained some new features lately, so I ran a test to see
whether it could now address those points. The test was done with Podman built from the main branch.

First an Alpine container image was pulled and a tar archive of it was
created. The tar archive was then unpacked to an arbitrary path
(/home/testpodman/myrootfs).

After that, the setuid programs were hidden under new paths and a
read-only bind mount of /home/testpodman/myrootfs was set up:

mv /usr/bin/newuidmap /usr/bin/newuidmap.moved
mv /usr/bin/newgidmap /usr/bin/newgidmap.moved
mount --bind -o ro /home/testpodman/myrootfs /home/testpodman/myrootfs_readonly/

The lines for the user testpodman in /etc/subuid and /etc/subgid were
commented out.

The test user (testpodman) was then able to run the Alpine container.

I tried running with user=root and group=root inside the container:

podman run --security-opt label=disable -it --user 0:0 --uidmap 0:0:1 --gidmap 0:0:1 --rootfs myrootfs_readonly/:O sh

and running with user=guest (uid=405) and group=users (gid=100) inside the container:

podman run --security-opt label=disable -it --user 405:100 --uidmap 405:0:1 --gidmap 100:0:1 --sysctl net.ipv4.ping_group_range="100 100" --rootfs myrootfs_readonly/:O sh

(I had to add "--security-opt label=disable" to make it work. Podman
then runs without SELinux protection, but I hope to find a way to avoid
that.)

See the attached file for a full version of the terminal session.

Yes, I agree that fuse-overlayfs comes with a performance penalty, but
fuse-overlayfs is not needed anymore. Podman can now use the kernel's
native overlay filesystem rootlessly when running on a recent Linux kernel.
See https://www.redhat.com/sysadmin/podman-rootless-overlay
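
To check which implementation is in use on a given host, something like this should work:

podman info --format '{{.Store.GraphDriverName}}'
podman info | grep -i overlay

The first command prints the storage driver name; the full "podman info" output also shows whether a mount_program such as fuse-overlayfs is configured for the overlay driver.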

To me it seems Singularity and Podman are becoming more similar as time
goes by. At my work we use both Singularity and Podman, so it's quite
interesting to compare them.
Singularity's ability to run a container from a single file (a SIF
image) is very useful; Podman does not have that feature.

Best regards,
Erik Sjölund
terminal_session.txt

Dave Dykstra
Oct 12, 2021, 12:18:41 PM
to singu...@lbl.gov
That sounds promising for when all those things make it into the
mainstream, Erik. How does it manage to have multiple user & group IDs
without newuidmap/newgidmap? Or is it running with only one UID? I
thought OCI didn't allow that.

The rootless overlayfs within a namespace will be helpful. I expect
it will get into RHEL 8 someday; is there any word on that?

Do you know how many of the other features are in the standard
package available for RHEL 8?

Dave

> [root@localhost ~]# useradd testpodman
> [root@localhost ~]# machinectl shell testpodman@
> Connected to the local host. Press ^] three times within 1s to exit session.
> [testpodman@localhost ~]$ podman pull alpine
> Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
> Trying to pull docker.io/library/alpine:latest...
> Getting image source signatures
> Copying blob a0d0a0d46f8b done
> Copying config 14119a10ab done
> Writing manifest to image destination
> Storing signatures
> 14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
> [testpodman@localhost ~]$ podman export $(podman create alpine) --output alpine.tar
> [testpodman@localhost ~]$ mkdir myrootfs
> [testpodman@localhost ~]$ mkdir myrootfs_readonly
> [testpodman@localhost ~]$ tar -xf alpine.tar -C myrootfs
> [testpodman@localhost ~]$ exit
> [root@localhost ~]# mv /usr/bin/newuidmap /usr/bin/newuidmap.moved
> [root@localhost ~]# mv /usr/bin/newgidmap /usr/bin/newgidmap.moved
> [root@localhost ~]# # comment out the lines for testpodman in /etc/subuid and /etc/subgid
> [root@localhost ~]# sed -i s/testpodman/#testpodman/g /etc/subuid
> [root@localhost ~]# sed -i s/testpodman/#testpodman/g /etc/subgid
> [root@localhost ~]# grep testpodman /etc/sub*id
> /etc/subgid:#testpodman:362144:65536
> /etc/subuid:#testpodman:362144:65536
> [root@localhost ~]# mount --bind -o ro /home/testpodman/myrootfs /home/testpodman/myrootfs_readonly/
> [root@localhost ~]# machinectl shell testpodman@
> Connected to the local host. Press ^] three times within 1s to exit session.
> [testpodman@localhost ~]$ podman run --security-opt label=disable -it --user 0:0 --uidmap 0:0:1 --gidmap 0:0:1 --rootfs myrootfs_readonly/:O sh
> / # echo > /tmp/file.txt
> / # ls -l /tmp/file.txt
> -rw-r--r-- 1 root root 1 Oct 11 17:56 /tmp/file.txt
> / # ls -ln /tmp/file.txt
> -rw-r--r-- 1 0 0 1 Oct 11 17:56 /tmp/file.txt
> / # exit
> [testpodman@localhost ~]$ podman run --security-opt label=disable -it --user 0:0 --uidmap 0:0:1 --gidmap 0:0:1 --rootfs myrootfs_readonly/:O cat /etc/passwd | grep 405
> guest:x:405:100:guest:/dev/null:/sbin/nologin
> [testpodman@localhost ~]$ podman run --security-opt label=disable -it --user 405:100 --uidmap 405:0:1 --gidmap 100:0:1 --sysctl net.ipv4.ping_group_range="100 100" --rootfs myrootfs_readonly/:O sh
> / $ echo hello > /tmp/file.txt
> / $ ls -l /tmp/file.txt
> -rw-r--r-- 1 guest users 6 Oct 11 17:33 /tmp/file.txt
> / $ ls -ln /tmp/file.txt
> -rw-r--r-- 1 405 100 6 Oct 11 17:33 /tmp/file.txt
> / $
> [testpodman@localhost ~]$ podman version
> Version: 4.0.0-dev
> API Version: 4.0.0-dev
> Go Version: go1.16.7
> Git Commit: bd4d9a09520b2329b1cf3dd8cdf8194b8bdeab67
> Built: Mon Oct 11 18:15:21 2021
> OS/Arch: linux/amd64
> [testpodman@localhost ~]$ cat /etc/fedora-release
> Fedora release 34 (Thirty Four)
> [testpodman@localhost ~]$

Erik Sjölund
Oct 12, 2021, 3:46:07 PM
to singu...@lbl.gov
The commands I used only use one UID. That is achieved by using the
command-line option --uidmap.
(The normal way to run Podman is to use multiple UIDs with the help of
newuidmap)
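
To illustrate how the mapping behaves (a sketch, reusing the rootfs from my test above):

podman run --security-opt label=disable --user 405:100 --uidmap 405:0:1 --gidmap 100:0:1 --rootfs myrootfs_readonly/:O cat /proc/self/uid_map

This should print a single line, "405 0 1": container UID 405 maps to offset 0 of the rootless user's own ID, with length 1, so only one real UID is ever involved.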

Rootless overlay is expected in RHEL 8.5
https://github.com/containers/podman/issues/11859#issuecomment-937099561

RHEL 8.5 has not yet been released but there is a beta version:
https://www.redhat.com/en/blog/red-hat-enterprise-linux-85-beta-now-available

The feature to use --rootfs together with ":O" (overlay) is only available in
the Git main branch and is not yet in any Podman release, but there are
other features (configuration parameters) that can be helpful for
rootless usage: ignore_chown_errors, rootless_storage_path and
additionalimagestores.
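
Roughly, those parameters live in the per-user containers storage configuration. This is only a sketch (see containers-storage.conf(5) for the authoritative layout; the paths are just examples):

# ~/.config/containers/storage.conf
[storage]
driver = "overlay"
rootless_storage_path = "/tmp/$USER/containers/storage"

[storage.options]
additionalimagestores = [ "/cvmfs/example.org/containers/storage" ]

[storage.options.overlay]
ignore_chown_errors = "true"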

Erik