sandbox container with system symlinks


Thomas Hartmann

Jul 1, 2021, 11:43:05 AM
to singu...@lbl.gov
Hi all,

I am running into a problem with a sandbox container I just built that
seems to break portability.
That is, after building with 3.8.0-1.fc33, I can shell into the
container without problems.

However, after copying the whole sandbox tree to another instance
running 3.7.4-1.el7 (targeting CVMFS deployment), all action calls fail with [1].

To rule out files/links/... being missed during copying, I already
tried tarring the sandbox dir, copying the tarball, and untarring it -
without success.
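For reference, a minimal sketch of the archive/unarchive round trip (paths and file names below are made up for the demo; for a real sandbox you would run tar as root over the sandbox directory itself):

```shell
set -e
# Scratch tree standing in for the real sandbox:
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sandbox.d/bin"
echo hello > "$src/sandbox.d/bin/tool"
chmod 0755 "$src/sandbox.d/bin/tool"
# -p keeps permissions; --numeric-owner avoids uid/gid remapping between
# hosts whose passwd/group files differ (run as root for a real sandbox).
tar -cpf "$src/sandbox.tar" --numeric-owner -C "$src" sandbox.d
tar -xpf "$src/sandbox.tar" --numeric-owner -C "$dst"
stat -c '%a' "$dst/sandbox.d/bin/tool"   # mode survived the round trip
```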

A sif image of the same recipe works without issues.

An ad hoc suspicion is that it might be related to system symlinks -
e.g., in the sandbox, /var contains symlinks to /run etc. (the container
is based on an Ubuntu base image).
I am not sure how these symlinks are resolved in the container's
context (--contain/--containall does not change the behaviour).

Cheers,
Thomas


[1]
> singularity --verbose shell /var/tmp/tmp.d/
...
VERBOSE: Not updating passwd/group files, running as root!
VERBOSE: /root found within container
VERBOSE: rpc server exited with status 0
VERBOSE: Execute stage 2
FATAL: exec /bin/bash failed: a shared library is likely missing in
the image

[2]
> ls -all tmp.d/var/
...
lrwxrwxrwx 1 hartmath root 9 Jun 29 07:02 lock -> /run/lock
...
lrwxrwxrwx 1 hartmath root 4 Jun 29 07:02 run -> /run
...

> stat tmp.d/var/run
File: tmp.d/var/run -> /run
Size: 4 Blocks: 0 IO Block: 4096 symbolic link
Device: fd02h/64770d Inode: 11164988 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 1000/hartmath) Gid: ( 0/ root)
Access: 2021-07-01 16:12:11.290508900 +0200
Modify: 2021-06-29 07:02:32.000000000 +0200
Change: 2021-07-01 16:10:17.989151325 +0200
Birth: 2021-07-01 16:07:59.481936672 +0200
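As a side note on the symlink suspicion: an absolute symlink stored in a sandbox resolves against the host root when followed from outside, but against the container root once the runtime has entered the sandbox. A tiny sketch, using a throwaway directory in place of the sandbox:

```shell
set -e
# Throwaway directory standing in for the sandbox tree:
box=$(mktemp -d)
mkdir -p "$box/var"
ln -s /run "$box/var/run"
# From the host, the absolute target is resolved against the *host* root:
readlink "$box/var/run"      # the stored target: /run
readlink -f "$box/var/run"   # follows it to the host's /run
# Inside the container the runtime pivots into the sandbox, so the same
# link resolves relative to the container root instead.
```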

Dave Dykstra

Jul 1, 2021, 1:24:28 PM
to singu...@lbl.gov
Hi Thomas,

Can you please give a complete set of steps for someone else to
reproduce the problem?

Dave
> --
> You received this message because you are subscribed to the Google Groups "singularity" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to singularity...@lbl.gov.
> To view this discussion on the web visit https://groups.google.com/a/lbl.gov/d/msgid/singularity/2bba4314-254d-079b-6e9d-f7bc57ad2b79%40desy.de.

Steffen Bollmann

Jul 2, 2021, 1:31:45 AM
to singu...@lbl.gov
Dear Thomas,

I am deploying singularity containers on CVMFS as well and haven't seen the problem you mention. However, I start from Docker containers and import them via the CVMFS DUCC tool: first I generate my wishlist (https://github.com/NeuroDesk/neurodesk/blob/master/gen_cvmfs_wishlist.sh) and then import it on the CVMFS stratum 0 with "cvmfs_ducc convert recipe_neurodesk_auto.yaml". Maybe this could be a work-around for you - I am not sure whether DUCC does anything special during the conversion to make this work.

Kind regards
Steffen 

Thomas Hartmann

Jul 2, 2021, 4:10:32 AM
to singu...@lbl.gov, Dave Dykstra
Hi Dave,

it seems that I shot myself in the foot...

I built the attached recipe (it is not overly complex).
However, I have been using a small wrapper for building, which
afterwards chowns everything in the container directory tree to my
local user [1] (I wrote it quite some time ago to 'fix' an issue).

I just built it again directly and tarred/copied the container without
any re-owning etc. (i.e., skipping files not readable to my user but
only to wheel) - and I do not encounter the issue...

...I guess that my re-owning might have screwed up the portability. I
have to check in detail what happened - hard links from the Docker
layer tarballs should only be preserved within each layer tarball, I
think. But I am unsure how the resulting hard/symlinks end up looking
in the assembled sandbox directory tree.
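A quick way to see what the re-owning could have touched is to list the multiply-linked files in the sandbox, and to check that a plain tar round trip keeps hard links intact (the sandbox path in the comment is an example; the runnable demo uses a scratch tree):

```shell
set -e
# List hard-linked files in a sandbox (link count, inode, path):
#   find ffmpeg.d/ -type f -links +1 -printf '%n %i %p\n' | sort -k2
# Scratch demo: GNU tar stores later occurrences of a hard link as link
# entries, so both names share one inode again after extraction.
d=$(mktemp -d)
mkdir "$d/tree"
echo data > "$d/tree/a"
ln "$d/tree/a" "$d/tree/b"           # hard link: same inode as a
tar -cf "$d/tree.tar" -C "$d" tree
mkdir "$d/out"
tar -xf "$d/tree.tar" -C "$d/out"
# Same inode number => the link survived the round trip:
stat -c '%i' "$d/out/tree/a" "$d/out/tree/b"
```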

btw: untarring the tarball for deployment on CVMFS throws some errors
like [2] - but AFAIS the sandbox looks fine and publishing throws no
errors (various hard links getting broken).

Cheers,
Thomas

[1] effectively
> sudo singularity build --sandbox ffmpeg.d Singularity
chown -R hartlocal ffmpeg.d
chmod -R a+r ffmpeg.d/
find ffmpeg.d/ -type d -exec chmod 0755 {} \;
tar -cvf ffmpeg.d.tar ffmpeg.d
# tar @ 1.32 on build; tar @ 1.26 on cvmfs


[2]
[grid@grid-cvmfs]/cvmfs/grid.desy.de/container/releases/ffmpeg/20210701%
tar -xvf ffmpeg.d.tar
...
tar: ffmpeg.d/usr/lib/x86_64-linux-gnu/perl/cross-config-5.26.1:
Directory renamed before its status could be extracted
tar: ffmpeg.d/usr/lib/apt/planners: Directory renamed before its status
could be extracted
tar: ffmpeg.d/usr/lib/python3.6: Directory renamed before its status
could be extracted
tar: ffmpeg.d/etc/systemd/system/timers.target.wants: Directory renamed
before its status could be extracted
tar: ffmpeg.d/etc/systemd/system: Directory renamed before its status
could be extracted
tar: ffmpeg.d/etc/systemd: Directory renamed before its status could be
extracted
tar: ffmpeg.d/etc/alternatives: Directory renamed before its status
could be extracted
tar: ffmpeg.d/etc/ssl/certs: Directory renamed before its status could
be extracted
tar: ffmpeg.d/etc/ssl: Directory renamed before its status could be
extracted
tar: ffmpeg.d/etc/rcS.d: Directory renamed before its status could be
extracted
tar: ffmpeg.d/etc: Directory renamed before its status could be extracted
tar: Exiting with failure status due to previous errors
Singularity

Oliver Freyermuth

Jul 2, 2021, 8:51:14 AM
to singu...@lbl.gov, Thomas Hartmann, Dave Dykstra
Hi Thomas,

Am 02.07.21 um 10:10 schrieb Thomas Hartmann:
> Hi Dave,
>
> it seems, that I shot myself in the foot...
>
> I build the attached recipe (not overly complex).
> However, I have been using a small wrapper for building, where I afterwards own everything in the container directory tree to my local user [1] (wrote it quite some time ago to 'fix' an issue)
>
> I just build it directly again and tar'ed/copied the container without any re-owning etc. (i.e., skipping files not readable to my user but only wheel) and I do not encounter the issue...
>
> ...I guess, that my re-owning might have screwed up the portability. I have to check in detail, what happened - hard-links from the Docker layer tarballs should only be preserved within each layer tarballs, I think. But I am unsure, how the resulting hard/symlinks look like in the assembled sandbox directory tree.
>
> btw: un'taring the tarball for deploying on CVMFS throws some errors like [2] - but AFAIS the sandbox looks fine and publishing throws no errors (various hard links getting broken)

this is a known problem with "old" enterprise Linux kernels and OverlayFS (used by CVMFS during deployment).
I believe it is related to inode numbers not being constant; see e.g. these issues / discussions:
- https://lwn.net/Articles/721470/
- https://github.com/docker/hub-feedback/issues/727
- https://github.com/moby/moby/issues/19647
To avoid this misleading warning, we've switched to bsdtar, which is less sensitive in this regard.

Another workaround would be to use CVMFS tarball ingestion ("cvmfs_server ingest"; we still have to switch to that in production ourselves, but it's ideal for such use cases).
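A sketch of both workarounds as command fragments (repository name and target path are taken from the paths earlier in the thread and may need adjusting for your setup):

```shell
# Extract with bsdtar, which does not track directory inode identity the
# way GNU tar does, and so avoids the "Directory renamed before its
# status could be extracted" warnings on OverlayFS-backed kernels:
#   bsdtar -xf ffmpeg.d.tar \
#       -C /cvmfs/grid.desy.de/container/releases/ffmpeg/20210701
#
# Or skip the manual untar and let CVMFS ingest the tarball directly on
# the stratum 0 (check `cvmfs_server ingest --help` for the exact flags
# of your CVMFS version):
#   cvmfs_server ingest --tar_file ffmpeg.d.tar \
#       --base_dir container/releases/ffmpeg/20210701 grid.desy.de
```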

Cheers,
Oliver

Dave Dykstra

Jul 2, 2021, 11:09:56 AM
to Thomas Hartmann, singu...@lbl.gov
Ok, it's good you figured that out. That happens around half the time
when I ask people to come up with reproducing instructions.

Dave
> Bootstrap: docker
> From: linuxserver/ffmpeg
>
> %labels
> MAINTAINER Thomas Hartmann @ DESY
> VERSION v20210701
>
> %help
> Foo Bar Documentation missing doing later
>
> %post
> apt-get update
> apt-get -y install git python3-json-tricks python3-ujson python3.8 python3.7 python3-simplejson python-git python3-argh python-github python3-git git-lfs

