how to source a script and remain in the bash shell of the container


Asoka De Silva

unread,
Dec 10, 2017, 7:13:26 PM12/10/17
to singularity
Hi,

$ singularity --version
2.3.1-dist



will result in an interactive bash shell inside a Singularity container. What I would like to do is source a script automatically, i.e. run these lines, and then continue with the interactive shell:

cat mySetup.sh

if [ -z $ATLAS_LOCAL_ROOT_BASE ]; then
  export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
fi
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh



Is there a way to do this and continue with the interactive bash shell environment? I tried


and also


but it sources the mySetup.sh script and then exits the container.


(The container images are not user modifiable.)


Thanks in advance for any help !

regards,
Asoka

v

unread,
Dec 10, 2017, 7:18:34 PM12/10/17
to singu...@lbl.gov
Hey Asoka,

Have you tried including your source lines in the %environment section? That will be sourced when you shell / run / exec the container. If you need it to be specific to some context only (and not sourced for every shell), you could use a SCI-F app to do it, e.g.:

%appenv mycontext

(write code here)

and then when you run/shell

           # will source the environment above
singularity run --app mycontext container.simg

           # will not
singularity run container.simg
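
For instance, a recipe could look something like the following sketch (the base image, app name, and ATLAS paths here are only illustrative, borrowed from your setup script; in 2.x the %environment and %appenv contents are sourced at runtime):

```
Bootstrap: docker
From: centos:6

%environment
    # sourced for every run / shell / exec of the container
    export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase

%appenv mycontext
    # sourced only when --app mycontext is given
    export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
    source $ATLAS_LOCAL_ROOT_BASE/user/atlasLocalSetup.sh
```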

Best,

Vanessa

--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity+unsubscribe@lbl.gov.



--
Vanessa Villamia Sochat
Stanford University '16

Asoka De Silva

unread,
Dec 10, 2017, 7:27:27 PM12/10/17
to singu...@lbl.gov
Hi Vanessa,

Thanks for the super fast reply !

This is a pre-built image and all I can do is run it. Sorry for the naive question (I'm new at this), but how do I add to the %environment or %appenv section, or is there a way to override them?

Thanks !

regards,
Asoka


v

unread,
Dec 10, 2017, 7:39:01 PM12/10/17
to singu...@lbl.gov
Hey Asoka,

If you are just using shell, is there any reason you can't source it after entering the container? You could also put something in a bashrc or profile that is used from your home, if you don't want to do that. We can definitely think of other ways, but arguably the best (and most reproducible) way is to get the build recipe (the %environment section I mentioned lives there) and modify it. Otherwise, if someone finds your container and needs to do what you did, they will have a hard time.

It could also be that ATLAS_LOCAL_ROOT_BASE isn't being found, so it's not sourcing at all. You can also pass it into the container like ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase singularity shell...

The way to test that the variable is getting in is to do something like:

 singularity exec container.simg env | grep ATLAS_LOCAL_ROOT_BASE

and then do the same for the script, maybe use cat to look at it.

 singularity exec container.simg cat ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh

I'm also wondering why the container is exiting if you are shelling. It would be logical for it to exit on exec or run, but probably not shell. If the sourced script runs cleanly, it should not technically exit. I would check 1) whether there is any exit logic in the sourced script, and 2) whether you are in fact still inside the container.

Best,

Vanessa


Asoka De Silva

unread,
Dec 10, 2017, 8:07:37 PM12/10/17
to singu...@lbl.gov
Hi Vanessa,

I am trying to extend the container for other users / use cases, so I cannot modify .bashrc / profile or even manually enter it, since I am considering making the startup of the container transparent to the user. The end result of this is to provide users with an ATLAS Tier3 environment. I would like to avoid rebuilding it, but as a very last resort I can ask if the container developers can build future versions with %appenv.

(ATLAS_LOCAL_ROOT_BASE should not be passed to the container unless the container does not define it, but this does not seem to be the issue, as the sourcing would fail if it did not exist.) If you have /cvmfs available, you can try it:


e.g.

[desilva@cdr818 ~]$ singularity shell  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img             
Singularity: Invoking an interactive shell within container...

bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
Singularity.x86_64-centos6.img> exit
exit


Notice the prompt shows that you are in the bash environment prior to exit.

[desilva@cdr818 ~]$ singularity shell  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img < mySetup.sh
Singularity: Invoking an interactive shell within container...

lsetup               lsetup <tool1> [ <tool2> ...] (see lsetup -h):
 lsetup agis          ATLAS Grid Information System
 lsetup asetup        (or asetup) to setup an Athena release
 lsetup atlantis      Atlantis: event display
 lsetup eiclient      Event Index 
 lsetup emi           EMI: grid middleware user interface 
 lsetup fax           Federated XRootD data storage access (FAX)
 lsetup ganga         Ganga: job definition and management client
 lsetup lcgenv        lcgenv: setup tools from cvmfs SFT repository
 lsetup panda         Panda: Production ANd Distributed Analysis
 lsetup pod           Proof-on-Demand (obsolete)
 lsetup pyami         pyAMI: ATLAS Metadata Interface python client
 lsetup rcsetup       (or rcSetup) to setup an ASG release
 lsetup root          ROOT data processing framework
 lsetup rucio         distributed data management system client
 lsetup sft           setup tools from SFT repo (use lcgenv instead)
 lsetup xcache        XRootD local proxy cache
 lsetup xrootd        XRootD data access
advancedTools        advanced tools menu
diagnostics          diagnostic tools menu
helpMe               more help
printMenu            show this menu
showVersions         show versions of installed software
[desilva@cdr818 ~]$ 

and, as shown above, it exits.


In fact, make it a very simple script that just does 

[desilva@cedar5 ~]$ cat -v hello.sh
echo "hello"


[desilva@cedar5 ~]$ 




and it will do:

[desilva@cdr818 ~]$ singularity shell  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img < ./hello.sh
Singularity: Invoking an interactive shell within container...

hello
[desilva@cdr818 ~]$ 


i.e. it exits.
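
(The exit at end of input is ordinary bash behavior with redirected stdin, no container involved: the shell consumes the script and terminates at end-of-file, so no interactive prompt follows. A minimal sketch, outside Singularity:)

```shell
# bash with stdin redirected from a file runs the commands and then
# exits at EOF; there is no interactive prompt afterwards.
script=$(mktemp)
printf 'echo hello\n' > "$script"
bash < "$script"    # prints "hello", then this bash exits
rm -f "$script"
```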

regards,
Asoka





v

unread,
Dec 10, 2017, 8:37:40 PM12/10/17
to singu...@lbl.gov
> Hi Vanessa,
>
> I am trying to extend the container for other users / use cases, so I cannot modify .bashrc / profile or even manually enter it, since I am considering making the startup of the container transparent to the user. The end result of this is to provide users with an ATLAS Tier3 environment. I would like to avoid rebuilding it, but as a very last resort I can ask if the container developers can build future versions with %appenv.

Ah, understood. You can also bootstrap their container (use it as a base) and add your custom code to it; here are the docs:

 
> (ATLAS_LOCAL_ROOT_BASE should not be passed to the container unless the container does not define it, but this does not seem to be the issue, as the sourcing would fail if it did not exist.) If you have /cvmfs available, you can try it:

I don't have it, unfortunately :( 
It sounds like the script must exit and thus exit the shell. Just for clarity of information: can you look at the runscript and environment? E.g.:


singularity inspect -e container.simg 
singularity inspect -r container.simg 

What those scripts are looking at:

singularity exec container.simg cat /.singularity.d/runscript
#!/bin/sh

exec "/bin/bash"

singularity exec container.simg cat /.singularity.d/env/90-environment.sh


 
> In fact, make it a very simple script that just does:
>
> [desilva@cedar5 ~]$ cat -v hello.sh
> echo "hello"
>
> and it will do:
>
> [desilva@cdr818 ~]$ singularity shell  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img < ./hello.sh
> Singularity: Invoking an interactive shell within container...
>
> hello
> [desilva@cdr818 ~]$
>
> i.e. it exits.

Yeah, that is wonky! I think the best thing to do is start by walking through the pieces that are called. Since we don't have the recipe, the inspection should be a good start. I'm going off to bed, but likely others can pipe in, and I can help more tomorrow if needed.

Best,

Vanessa
 

Asoka De Silva

unread,
Dec 10, 2017, 9:04:38 PM12/10/17
to singu...@lbl.gov
Hi Vanessa,

Thanks, but bootstrapping is not going to be optimal here, as we don't want every user / instance doing that. It would be nicer and easier to just ask the maintainer to include the %appenv context you referred to earlier.

(I tried singularity version 2.4.1 but no luck.)  

As to your queries, I was not successful in inspecting the image, and the directory structure looks different (no /.singularity.d dir).

Here are the results:

[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1 inspect -e /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img
ERROR: The Singularity metadata directory does not exist in image
ABORT: Aborting with RETVAL=255

[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1 inspect -r /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img
ERROR: The Singularity metadata directory does not exist in image
ABORT: Aborting with RETVAL=255

[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1  exec  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img cat /.singularity.d/runscript
cat: /.singularity.d/runscript: No such file or directory

[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1  exec  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img ls  /.singularity.d/         
ls: cannot access /.singularity.d/: No such file or directory

In fact:
[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1  exec  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img ls -a  /
. bin  etc lost+found  opt      selinux   usr
.. creation-date-2017101101  gpfs media     proc     singularity  var
.exec cvmfs  home misc     root     srv
.run dev  lib mnt     sbin     sys
.shell environment  lib64  net     scratch  tmp


but:



[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1  exec  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img cat   /.run
#!/bin/bash
. /environment
if test -x /singularity; then
    exec /singularity "$@"
else
    echo "No Singularity runscript found, executing /bin/sh"
    exec /bin/sh "$@"
fi


[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1  exec  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img cat   /.exec
#!/bin/bash
. /environment
exec "$@"


[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1  exec  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img cat   /.shell
#!/bin/bash
. /environment
if test -n "$SHELL" -a -x "$SHELL"; then
    exec "$SHELL" "$@"
else
    echo "ERROR: Shell does not exist in container: $SHELL" 1>&2
    echo "ERROR: Using /bin/sh instead..." 1>&2
fi
if test -x /bin/sh; then
    SHELL=/bin/sh
    export SHELL
    exec /bin/sh "$@"
else
    echo "ERROR: /bin/sh does not exist in container" 1>&2
fi
exit 1



[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1  exec  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img cat   /environment
# Define any environment init code here

if test -z "$SINGULARITY_INIT"; then
    PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
    PS1="Singularity.$SINGULARITY_CONTAINER> $PS1"
    SINGULARITY_INIT=1
    export PATH PS1 SINGULARITY_INIT
fi


A quick test shows that the /.shell file will be problematic.

e.g. if I run it interactively:

/.shell
  will produce a new shell environment

/.shell < hello.sh
  will say hello and exit to the caller.

I guess I have to contact the maintainer to fix this.


Many thanks for all your help !


regards,
Asoka


Dave Dykstra

unread,
Dec 10, 2017, 10:15:43 PM12/10/17
to singu...@lbl.gov
Hi Asoka,

You can do it by binding-in a directory from the outside, and executing
a script you've placed there. E.g.

$ mkdir $HOME/root/home
$ echo ". /cvmfs/cms.cern.ch/cmsset_default.sh" >$HOME/root/home/cms-bash
$ echo "exec bash" >>$HOME/root/home/cms-bash
$ chmod +x $HOME/root/home/cms-bash
$ singularity exec -C -H $HOME/root/home:/srv -B /cvmfs /cvmfs/cernvm-prod.cern.ch/cvm3 ./cms-bash
WARNING: Container does not have an exec helper script, calling './cms-bash' directly
bash-4.1$ echo $PATH
PATH=/cvmfs/cms.cern.ch/common:/cvmfs/cms.cern.ch/bin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
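
A local sketch of the same pattern (the paths, script names, and DEMO_VAR here are placeholders, not the real CVMFS ones): put a wrapper next to your setup script, source the setup first, then exec bash so the environment carries over into the new shell.

```shell
# Build a bound-in directory containing a setup script and a wrapper.
dir=$(mktemp -d)
printf 'export DEMO_VAR=ready\n' > "$dir/mySetup.sh"
cat > "$dir/start-bash" <<'EOF'
#!/bin/bash
. "$(dirname "$0")/mySetup.sh"   # run the setup in this shell first...
exec bash "$@"                   # ...then replace it with bash, env intact
EOF
chmod +x "$dir/start-bash"
# Inside the container this would be something like:
#   singularity exec -H "$dir":/srv image.img /srv/start-bash
# Outside, the wrapper shows the sourced variable survives the exec:
"$dir/start-bash" -c 'echo "$DEMO_VAR"'   # prints: ready
rm -rf "$dir"
```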

Vanessa: cvmfs is easy for anybody to install; it is open source and has
pre-built packages for many platforms.
https://cernvm.cern.ch/portal/filesystem/quickstart

Dave

Asoka De Silva

unread,
Dec 10, 2017, 10:51:49 PM12/10/17
to singularity
Hi Dave,

Many thanks for the tip.  I did a variation of it and it worked for me :-)

cat /home/desilva/contTest/.bashrc
if [ -z $ATLAS_LOCAL_ROOT_BASE ]; then
  export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
fi
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh


/opt/software/bin/singularity-2.4.1  exec -H /home/desilva/contTest:/srv  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img bash
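
This works because an interactive bash reads $HOME/.bashrc on startup, and -H makes the bound directory the home inside the container. The mechanism can be sketched outside any container (DEMO_READY is just an illustrative variable):

```shell
dir=$(mktemp -d)
printf 'export DEMO_READY=yes\n' > "$dir/.bashrc"
# Force an interactive bash with a substituted HOME; it sources $HOME/.bashrc.
# (stderr is silenced because -i without a tty emits job-control warnings)
HOME="$dir" bash -i -c 'echo "$DEMO_READY"' 2>/dev/null   # prints: yes
rm -rf "$dir"
```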


regards,
Asoka

Oliver Freyermuth

unread,
Dec 11, 2017, 3:34:03 AM12/11/17
to singu...@lbl.gov, Asoka De Silva
Hi Asoka,

this may be getting somewhat off-topic (but not too much; image distribution is likely of interest to everyone on this list), but you made me curious.

Is there a reason you are using the image file from
/cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos6.img
instead of the filesystem from
/cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos6 ?
If CVMFS is available at the site and on the clients, it is significantly more efficient to use the extracted filesystem tree (which lives in "fs") instead of the big image blob.

The advantages are:
- CVMFS can do deduplication and compression of the single files.
- Clients only have to load the files they actually need, on demand.
- Updates of the containers are significantly smaller (only the changed blocks of changed files have to be loaded
by the CVMFS clients).
The image blobs break those features.

I'd say images are only really useful if a site does not have CVMFS and other means of distribution have to be used; for an HPC / HTC site with CVMFS, the "sandbox format" (as Singularity calls it) would be the standard way.
It also allows for usage with unprivileged singularity (while images do not).

Or is this just for testing / am I missing something?

Cheers,
Oliver


--
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
Tel.: +49 228 73 2367
Fax: +49 228 73 7869
--

Asoka De Silva

unread,
Dec 11, 2017, 7:14:40 AM12/11/17
to singularity, asoka....@computecanada.ca
Hi Oliver,

It is ignorance on my part about what is available, plus a desire to get a proof of concept tested fast.

However, I can't seem to get it to work; the bind mounts do not seem to be available, and this is important, as we want cvmfs to be available through the host.

[desilva@cdr818 ~]$ /opt/software/bin/singularity-2.4.1 exec  -H /home/desilva/contTest:/srv  -B /cvmfs/atlas-condb.cern.ch,/cvmfs/atlas-nightlies.cern.ch,/cvmfs/atlas.cern.ch,/cvmfs/sft.cern.ch /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos6  bash
WARNING: Skipping user bind, could not create bind point /cvmfs/atlas-condb.cern.ch: Operation not supported
WARNING: Skipping user bind, could not create bind point /cvmfs/atlas-nightlies.cern.ch: Operation not supported
WARNING: Skipping user bind, could not create bind point /cvmfs/atlas.cern.ch: Operation not supported
WARNING: Skipping user bind, could not create bind point /cvmfs/sft.cern.ch: Operation not supported

Singularity.centos6-20171011223620-da8dded823dac5266a1b97bd4e224741a5a413343bbc519c6cdbed0a431e5bc4> ls /cvmfs
Singularity.centos6-20171011223620-da8dded823dac5266a1b97bd4e224741a5a413343bbc519c6cdbed0a431e5bc4> ls /cvmfs/atlas.cern.ch
ls: cannot access /cvmfs/atlas.cern.ch: No such file or directory
Singularity.centos6-20171011223620-da8dded823dac5266a1b97bd4e224741a5a413343bbc5

So if you have any ideas on that, please let me know.

Thanks !

regards,
Asoka 

Oliver Freyermuth

unread,
Dec 11, 2017, 7:27:52 AM12/11/17
to singu...@lbl.gov, Asoka De Silva
Dear Asoka,

I'm using:

$ singularity exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos6 bash
Singularity.centos6-20171011223620-da8dded823dac5266a1b97bd4e224741a5a413343bbc519c6cdbed0a431e5bc4> ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/user/atlasLocalSetup.sh
/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/user/atlasLocalSetup.sh
Singularity.centos6-20171011223620-da8dded823dac5266a1b97bd4e224741a5a413343bbc519c6cdbed0a431e5bc4>

The host has CVMFS setup with autofs. You may want to specify "autofs_bug_path" in the Singularity configuration which ensures the
CVMFS mountpoints are already automounted and stay mounted when entering the container (see https://github.com/singularityware/singularity/commit/445152390173becaa7a1b3ccaaf76bcad7a69bff ).
This appears to be necessary with SL6 at least. I couldn't reproduce any such issue on CentOS 7.

Alternatively: The warnings you see are, I think, caused by https://github.com/singularityware/singularity/issues/943 .
"CVMFS_HIDE_MAGIC_XATTRS=yes" should help until a kernel fix is available.

Cheers,
Oliver

Asoka De Silva

unread,
Dec 11, 2017, 7:46:53 AM12/11/17
to singularity, asoka....@computecanada.ca
Thanks Oliver ...

[desilva@cdr818 ~]$ singularity exec -H /home/desilva/contTest:/srv -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos6  bash

lsetup               lsetup <tool1> [ <tool2> ...] (see lsetup -h):
 lsetup agis          ATLAS Grid Information System
 lsetup asetup        (or asetup) to setup an Athena release
 lsetup atlantis      Atlantis: event display
 lsetup eiclient      Event Index
 lsetup emi           EMI: grid middleware user interface
 lsetup fax           Federated XRootD data storage access (FAX)
 lsetup ganga         Ganga: job definition and management client
 lsetup lcgenv        lcgenv: setup tools from cvmfs SFT repository
 lsetup panda         Panda: Production ANd Distributed Analysis
 lsetup pod           Proof-on-Demand (obsolete)
 lsetup pyami         pyAMI: ATLAS Metadata Interface python client
 lsetup rcsetup       (or rcSetup) to setup an ASG release
 lsetup root          ROOT data processing framework
 lsetup rucio         distributed data management system client
 lsetup sft           setup tools from SFT repo (use lcgenv instead)
 lsetup xcache        XRootD local proxy cache
 lsetup xrootd        XRootD data access
advancedTools        advanced tools menu
diagnostics          diagnostic tools menu
helpMe               more help
printMenu            show this menu
showVersions         show versions of installed software


It works now.

regards,
Asoka 