--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity+unsubscribe@lbl.gov.
Thanks for your quick reply:

3) I /swear/ I tried this both ways (found that answer in earlier reading), but it's now working as expected.

Thanks again.
On Thursday, September 8, 2016 at 7:17:06 PM UTC-4, Gregory M. Kurtzer wrote:
Hi Ryan,

1. Yes, we are aware that the EPEL version needs to be updated, and Bennet is working on that. Hopefully it will be updated with the release of 2.2.

2. Yes, a bug, and fixed, but not in the 2.1.2 release. Sorry, my bad!

3. Because bind points occur as bind mounts, the target must be available, so you will need to create a /HPCTMP_NOBKUP directory within the container. The 2.2 release has a solution for this, but it only works on new-ish kernels (e.g. RHEL7).

4. I've been considering that, and wondering how best to handle it. I asked some other projects if we could leverage their existing Mailman implementations, but was unable to secure an email list home. I am also considering www.groups.io. Does anyone have experience with them?

Thanks Ryan!
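For reference, the bind point from item 3 is declared in the config file; a sketch, assuming the stock location /etc/singularity/singularity.conf and the mount point from this thread:

```
# /etc/singularity/singularity.conf (Singularity 2.1/2.2)
# Make the Lustre mount available inside containers:
bind path = /HPCTMP_NOBKUP
```

With 2.1.x the /HPCTMP_NOBKUP directory must also exist inside the image itself, since (as noted above) the bind target has to be present before the bind mount can occur.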
So a few things in no particular order -- thanks for this software, BTW -- I finally have a use case for it:

1) I downloaded 2.0.9 from EPEL, and my Lustre file system (mounted at /HPCTMP_NOBKUP) gives an error if you try to use the image from one of its directories; you can't work with any files from the whole tree. I discovered that there are bind path settings to use, but this 2.0.9 RPM doesn't appear to ship a singularity.conf file, and doesn't appear to pay any attention to one if you add it to /etc/singularity (which does exist).

2) I downloaded 2.1.2 as a .tar.gz and went through the instructions to create an RPM. It creates a non-ideally-named RPM: singularity-2.1-0.1.el6.x86_64.rpm. Shouldn't it be 2.1.2-0.1 or something like that?

3) I'm still having trouble using files in my Lustre directories with 2.1.2. I can now see the current directory well enough, it seems, but /HPCTMP_NOBKUP is still empty. So I tried adding it to the now-existing singularity.conf file; then I started getting "WARNING: Non existant 'bind point' in container: '/HPCTMP_NOBKUP'" without it working any better.

4) Is there any way to sign up for this list with a regular e-mail address? My work has a Google domain, but I'm not allowed to use it as my primary e-mail (a restriction placed on some staff -- long stupid story). I can't seem to figure out a way to sign up with my real work address, short of creating another non-Gmail Google account using my work e-mail address. Is there something smarter?

Thanks again. If you wouldn't mind copying novo...@rutgers.edu, I'd appreciate it.
--
Gregory M. Kurtzer
HPC Systems Architect and Technology Developer
Lawrence Berkeley National Laboratory HPCS
University of California Berkeley Research IT
Singularity Linux Containers (http://singularity.lbl.gov/)
Warewulf Cluster Management (http://warewulf.lbl.gov/)
Ryan Novosielski <novo...@scarletmail.rutgers.edu> writes:
> So a few things in no particular order -- thanks for this software, BTW --
> I finally have had a use case for it:
>
> 1) I downloaded 2.0.9 from EPEL
?? It's not released for EPEL (and it's unfortunate that it got into
Fedora). I need to consult on what to do about that.
> and my Lustre file system (mounted at
> /HPCTMP_NOBKUP) gave an error if you try to use the image from one of the
> directories,
As far as I remember, it requires flock, so won't work on a parallel
filesystem tuned to be fast. (People may be saved by flock being
necessary for MPI-IO on Lustre, at least using ROMIO.)
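One quick way to check that guess on a given mount is to try taking an advisory lock there directly; a minimal sketch, assuming util-linux's flock(1) is installed (the scratch-file path is just an example -- point mktemp at the Lustre mount to test it):

```shell
# Take and release an exclusive flock(2) lock on a scratch file.
# On a local filesystem this succeeds; on a Lustre client mounted
# without the "flock" (or "localflock") option the same call is
# expected to fail with "No locks available".
tmp=$(mktemp)
flock -x "$tmp" -c true && echo "flock OK"
rm -f "$tmp"
```

If the lock fails there but succeeds on local disk, that points at the mount options rather than at Singularity itself.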
The problems I've had with mounts/loopback in EL6 seem to be connected
with dbus interfering with loop devices in some way I've not figured
out. Restarting dbus freed devices shown by losetup -a that were
apparently associated with dead processes.
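The diagnosis above can be sketched as a couple of commands (EL6; the device name is hypothetical, and the destructive steps are left commented out):

```shell
# List loop devices and the backing files they are attached to;
# stale entries may remain after the owning process has died.
losetup -a

# Restarting dbus was what freed the stale devices in the case
# described above (EL6 init-script name; run as root):
#   service messagebus restart

# A specific stale device can also be detached by hand:
#   losetup -d /dev/loop0
```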