I just started this script this morning and am about to jump on a plane, but I wanted to share it quickly because (I think) it will be useful! It's a script to export a running Docker image to Singularity:
It would probably be annoying for people to have to generate Singularity equivalents of Dockerfiles for images, but just being able to export a Docker image seems like a solid way to start! The only test I've done is to create images for ubuntu:14.04 and ubuntu:latest; the containers are created and I can connect to them successfully. I'll be making the script take proper arguments, and further:
- programmatically determining the size
- some integration of settings to set up a runscript
- can we programmatically get more metadata etc. about the images, to also help in making the DESCRIPTION and MAINTAINER files?
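The export flow the script performs could be sketched roughly as below. This is a hedged sketch, not the script itself: it assumes docker and an early singularity with `create`/`import` subcommands are installed, so the commands are written to a helper file here rather than executed, and the 1024 MiB size is a placeholder for the programmatic sizing mentioned above.

```shell
# Sketch only: writes the conversion steps to a helper script instead of
# running them, since docker and singularity must be installed to execute it.
# The image name, size, and exact singularity subcommands are assumptions.
cat > docker2singularity-sketch.sh <<'EOF'
#!/bin/sh
image=${1:-ubuntu:14.04}

# start a throwaway container so its filesystem can be exported
container_id=$(sudo docker run -d "$image" /bin/sh)

# create an empty singularity image (size in MiB is a placeholder)
sudo singularity create -s 1024 container.img

# pipe the docker filesystem straight into the singularity image
sudo docker export "$container_id" | sudo singularity import container.img

sudo docker stop "$container_id"
EOF
chmod +x docker2singularity-sketch.sh
```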
It's a bit alarming that more information isn't readily available via docker inspect about what is included in an image. Perhaps Singularity can do better by generating some data structure that lives with the image and better summarizes this. Developers likely don't have this immediate need when just deploying some Dockerized app, but for researchers it's very important to be able to do things like computationally compare different images. If I'm looking for an image to fill some need, I'd want to be able to immediately generate some kind of unsupervised clustering of images based on these things.
Another detail about the singularity-images repo - this is a great idea, but depending on the size of the images (given we have OS here) it might go over the Github file size limit (50MB), and further each user only gets 1GB for Github LFS. If it doesn't work, perhaps Github can be used to submit PRs and discuss new images, but then when they are added, they can be transferred to some other equivalent. In the long term we would want that other equivalent to have functionality akin to Docker Hub, but a lot better, heh :)
I'd like to help out making these images - I'm new to singularity and I'm just figuring out the basics. I'm leaving for a trip today but will be back to working on this soon, bon voyage! :D
Best,
Vanessa
On Tuesday, June 14, 2016 at 11:43:19 AM UTC-7, Gregory M. Kurtzer wrote:
> Hi all,
>
> I created a container image repository and I started working on the "rules" for uploading containers to this repository. Please have a look at it and send me feedback:
>
> Thank you!
>
> --
> Gregory M. Kurtzer
> High Performance Computing Services (HPCS)
> University of California
> Lawrence Berkeley National Laboratory
> One Cyclotron Road, Berkeley, CA 94720
--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity...@lbl.gov.
"Gregory M. Kurtzer" <gmku...@lbl.gov> writes:
> Hi Vanessa!
>
> Funnily enough, right when you sent this email I was in a discussion with
> Christian Kniep about just this! (rest of the comments inline)
It's also on my list to ask for expertise around tooling after playing around
a bit, inspired by the FAQ entry. Many thanks for the script.
Would this basically make Shifter obsolete? I could imagine doing it on
the fly in a resource manager prolog.
> This came up specifically with my talk with Christian. He thinks that it
> may be possible to determine the Dockerfile CMD from within the Docker
> container root.
Doesn't docker inspect provide it, amongst other things? I assumed that
was the Right Way.
> I have not had a chance to investigate this, but it would
> indeed be very interesting if we could and then import it directly into a
> Singularity runscript (/singularity).
> Oh, thank you for telling me about the GitHub limitations! I am open to
> suggestions on where and how to host the images.
I don't know about its limitations, but perhaps the new, reformed
Sourceforge, if national labs can't do it? (I spent a long time in a
national lab :-/.)
--
"Gregory M. Kurtzer" <gmku...@lbl.gov> writes:
>> I don't know about its limitations, but perhaps the new, reformed
>> Sourceforge, if national labs can't do it? (I spent a long time in a
>> national lab :-/.)
>>
>>
> In theory, I can host some resources to support it,
[I was implying it might not be easy...]
> but I don't have time
> to build and maintain a platform that would do this sort of hosting. I am
> open to ideas though!
What sort of a platform do you envisage?
Perhaps also, what sort of images might go there -- fairly plain
customizable bases for various distributions, random applications for
people to pull who don't have root, or what? (I ask because it's not
difficult to build them, and there isn't a mechanism for sharing on a
multi-access system.)
Looks very cool! Thank you!
> It would probably be annoying for people to have to generate Singularity
> equivalents of Dockerfiles for images, but just being able to export a
> Docker image seems like a solid way to start! [...] I'll be making the
> script take proper arguments, and further:
> - programmatically determining the size
> - some integration of settings to set up a runscript

This came up specifically in my talk with Christian. He thinks that it may be possible to determine the Dockerfile CMD from within the Docker container root. I have not had a chance to investigate this, but it would indeed be very interesting if we could, and then import it directly into a Singularity runscript (/singularity).
> - can we programmatically get more metadata etc. about the images, to also
> help in making the DESCRIPTION and MAINTAINER files?

That would also be very helpful for the image repo!

> It's a bit alarming that more information isn't readily available about
> what is included in an image via docker inspect. [...] I'd want to be able
> to immediately generate some kind of unsupervised clustering of images
> based on these things.

That is an excellent idea. Can you create a GitHub issue/enhancement for this and include your ideas on what kind of information you want to store, and perhaps how you want to set and edit that info? Chances are it will go into 2.2 (instead of 2.1, which I am just finishing up).
[I thought I sent this earlier -- apologies if it's duplicated.]
vanessa s <vso...@gmail.com> writes:
> And @Remy submit a PR to get the size programatically as well, stop the
> container, and a check
> <https://github.com/vsoch/singularity-tools/blob/master/docker/docker2singularity.sh>
> to ensure the user has sudo (given the need for docker).
That looks wrong generally. It would fail for me -- I'm not in a "sudo"
or "root" group, but I can use sudo and docker (via "wheel" and
"docker"). I only needed sudo for the singularity part of the
conversion, not the docker one, when I tried it by hand. I think you
have to try executing commands and report if they don't work either bare
or with sudo.
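One way to implement that check is to probe once and record whether a sudo prefix is needed, mirroring the $SUDOCMD variable style used in docker2singularity.sh. This is a sketch: `true` stands in for the real probe (something cheap like `docker info`) so it runs anywhere, and `sudo -n` is used so nothing ever prompts for a password.

```shell
# Decide whether docker needs a sudo prefix by probing a cheap command,
# reporting failure if it works neither bare nor with sudo.
probe="true"   # in the real script this would be: probe="docker info"
if $probe >/dev/null 2>&1; then
    SUDOCMD=""
elif sudo -n $probe >/dev/null 2>&1; then
    SUDOCMD="sudo"
else
    echo "cannot run docker either bare or with sudo" >&2
    exit 1
fi
echo "SUDOCMD='$SUDOCMD'"
```

Later calls then become `$SUDOCMD docker export ...` without caring which group ("sudo", "wheel", "docker") granted the access.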
Is it documented somewhere what "Size" actually means? I got the
impression you'd need to run the thing to get the actual size. I don't
know if you can rely on being able to run df /, but doing that in the
minimal "fedora" image provided by the EPEL6 docker packaging, I see
265945088 B used v. a Size from inspect of 214315878.
All else failing, obviously you'd at most double the time to do the
conversion if you just found the size of the export stream, which might
be faster than writing to a file.
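Measuring the stream directly might look like the sketch below. A local tarball stands in for `sudo docker export "$container_id"` (which can't run here), and the 200 MiB headroom is an arbitrary illustrative choice.

```shell
# Measure the size of a tar stream without writing it to disk.
# With docker this pipeline would be:
#   stream_bytes=$(sudo docker export "$container_id" | wc -c)
workdir=$(mktemp -d)
echo "stand-in payload" > "$workdir/file"

stream_bytes=$(tar -C "$workdir" -cf - . | wc -c)
echo "export stream is $stream_bytes bytes"

# round down to MiB and add headroom before creating the image
img_mib=$(( stream_bytes / 1048576 + 200 ))
echo "would create a ${img_mib} MiB image"
rm -rf "$workdir"
```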
>> This came up specifically with my talk with Christian. He thinks that it
>> may be possible to determine the Dockerfile CMD from within the Docker
>> container root. I have not had a chance to investigate this, but it would
>> indeed be very interesting if we could and then import it directly into a
>> Singularity runscript (/singularity).
>>
>
> +1. It looks like this is a pretty reasonable thing to do!
> https://github.com/CenturyLinkLabs/dockerfile-from-image
This might be a way to write a /singularity script for a converted
image, but I'm no docker expert and definitely don't understand the
format stuff:
Cmd=$(docker inspect --format='{{json .Config.Cmd}}' "$image")
if [[ $Cmd != null ]]; then
    echo '#!/bin/sh'
    (IFS='[],'; echo $Cmd)
fi > singularity
However, there's ENTRYPOINT as well as CMD. You'd have to decide which
to use if they're both present.
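A sketch of handling both follows. Assumptions: the two JSON arrays would come from `docker inspect --format='{{json .Config.Entrypoint}}'` and `.Config.Cmd`, the helper name is hypothetical, and the naive parsing only copes with simple arrays (no embedded commas, brackets, or spaces in arguments).

```shell
# Build a /singularity runscript from ENTRYPOINT and CMD JSON arrays.
# Docker semantics: if both are set, CMD supplies default arguments to
# ENTRYPOINT; if only one is set, it is the command.
make_runscript() {
    entrypoint=$1   # e.g. '["/bin/echo"]' or null
    cmd=$2          # e.g. '["hello","world"]' or null
    echo '#!/bin/sh'
    line=""
    for json in "$entrypoint" "$cmd"; do
        [ "$json" = null ] && continue
        # crude JSON-array-to-words: strip brackets/quotes, commas to spaces
        line="$line $(printf '%s' "$json" | tr -d '[]"' | tr ',' ' ')"
    done
    echo "exec$line \"\$@\""
}

make_runscript '["/bin/echo"]' '["hello","world"]'
# prints:
#   #!/bin/sh
#   exec /bin/echo hello world "$@"
```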
[Something I notice experimenting is the lack of fabled fast startup
with docker with the installation I have. It seems similar to vagrant
with virtualbox, which I normally use, but which isn't nearly trendy
enough.]
On Thursday, June 23, 2016, Dave Love <d.l...@liverpool.ac.uk> wrote:
> [I thought I sent this earlier -- apologies if it's duplicated.]
>
> vanessa s <vso...@gmail.com> writes:
>> And @Remy submit a PR to get the size programatically as well [...]
>
> That looks wrong generally. It would fail for me -- I'm not in a "sudo"
> or "root" group, but I can use sudo and docker (via "wheel" and
> "docker"). I only needed sudo for the singularity part of the
> conversion, not the docker one, when I tried it by hand. I think you
> have to try executing commands and report if they don't work either bare
> or with sudo.

That is how I'd recommend doing it as well. Perhaps do a:

if sudo true; then
    ...
fi

As far as prefixing the docker commands with sudo, it seems reasonable and it should always work.
> Is it documented somewhere what "Size" actually means? I got the
> impression you'd need to run the thing to get the actual size. I don't
> know if you can rely on being able to run df /, but doing that in the
> minimal "fedora" image provided by the EPEL6 docker packaging, I see
> 265945088 B used v. a Size from inspect of 214315878.
>
> All else failing, obviously you'd at most double the time to do the
> conversion if you just found the size of the export stream, which might
> be faster than writing to a file.

Temporary files are generally bad, especially when they are large, but you could in theory write the tar to disk, measure the size, and then create the singularity container and import. Hrmm, scratch that... Horrible idea. Lol
>>> This came up specifically with my talk with Christian. He thinks that it
>>> may be possible to determine the Dockerfile CMD from within the Docker
>>> container root. [...]
>>
>> +1. It looks like this is a pretty reasonable thing to do!
>> https://github.com/CenturyLinkLabs/dockerfile-from-image
>
> This might be a way to write a /singularity script for a converted
> image, but I'm no docker expert and definitely don't understand the
> format stuff:
>
> Cmd=$(docker inspect --format='{{json .Config.Cmd}}' $image)
> if [[ $Cmd != none ]]; then
>     echo '#!/bin/sh'
>     (IFS='[],'; echo $Cmd)
> fi > singularity

I like this idea for the /singularity run script.
> However, there's ENTRYPOINT as well as CMD. You'd have to decide which
> to use if they're both present.

I am not totally familiar with the differences between ENTRYPOINT and CMD. Does ENTRYPOINT describe the shell to use within the container?
> [Something I notice experimenting is the lack of fabled fast startup
> with docker with the installation I have. It seems similar to vagrant
> with virtualbox, which I normally use, but which isn't nearly trendy
> enough.]
What kind of startup times are you seeing?
--
Gregory M. Kurtzer
High Performance Computing Services (HPCS)
University of California
Lawrence Berkeley National Laboratory
One Cyclotron Road, Berkeley, CA 94720
"Gregory M. Kurtzer" <gmku...@lbl.gov> writes:
>> However, there's ENTRYPOINT as well as CMD. You'd have to decide which
>> to use if they're both present.
>
>
> I am not totally familiar with the differences between entry point and cmd.
> Does entry point describe the shell to use within the container?
There's doc at
https://docs.docker.com/engine/reference/builder/#entrypoint
On looking again, I think you need to consider them both in constructing
/singularity but it needs a more careful reading.
>>
>> [Something I notice experimenting is the lack of fabled fast startup
>> with docker with the installation I have. It seems similar to vagrant
>> with virtualbox, which I normally use, but which isn't nearly trendy
>> enough.]
>>
>>
> What kind of startup times are you seeing?
30s or more.
Here is a simple command to export the docker inspect json and import it into the container as singularity.json (at the container base /):

https://github.com/vsoch/singularity-tools/blob/master/docker/docker2singularity.sh#L94

$SUDOCMD docker inspect $container_id > singularity.json
sudo singularity copy $new_container_name singularity.json /
rm singularity.json

I didn't change any of the fields / formatting of the json itself (I'd be happy to come up with something), but generally I wanted to bring up the idea of having some singularity.json located at "/" in each container that would carry the same meta info as docker inspect. We could perhaps come up with a suggested standard, but allow the user to make any singularity.json desired to expose container metadata, options, variables, etc. You could add a command line util to give the user programmatic access, e.g.:

singularity info ubuntu:latest-2016-04-06.img

or, if we want to be consistent with docker:

singularity inspect ubuntu:latest-2016-04-06.img

For the singularity.json, my suggestion would be a mixture of metadata about the container (akin to the current docker inspect), plus fields that capture programmatically what the container can be used for (in terms of inputs and outputs) and variables the user can specify. If we want to do this right, we probably should use some standard like the Common Workflow Language, so these things can eventually be controlled by something that fits into an actual workflow. This way, in the long run, a user could submit their images to our container hub, they could be easily parsed for metadata and allowable inputs and outputs, and then plug right into a nice workflow web interface. :)
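As a concrete starting point, a minimal singularity.json might look like the sketch below. Every field name here is a suggestion, not an agreed standard, and the inputs/outputs shape is only loosely inspired by CWL.

```shell
# Write a suggested minimal singularity.json; all fields are illustrative.
cat > singularity.json <<'EOF'
{
  "name": "ubuntu:latest-2016-04-06",
  "maintainer": "vsoch",
  "build_date": "2016-04-06",
  "runscript": "/singularity",
  "inputs":  [{"name": "data",   "type": "File", "extensions": [".tsv"]}],
  "outputs": [{"name": "result", "type": "File", "extensions": [".csv"]}]
}
EOF
cat singularity.json
```

A `singularity info` or `singularity inspect` subcommand could then simply read this file out of the image and print it.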
On Saturday, June 25, 2016, vanessa s <vso...@gmail.com> wrote:
> Here is a simple command to export the docker inspect json and import it
> into the container as singularity.json (at the container base /)
>
> https://github.com/vsoch/singularity-tools/blob/master/docker/docker2singularity.sh#L94
>
> $SUDOCMD docker inspect $container_id > singularity.json
> sudo singularity copy $new_container_name singularity.json /
> rm singularity.json

I am curious whether JSON is the best format for Singularity, or whether that is the best location.

An interesting idea would be to add this type of data to the container header. It doesn't have to be binary, and it has the added benefit of not having to mount the container in order to get at the metadata. It will make manipulation of the data a bit trickier unless we have a defined data structure standard (which would include field lengths).

> I didn't change any of the fields / formatting of the json itself (I'd be
> happy to come up with something) but generally wanted to bring up the idea
> of having some singularity.json located at "/" in each container [...]

What kinds of fields would be appropriate for this metadata? I've got a couple of ideas, but it would be interesting to have a brainstorm of the types of data people would like to be able to obtain about a container.
Great ideas, thanks!
The workaround worked!
https://github.com/vsoch/singularity-tools/blob/master/docker/docker2singularity.sh#L97
The only issue would be if there is some kind of confidential stuff there. See below for more comments:
On Sat, Jun 25, 2016 at 3:20 PM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:
> On Saturday, June 25, 2016, vanessa s <vso...@gmail.com> wrote:
>> Here is a simple command to export the docker inspect json and import it
>> into the container as singularity.json (at the container base /) [...]
>
> I am curious if JSON is the best format for Singularity or the best location.

I don't have strong opinions about the location, but if we are to develop tools in Python and for the web, then JSON is optimal. If you suggest something like YAML or RDF I'm going to run for the hills.
> An interesting idea would be to add this type of data to the container
> header. It doesn't have to be binary and it has the added benefit of not
> having to mount the container in order to get to the metadata. It will make
> manipulation of the data a bit trickier unless we have a defined data
> structure standard (which would include field lengths).

Oh, cool, I didn't know the containers had headers! How do I inspect / mess around with them?
> I didn't change any of the fields / formatting of the json itself [...]
>
> What kinds of fields would be appropriate for this metadata? I've got a
> couple of ideas but it would be interesting to have a brainstorm of the
> types of data people would like to be able to obtain about a container.

If I wanted to use this in a workflow, I would need a list of inputs and outputs, along with acceptable values (file extensions, etc.). It seems like in the case of a workflow there are two options: either capture inputs and outputs as file types, or as other containers that are acceptable.
The first is completely open to connecting any two images given the input --matches--> output, the second is extremely limited but much less likely to lead to error in the workflow generation. Of those inputs and outputs, I would want to be able to specify variables for the container analysis (or purpose) like ports, certificates, and for meta data about the container I would likely want an author (someone to contact with questions or issues). If these are stored on a container hub then there would be a board for the container's issues.
The entire spec for CWL is here, but my thinking is that we should take an extremely minimalist approach - basically having the minimal things listed above, and only expanding on that as we develop workflows / use cases and find that there is need. For so many of these standards big teams of ontologists come up with meta data things that are extremely detailed (and mostly useless) that do nothing but make the standard annoying and hard to use.
On Sat, Jun 25, 2016 at 3:36 PM, vanessa s <vso...@gmail.com> wrote:
> The workaround worked!
>
> https://github.com/vsoch/singularity-tools/blob/master/docker/docker2singularity.sh#L97
>
> The only issue would be if there is some kind of confidential stuff there.
> See below for more comments:

Fantastic, but that will only work if it is on the same system and imported as the same user. I think I can make it more portable... At present, the Singularity process flow for the group file is:

1. mount the container
2. copy the /etc/group file to the session directory
3. append user info from getgrent() to the group file
4. bind the updated group file to the container's /etc/group

So I will look into adding a step 3.5 to also include all supplementary groups.
> On Sat, Jun 25, 2016 at 3:20 PM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:
>> On Saturday, June 25, 2016, vanessa s <vso...@gmail.com> wrote:
>>> Here is a simple command to export the docker inspect json and import it
>>> into the container as singularity.json (at the container base /) [...]
>>
>> I am curious if JSON is the best format for Singularity or the best location.
>
> I don't have strong opinions about the location, but if we are to develop
> tools in python and for the web, then JSON is optimal. If you suggest
> something like yaml or RDF I'm going to run for the hills.

Haha, I guess I am thinking more low level... But... in theory I suppose we can print it in JSON (or YAML LOL) so it can be easily parseable by the calling process.
>> An interesting idea would be to add this type of data to the container
>> header. [...]
>
> Oh, cool, I didn't know the containers had headers! How do I inspect / mess
> around with them?

Well, at the moment there isn't much in the header aside from the interpreter line, which is what provides the magic for being able to execute a container file. There isn't much to see, but you can do:

$ head -n 1 container.img

But much of the code is already in place such that, if we want to prepend more information there, we can.
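Without a real image handy, the same head trick can be tried on a stand-in file; the interpreter line below is only an illustration of what such a header might contain, not the actual Singularity header format.

```shell
# Create a stand-in "image" whose first line mimics an interpreter header,
# then read just the header without touching the rest of the file.
printf '#!/usr/bin/env run-singularity\n(filesystem image bytes follow)\n' > container.img
head -n 1 container.img
```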
>> What kinds of fields would be appropriate for this metadata? I've got a
>> couple of ideas but it would be interesting to have a brainstorm of the
>> types of data people would like to be able to obtain about a container.
>
> If I wanted to use this in a workflow, I would need a list of inputs and
> outputs, along with acceptable values (file extensions, etc). [...] Of
> those inputs and outputs, I would want to be able to specify variables for
> the container analysis (or purpose) like ports, certificates, and for meta
> data about the container I would likely want an author (someone to contact
> with questions or issues). If these are stored on a container hub then
> there would be a board for the container's issues.

So would these items be specific to the runscript inside the container at /singularity, or something more general about the contents inside the container and other metadata like build date, maintainer, contact, etc.?
commands=`$SUDOCMD docker export $container_id | $SUDOCMD singularity import $new_container_name`
What I'd like to see is a collaboration working on the implementation details and then integrate those findings (citing the sources) into the existing Singularity command architecture (either directly into import or a new docker-import command). I would prefer to keep all Singularity functions within the shell based command syntax.
A point that Dave brought up which is worth reiterating is regarding licensing and copyright assertion. I won't accept a non-BSD license. MIT license may be acceptable if BSD is impossible for you, but no GPL, Apache or other OSI approved licenses will be accepted. Additionally, I leave copyright assertion up to the contributor. When you send a patch or pull request it is your choice to also include your copyright information (or not). I will not limit acceptance of a patch based on copyright.
I will add that information to the website and documentation as soon as I can.
On Mon, Jun 27, 2016 at 9:52 AM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:
> What I'd like to see is a collaboration working on the implementation
> details and then integrate those findings (citing the sources) into the
> existing Singularity command architecture (either directly into import or a
> new docker-import command). I would prefer to keep all Singularity
> functions within the shell based command syntax.

+1 on both these points - I agree that core Singularity should be packaged in a shell-based command syntax, and that collaborations should feed into this in the way that best preserves documentation, etc. The workflow infrastructure that I am planning out is going to have a large web component as well as a command line, and so my suggestion is that the command line util reveals data structures that are seamlessly parsable by these technologies (and still command line friendly) :)

> A point that Dave brought up which is worth reiterating is regarding
> licensing and copyright assertion. [...]

I am not hugely opinionated about licensing - I like MIT in that it is most permissive, and am OK with any well thought out choice.

> I will add that information to the website and documentation as soon as I can.

Thanks for putting your thoughts together so nicely! Same page! :O)
"Gregory M. Kurtzer" <gmku...@lbl.gov> writes:
> A point that Dave brought up which is worth reiterating is regarding
> licensing and copyright assertion. I won't accept a non-BSD license. MIT
> license may be acceptable if BSD is impossible for you, but no GPL, Apache
> or other OSI approved licenses will be accepted. Additionally, I leave
> copyright assertion up to the contributor. When you send a patch or pull
> request it is your choice to also include your copyright information (or
> not). I will not limit acceptance of a patch based on copyright.
One point is that the licence is LBNL-specific with the names
incorporated, and I'm not sure how that should be treated. Presumably
there are Labs rules/policy on all this, but I've never run into them.
I know you're not a beginner, and maybe Warewulf experience is
different, but I think it's worth tracking copyright holders. I try to
remember to add "Copyright <date> <holder> ..." for significant
contributions as well as noting them in log messages and noting when
they become significant. For instance, I originally gave up on
packaging for Debian because the ftpmaster insisted on a complete list
of copyright holders, which wasn't available (as for Linux, amongst many
other packages, but...). I've also had to re-write stuff whose
provenance wasn't available from revision history.
On Jun 28, 2016, at 7:32 AM, Dave Love <d.l...@liverpool.ac.uk> wrote:
> One point is that the licence is LBNL-specific with the names
> incorporated, and I'm not sure how that should be treated. Presumably
> there are Labs rules/policy on all this, but I've never run into them.
I’ve expressed concern about this as well. The LBNL license is not an OSI-approved open source license and hasn’t to my knowledge been evaluated by anyone for compatibility with other OSS licenses.
If Singularity is going places, IMO the license situation should be clarified: either re-license to a standard OSS license or submit the LBNL license for community approval and get a third-party opinion on compatibility.
I did ask Greg about this offline a few months ago. IIRC, the LBNL license was the only license available to LBNL folks who wished to publish their source code, by institutional policy.
On Jun 28, 2016, at 8:21 AM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:

On Tue, Jun 28, 2016 at 8:05 AM, Priedhorsky, Reid <rei...@lanl.gov> wrote:
> On Jun 28, 2016, at 7:32 AM, Dave Love <d.l...@liverpool.ac.uk> wrote:
> One point is that the licence is LBNL-specific with the names
> incorporated, and I'm not sure how that should be treated. Presumably
> there are Labs rules/policy on all this, but I've never run into them.
> I’ve expressed concern about this as well. The LBNL license is not an OSI-approved open source license and hasn’t to my knowledge been evaluated by anyone for compatibility with other OSS licenses.

Well, it is basically a standard 3-clause BSD license. The only license-specific difference is the clause stating that the LBNL/DOE name cannot be used for promotion.

Orrrr, are you referring to the bit of verbiage about contributions, or the final NOTICE? If so, it should be mentioned that the license itself is standard 3-clause BSD.
> If Singularity is going places, IMO the license situation should be clarified: either re-license to a standard OSS license or submit the LBNL license for community approval and get a third-party opinion on compatibility.

Would it help in your mind if I spoke with legal about changing the 3rd clause to use standard BSD wording, such that it is no longer modified?
> I did ask Greg about this offline a few months ago. IIRC, the LBNL license was the only license available to LBNL folks who wished to publish their source code, by institutional policy.

It is not entirely institutional policy, rather a collaboration with legal... Some people have gotten other licenses approved, but I will have to push back on them and make the case.

Thoughts?

--
Gregory M. Kurtzer
High Performance Computing Services (HPCS)
University of California
Lawrence Berkeley National Laboratory
One Cyclotron Road, Berkeley, CA 94720
GK> The only difference specific to the license has to do with calling out the LBNL/DOE name can not be used for promotion.
No, it’s this:
You are under no obligation whatsoever to provide any bug fixes, patches, or upgrades to the features, functionality or performance of the source code ("Enhancements") to anyone; however, if you choose to make your Enhancements available either publicly, or directly to Lawrence Berkeley National Laboratory, without imposing a separate written license agreement for such Enhancements, then you hereby grant the following license: a non-exclusive, royalty-free perpetual license to install, use, modify, prepare derivative works, incorporate into other computer software, distribute, and sublicense such enhancements or derivative works thereof, in binary and source code form.
On Jun 28, 2016, at 9:40 AM, Gregory M. Kurtzer <gmku...@lbl.gov> wrote:
> You are under no obligation whatsoever to provide any bug fixes, patches, or upgrades to the features, functionality or performance of the source code ("Enhancements") to anyone; however, if you choose to make your Enhancements available either publicly, or directly to Lawrence Berkeley National Laboratory, without imposing a separate written license agreement for such Enhancements, then you hereby grant the following license: a non-exclusive, royalty-free perpetual license to install, use, modify, prepare derivative works, incorporate into other computer software, distribute, and sublicense such enhancements or derivative works thereof, in binary and source code form.
Ahhh, I don't think that part can be removed. But... it is technically not the license, rather a contribution agreement,
and (in simplistic summary) it only ensures that those enhancements or contributions will always be BSD-ish distributable (e.g. not restrictable via patents or in any other way).

What is the concern here? I am happy to discuss it with legal and ask for a layman's explanation of it to add to the FAQ.