A Complete Solution to Distribution (musings on a unix build solution for erlang)


Eric Merritt

Feb 22, 2012, 7:28:49 PM
to erlware-...@googlegroups.com
Torben Hoffmann, Tim Watson, and I have been talking a bit about how to
manage the complete lifecycle of an Erlang/OTP release. I had some
thoughts today that I talked over with Torben. What follows is a rough
outline of what I have been thinking.


Right now we have two competing systems:

* Rebar (which is quite popular)
* Sinan (which is much less popular but more correct)

These build systems approach the problem of building an OTP system in
very different ways. However, they both take the approach of managing
the entire process, from dependency resolution through release
management. So while rebar does a better, more complete job of the
actual building than sinan does, and sinan does a better job of
dependency management, you have no choice but to adopt one or the
other. We seem to have lost the Unix philosophy of small tools
dedicated to a single purpose, with standard inputs and standard
outputs, so that they can be chained together as needed. That way
people can use the right tool for the right job and swap tools out
when they need to.

I propose that we create a tool chain based on this idea to do the
complete build cycle from dependency resolution to release management
and everything in between. For the purposes of discussion I will split
the complete build process into parts and give each part a name to
refer to it. Then I will talk about how to build a tool to address
each part. The parts of the build process are as follows:

1. Assemble Metadata on available OTP Applications (erld)
2. Process dependency specs to produce a dotrel file (erls)
3. Resolve the dependencies in the dotrel file (erlr)
4. Build the project (erlb)
5. Using the dotrel and build result assemble the release (erla)
6. Package the release (erlp)

These names are not meant to be final.


Assemble Metadata on available OTP Applications (erld)
------------------------------------------------------

This task assumes that there are repositories out there that contain
OTP applications. It is left as a task for us to define what a
repository is and how to access it.

However, for the moment let's assume that this infrastructure exists. In
that instance we want to have knowledge of all of the OTP Applications
available to us.

The first step in our tool chain is to build erld, a command that goes
out to the universe of OTP apps and pulls down complete metadata about
the apps. This metadata should include the following:

* The contents of the dotapp file
* Any dependency information (probably should be in the dotapp file)
* The location of the application

It should put this metadata in an easily consumed file in a
configurable location.
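
For reference, the dotapp files this metadata is harvested from are
just standard OTP application resource files; for example (myapp and
every value in it are placeholders):

    %% A minimal .app file; the name and all values are illustrative.
    {application, myapp,
     [{description, "An example application"},
      {vsn, "0.1.0"},
      {modules, [myapp_app, myapp_sup]},
      {registered, [myapp_sup]},
      {applications, [kernel, stdlib]},
      {mod, {myapp_app, []}}]}.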


### Inputs

The inputs to erld should be a list of repositories and on disk
locations for OTP apps.

It should be able to take these from configuration files (probably in
/etc/<something> and ~/.<something>) and via command line args.
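
As a straw man (the keys and URLs below are all made up), the
configuration could simply be Erlang terms readable with
file:consult/1:

    %% Hypothetical config file contents; nothing here is settled.
    {repositories, ["http://repo.example.org/otp-apps",
                    "http://mirror.example.com/otp-apps"]}.
    {local_app_dirs, ["/usr/local/lib/erlang/lib",
                      "/home/me/dev/lib"]}.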

### Outputs

erld should output an easily queryable file containing the universe of
app metadata. This file should make two types of queries
trivial.

* Given an app name it should be trivial to return a list of versions.
* Given an app name and version pair, it should be trivial to return the
OTP App metadata and dependencies for that app/version.

This output option should be specifiable on the command line as well
as defaulting to a known location if no option is specified.
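
Purely as a sketch of what such an index file might look like (the
exact keys are very much up for debate), it could be a flat list of
{App, Vsn, Metadata} terms that any tool can file:consult/1 and filter:

    %% Hypothetical index entry, one term per app/version pair; the
    %% keys and the constraint syntax are placeholders.
    {my_dep, "1.2.0",
     [{location, "http://repo.example.org/my_dep-1.2.0.tar.gz"},
      {deps, [{kernel, ">= 2.14"}, {stdlib, any}]},
      {dotapp, {application, my_dep,
                [{vsn, "1.2.0"}, {applications, [kernel, stdlib]}]}}]}.

Both of the queries above then reduce to a simple scan over the
consulted terms.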


Process dependency specs to produce a dotrel file (erls)
--------------------------------------------------------

erls depends on the output of erld. It consumes the metadata file that
erld produces. The job of erls is to take a set of applications and
constraints and produce a standard rel file that contains all direct
and transitive dependencies for the release.

### Inputs

erls takes in the metadata produced by erld. By default it consumes
metadata in a known global location. However, the metadata file should
be specifiable on the command line. It also takes as input a list of
applications and constraints to direct the resolution.
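
I have not thought hard about the constraint syntax yet; purely as a
sketch, the input might look something like this (the app names and
the constraint notation are illustrative only):

    %% Hypothetical constraint list handed to erls.
    [{myapp, "0.1.0"},       %% our own application, exact version
     {cowboy, ">= 0.4.0"},   %% a dependency with a lower bound
     {lager, any}].          %% any available version will do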

### Outputs

erls outputs a dotrel file containing the direct and transitive
dependencies for the project.
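
This is just the standard OTP rel format, so anything downstream
(systools, reltool, or our own tools) can consume it directly. For
example, with illustrative version numbers:

    %% A dotrel file as produced by erls.
    {release, {"myrelease", "0.1.0"},
     {erts, "5.9"},
     [{kernel, "2.15"},
      {stdlib, "1.18"},
      {sasl, "2.2"},
      {myapp, "0.1.0"}]}.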


Resolve the dependencies in the dotrel file (erlr)
--------------------------------------------------

erlr takes a dotrel file and the metadata produced by erld and pulls
down the specified OTP Applications and versions. Depending on the
nature of the package and repository it may optionally build these
dependencies. By default, it should output to a known global location,
however, an alternate location is specifiable on the command line.

### Inputs

erlr takes as input a dotrel file, the metadata, and an output
location. The metadata may be omitted from the command line, in which
case erlr will look for it in the known global location. The output
location may also be omitted, in which case the applications are
pulled down to a known global location.

### Output

The output from erlr is all of the *realized*, compiled OTP
applications in the correct <app-name>-<app-vsn> format in the output
location specified.

**NOTE** it could be that erld and erlr are the same thing, since they
both deal with interacting with the remote repos.

Build the project (erlb)
------------------------

erlb is the most complex of the suite. It takes the output of erlr as
dependencies and builds the OTP Application(s) specified.

### Inputs

A list of locations and/or applications to build and the directory
containing the dependencies.

### Outputs

The built OTP Applications

**NOTE** rebar or sinan could probably fill this role easily without
much change.

Using the dotrel and build result assemble the release (erla)
-------------------------------------------------------------

erla assembles a release into a release directory in the correct OTP
format. It takes the dotrel/dotrelup and builds the required script
and boot files.
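
Most of the heavy lifting for the script and boot files already exists
in OTP's systools, so erla could end up being a fairly thin wrapper; a
minimal sketch, with illustrative names and paths:

    %% Generate myrelease.script and myrelease.boot from myrelease.rel,
    %% resolving the applications from the lib directory.
    ok = systools:make_script("myrelease",
                              [{path, ["lib/*/ebin"]},
                               {outdir, "releases"}]).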

### Inputs

The dotrel file, paths to any configuration required, and the erlb
built apps.

### Outputs

A directory containing a complete OTP Release.


Package the release (erlp)
--------------------------

erlp simply packages up the release. By default this should probably
be the usual distribution tarball. However, it could also output
distribution-specific package files as needed.
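
For the default tarball OTP again provides most of the machinery via
systools; a minimal sketch (options and paths are illustrative, and a
real erlp would need to handle erts and OS specifics properly):

    %% Pack the release, including the running erts, into
    %% releases/myrelease.tar.gz.
    ok = systools:make_tar("myrelease",
                           [{path, ["lib/*/ebin"]},
                            {erts, code:root_dir()},
                            {outdir, "releases"}]).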

### Inputs

The directory containing the release and a type specifier for the output.

### Outputs

The specified distributable unit.

**NOTE** erla and erlp might end up being the same application.

Tristan Sloughter

Feb 22, 2012, 10:12:26 PM
to erlware-...@googlegroups.com
I haven't read all this yet... But I forgot to ask, with regard to binary packages, whether anyone has tried CEAN2.

Tristan




Tim Watson

Feb 23, 2012, 5:08:17 PM
to erlware-...@googlegroups.com
On 23 February 2012 03:12, Tristan Sloughter <tristan....@gmail.com> wrote:
> I haven't read all this yet... But I forgot to ask, in regards to binary
> packages, if anyone has tried CEAN2.
>

Haven't tried it, but looked at the source code and decided not to.

Tristan Sloughter

Feb 23, 2012, 5:10:04 PM
to erlware-...@googlegroups.com
Did you look at the new CEAN or the old? I feel like the old was a bunch of shell scripts but the new isn't. I'll have to go look myself...

Tim Watson

Feb 23, 2012, 5:14:31 PM
to erlware-...@googlegroups.com
Doesn't look to have changed much. The really scary bit was the
screen-scraping of HTML to get the dependencies. Maybe I was looking
at the wrong repo...

Tristan Sloughter

Feb 23, 2012, 5:17:10 PM
to erlware-...@googlegroups.com
It is impossible to find the source repo and distinguish CEAN2 from CEAN1 as far as I can tell right now. Ugh

Tristan Sloughter

Feb 23, 2012, 5:18:06 PM
to erlware-...@googlegroups.com
But it caused me to run into this one again: http://jkvor.com/erlang-package-manager

Tim Watson

Feb 23, 2012, 5:25:02 PM
to erlware-...@googlegroups.com
It's actually really good - I've submitted a couple of features myself
and use it to grab new stuff to play with before I decide if I want to
use it. I stopped putting stuff into ERL_LIBS a while ago and keep my
installs completely clean, relying on (project) local dependencies as
part of my build processes. It saves getting nasty surprises when you
forget what you've put in there.

Jake's epm works by pulling down the package and running a shell command
(which it either tries to figure out, or you provide on the command
line) to build it. It does create the install directory with the
correct <app>-<version> structure.

I can recommend it as an alternative to agner, although I don't use either for
anything serious.

Tim Watson

Feb 23, 2012, 5:52:59 PM
to erlware-...@googlegroups.com
I'm coming to play for sure.

On 23 February 2012 00:28, Eric Merritt <ericbm...@gmail.com> wrote:
> Torben Hoffmann, Tim Watson and I have been talking a bit about how to
> manage the complete lifecycle of an Erlang/OTP release. I had some
> thoughts today that I talked over with Torben. What follows is a rough
> outline of what I have been thinking.
>
>
> Right now we have two competing systems
>
> * Rebar (which is quite popular)
> * Sinan (which is much less popular but more correct)
>
> These build systems approach the problem of building an OTP system in
> very different ways. However, they both take the approach of managing
> the entire process from dependency resolution until release
> management. So while rebar does a better, more complete, job of actual
> building than sinan does, and sinan does a better job of dependency
> management you have no choice but to adopt one or the other.  We seem
> to have lost the unix philosophy of small tools dedicated to a single
> purpose with standard inputs and standard outputs so that they can be
> chained together at need. So people can use the right tool for the
> right job and substitute tools out at need.

I am 100% behind this approach. I think it's absolutely right to build
a protocol between the tools and let people come in and replace them
if they need/want to.

>
> I propose that we create a tool chain based in this idea to do the
> complete build cycle from dependency resolution to release management
> and everything in between. For the purposes of discussion I will split
> the complete build process into parts and give each part a name to
> refer to it. Then I will talk about how to build a tool to address
> each part. The parts of the build process are as follows:
>
> 1. Assemble Metadata on available OTP Applications (erld)
> 2. Process dependency specs to produce a dotrel file (erls)
> 3. Resolve the dependencies in the dotrel file (erlr)
> 4. Build the project (erlb)
> 5. Using the dotrel and build result assemble the release (erla)
> 6. Package the release (erlp)
>
> These names are not meant to be final.
>

Final, I hope. ;)

>
> Assemble Metadata on available OTP Applications (erld)
> ------------------------------------------------------
>
> This task assumes that there are repositories out there that contain
> OTP applications. It is left as a task for us to define what a
> repository is and how to access it.
>

We need to define how we interact with the repository I think, and
then people can implement it on github API calls, an SCM repository,
maven/ivy/nexus, erlware/faxien, cean (if you don't mind scraping
HTML), rpm/deb, etc, etc....

> However, for the moment lets assume that this infrastructure exists. In
> that instance we want to have knowledge of all of the OTP Applications
> available to us.
>
> The first step in our tool chain is to build erld, a command that goes
> out to the universe of OTP apps and pulls down complete metadata about
> the apps. This metadata should include the following:
>
> * The contents of the dotapp file
> * Any dependency information (probably should be in the dotapp file)
> * The location of the application

* whether or not it is a source or binary distribution
* if binary: the minimum (and by inference maximum) version of erts it
will run on
* if binary: any operating system and/or architecture constraints
(ports/drivers, stuff compiled with HIPE, etc)

I know we're not trying to build a package manager, but we do at least
need to think about these things just like you did in faxien. Or do
you think those checks belong somewhere else? I would've thought you
want to download the right thing for your environment. Perhaps if
you're building on a mac from binary artefacts, but planning on
deploying to linux, you should have a way to state that also.

This does make the local repository a bit more complex to manage. I
use kerl to manage multiple (clean) OTP installations, often for
testing things on SMP vs non-SMP, with/without HIPE, etc. So I need a
way to install multiple clashing applications:

erlxsl-0.0.2
- hipe (os-x x86_64)
- hipe (linux x86)
- hipe (linux x86_64)
- built on R13
- built on R14
- built on R15

I don't know if a folder structure or indexed metadata is the best way
to manage this (or whether to push the binaries into a running local
couch-db instance or whatever), but it needs consideration.

Sorry - I realise you're not onto putting the built artefacts on disk
yet, I just ended up there.

>
> It should put this metadata in an easily consumed file in a
> configurable location.
>

This is what we do once we've built up a local index (of available
artefacts) right? Sounds sensible to me - this is how most package
management tools seem to work (yum, apt, etc).

Ivy and Maven do something quite different - they query the
repositories as and when they want an individual item, but I think
that approach makes it harder to do the unix tool chain thing, which
is very worthwhile.

>
> ### Inputs
>
> The inputs to erld should be a list of repositories and on disk
> locations for OTP apps.
>
> It should be able to take these from configuration files (probably in
> /etc/<something> and ~/.<something>) and via command line args.
>

Sounds good. It would be nice to have a command to add them to your
local config file too.

> ### Outputs
>
> erld should output an easily queryable file containing the universe of
> app metadata. This file should make two types of queries
> trivial.
>
> * Given an app name it should be trivial to return a list of versions.
> * Given an app name, version pair it should be trivial to return the
>  OTP App metadata and dependencies for that app/version.
>
> This output option should be specifiable on the command line as well
> as defaulting to a known location if no option is specified.
>

I'm not 100% sure I agree with this. It sounds right, but I wonder
whether 'erld' should actually provide a query interface (either
command line or API based) that you can ask those questions of. This
insulates the consumer from the internal structure (implementation) of
the index file, and also means an implementor can vary whether the
index is a file, a database, or whatever they like.

I like the unix tool chain style, and I get the whole simple
pipes/files as glue thing, but I do wonder whether a bit more
insulation here might be a good idea. Just food for thought.

May I also suggest that erld should be aware of other constructs
besides OTP applications, namely the publisher. This way I can specify
that I want hyperthunk's rebar binary as a dependency, not basho's (or
vice versa). I do realise that if you're downloading multiple versions
of the same artefact with all these additional factors, your local
repository is going to look a bit mental.....

/basho/rebar/1.0
/hyperthunk/rebar/1.0

This adds to the already messy repository management problem of having
multiple OTP installations and all that jazz. But I think it's
necessary, especially in the open source community where so much
forking goes on. Many, many people have come on the rebar mailing list
asking for the dependency resolver to handle branches, tags and
commits, let alone forks. Now personally I like tagged versions of
stuff, but I think having the ability to do this is important.

It also goes a tiny way towards getting past Erlang's beautifully
simple flat namespace, at least in terms of resolving applications.

>
> Process dependency specs to produce a dotrel file (erls)
> --------------------------------------------------------
>
> erls depends on the output of erld. It consumes the metadata file that
> erld produces. The job of erls is to take a set of applications and
> constraints and produce a standard rel file that contains all direct
> and transitive dependencies for the release.
>

This is a wicked idea, my comments about the interface to erld notwithstanding.

> ### Inputs
>
> erls takes in the metadata produced by erld. By default it consumes
> metadata in a known global location. However, the metadata file should
> be specifiable on the command line. It also takes as input a list of
> applications and constraints to direct the resolution.
>
> ### Outputs
>
> erls outputs a dotrel file containing the direct and transitive
> dependencies for the project.
>
>
> Resolve the dependencies in the dotrel file (erlr)
> --------------------------------------------------
>
> erlr takes a dotrel file and the metadata produced by erld and pulls
> down the specified OTP Applications and version. Depending on the
> nature of the package and repository it may optionally build these
> dependencies. By default, it should output to a known global location,
> however, an alternate location is specifiable on the command line.
>

This is the bit where the complexity of storing many binaries for the
same publisher-application-version artefact has to be dealt with then.

> ### Inputs
>
> erlr takes as input a dotrel file, the metadata, and an output
> location. The metadata may be omitted from the command line and erlr
> will look for the metadata in the known global location. The output
> location may also be omitted, in that case the applications are pulled
> down to a known global location.
>
> ### Output
>
> The output from erlr is all of the *realized*, compiled OTP
> applications in the correct <app-name>-<app-vsn> format in the output
> location specified.
>
> **NOTE** it could be that erld and erlr are the same thing since they
>  both deal with interacting with the remote repos.
>

Yes I think that would make the most sense. You could write the spec
separately and just symlink them (or provide some kind of batch
wrapper for windows versions below Vista).

Yes you'd also want the ability to do some of the things
https://github.com/hyperthunk/rebar_dist_plugin does.

> ### Inputs
>
> The directory containing the release, a type specifier for the output.
>
> ### Outputs
>
> The specified distributable unit.
>
> **NOTE** erla and erlp might end up being the same application.
>


I would like to add to this that there needs to be a way for me to:

- write my OTP application
- build it using whatever implementation of `erlb' floats my boat
- deploy it to a remote repository that the tool chain knows about (or
can be configured to use)

Not every packaging task builds a release - OTP applications can live
on their own, despite the fact that production systems should *always*
be based on a proper release (at least where I work anyway).

This is really good stuff - I think you guys are on the right track
with this. Very interested to see what others think, and what will
happen with the discussions about repositories. I, like you Eric -
IIRC - am not fond of downloading source packages, although I can live
with it if no binary artefact is found.

Tim Watson

Feb 23, 2012, 6:07:23 PM
to erlware-...@googlegroups.com
On 23 February 2012 22:52, Tim Watson <watson....@gmail.com> wrote:

>>
>> The output from erlr is all of the *realized*, compiled OTP
>> applications in the correct <app-name>-<app-vsn> format in the output
>> location specified.
>>

Personally I'd really like it if there were support for a binary
artefact packaged as an archive file, just getting dropped in place.
That way it'll go fetch hyperthunk/erlxsl-0.0.1.ez and I'm ready to
go. I guess this is low priority until the archive implementation in
OTP is made non-experimental though, which given the state of
'parameterised modules' could take years. :(
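
By "dropped in place" I mean nothing fancier than pointing the code
path inside the archive, along these lines (archive name and path are
illustrative):

    %% Make the modules inside the archive loadable.
    code:add_patha("lib/erlxsl-0.0.1.ez/erlxsl-0.0.1/ebin").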

Tim Watson

Feb 23, 2012, 6:10:09 PM
to erlware-...@googlegroups.com
Although rabbitmq use .ez archives as their plugin distribution
mechanism of choice: http://www.rabbitmq.com/plugins.html.

Eric Merritt

Feb 23, 2012, 6:37:08 PM
to erlware-...@googlegroups.com
>> management you have no choice but to adopt one or the other.  We seem
>> to have lost the unix philosophy of small tools dedicated to a single
>> purpose with standard inputs and standard outputs so that they can be
>> chained together at need. So people can use the right tool for the
>> right job and substitute tools out at need.
>
> I am 100% behind this approach. I think it's absolutely right to build
> a protocol between the tools and let people come in and replace them
> if they need/want to.

Cool, let's hash out something reasonable and maybe take it to the
erlang list and see what happens. I can't build the entire thing myself
at the moment, though I am certainly willing to pull the relevant
things out of sinan or what have you.

>>
>> 1. Assemble Metadata on available OTP Applications (erld)
>> 2. Process dependency specs to produce a dotrel file (erls)
>> 3. Resolve the dependencies in the dotrel file (erlr)
>> 4. Build the project (erlb)
>> 5. Using the dotrel and build result assemble the release (erla)
>> 6. Package the release (erlp)
>>
>> These names are not meant to be final.
>>
>
> Final, I hope. ;)

lol, yes indeed.


>>
>> Assemble Metadata on available OTP Applications (erld)
>> ------------------------------------------------------
>>
>> This task assumes that there are repositories out there that contain
>> OTP applications. It is left as a task for us to define what a
>> repository is and how to access it.
>>
>
> We need to define how we interact with the repository I think, and
> then people can implement it on github API calls, an SCM repository,
> maven/ivy/nexus, erlware/faxien, cean (if you don't mind scraping
> HTML), rpm/deb, etc, etc....

I agree. See my recent conversation with Torben. I am leaning towards
a metadata repo that simply points off to other canonical locations.


>> However, for the moment lets assume that this infrastructure exists. In
>> that instance we want to have knowledge of all of the OTP Applications
>> available to us.
>>
>> The first step in our tool chain is to build erld, a command that goes
>> out to the universe of OTP apps and pulls down complete metadata about
>> the apps. This metadata should include the following:
>>
>> * The contents of the dotapp file
>> * Any dependency information (probably should be in the dotapp file)
>> * The location of the application
>
> * whether or not it is a source or binary distribution

yes, absolutely.


> * if binary: the minimum (and by inference maximum) version of erts it
> will run on

This may very well need to be an option on the source as well. We
might also be able to deduce this from the beam files, at least the
lower bound; faxien used to do this. That's an implementation detail
though.
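
Roughly something like this, as a sketch only (I don't remember exactly
how faxien did it, and mapping the compiler version to a minimum erts
would still be a heuristic):

    %% Read the compile_info chunk from a beam and report the compiler
    %% version it was built with, as one input for a lower bound.
    -module(beam_probe).
    -export([compiler_vsn/1]).

    compiler_vsn(BeamPath) ->
        case beam_lib:chunks(BeamPath, [compile_info]) of
            {ok, {_Mod, [{compile_info, Info}]}} ->
                {ok, proplists:get_value(version, Info)};
            {error, beam_lib, Reason} ->
                {error, Reason}
        end.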

> * if binary: any operating system and/or architecture constraints
> (ports/drivers, stuff compiled with HIPE, etc)

Yes as well. Hmm, we would need to have the ability to point to
multiple tarballs for the binaries. You might have one for i686 and
one for amd64.

> I know we're not trying to build a package manager, but we do at least
> need to think about these things just like you did in faxien. Or do
> you think those checks belong somewhere else? I would've thought you
> want to download the right thing for your environment. Perhaps if
> you're building on a mac from binary artefacts, but planning on
> deploying to linux, you should have a way to state that also.

For this discussion I was assuming we would start with source-only
packages along with some kind of build instructions and add direct
binary support after the fact. That may still be a good way to get
started.

> This does make the local repository a bit more complex to manage. I
> use kerl to manage multiple (clean) OTP installations, often for
> testing things on SMP vs NON, with/without HIPE, etc. So I need a way
> to install multiple clashing applications

Not really, if all we do is host the metadata in github or something
like that. It makes managing the metadata complex, but even then not
terribly. On-disk management of the installation is really trivial
since it doesn't have to do anything but store the binary result.

I am thinking that you *always* download to a location you specify
with parameters you specify. That would allow you to have the
properties you are looking for.

>
> I don't know if folder structure of indexed metadata is the best way
> to manage this (or push the binaries into a running a local couch-db
> instance or whatever) but it needs consideration.

Less infrastructure is better. That is one very hard lesson we learned
with faxien. Not only that, but rebuilding the sources for every new
erlang (or even making sure that happened) is really tough. Hmm, you
are convincing me that source packages with good build/management
rules are going to be a much simpler option for us. Though I don't
think that is your purpose.

>
> Sorry - I realise you're not onto putting the built artefacts on disk
> yet, I just ended up there.

No problem. This is all free form munging at the moment.

>>
>> It should put this metadata in an easily consumed file in a
>> configurable location.
>>
>
> This is what we do once we've built up a local index (of available
> artefacts) right? Sounds sensible to me - this is how most package
> management tools seem to work (yum, apt, etc).

Yup. exactly.

>
> Ivy and Maven do something quite different - they query the
> repositories as and when they want an individual item, but I think
> that approach makes it harder to do the unix tool chain thing, which
> is very worthwhile.

Yeah, it also means you go to the net a lot more often. In the
original pre-0.10.0 versions of sinan it did this, and that was the #1
complaint.

>>
>> ### Inputs
>>
>> The inputs to erld should be a list of repositories and on disk
>> locations for OTP apps.
>>
>> It should be able to take these from configuration files (probably in
>> /etc/<something> and ~/.<something>) and via command line args.
>>
>
> Sounds good. It would be nice to have a command to add them to your
> local config file too.

I agree. Great feature.

>> ### Outputs
>>
>> erld should output an easily queryable file containing the universe of
>> app metadata. This file should make two types of queries
>> trivial.
>>
>> * Given an app name it should be trivial to return a list of versions.
>> * Given an app name, version pair it should be trivial to return the
>>  OTP App metadata and dependencies for that app/version.
>>
>> This output option should be specifiable on the command line as well
>> as defaulting to a known location if no option is specified.
>>
>
> I'm not 100% sure I agree with this. It sounds right, but I wonder
> whether 'erld' should actually provide a query interface (either
> command line or API based) to which you can ask those questions. This
> insulates the consumer from the internal structure (implementation) of
> the index file, and also means you can vary whether the index is a
> file or a database or whatever an implementor likes.

I like that idea as long as we say that erld needs to provide an
erlang library interface as well. If you are doing this inside an
erlang app you shouldn't have to call out to the command line. However,
I figure these things are all going to be in erlang anyway, so that
shouldn't be a problem.
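
Something along these lines, purely as a sketch; the module name, the
index format and the return shapes are placeholders, not a design:

    %% Hypothetical query API over the erld metadata index.
    -module(erld_index).
    -export([versions/2, metadata/3]).

    %% All known versions of an application in the index.
    versions(IndexFile, AppName) ->
        with_index(IndexFile,
                   fun(Index) ->
                       [Vsn || {App, Vsn, _Meta} <- Index, App =:= AppName]
                   end).

    %% The metadata (dotapp contents, deps, location) for one app/version.
    metadata(IndexFile, AppName, Vsn) ->
        with_index(IndexFile,
                   fun(Index) ->
                       case [M || {A, V, M} <- Index,
                                  A =:= AppName, V =:= Vsn] of
                           [Meta | _] -> Meta;
                           []         -> undefined
                       end
                   end).

    with_index(IndexFile, Fun) ->
        case file:consult(IndexFile) of
            {ok, Terms}     -> {ok, Fun(Terms)};
            {error, Reason} -> {error, Reason}
        end.

The same functions could back both a command line front end and direct
calls from the other tools.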

>
> I like the unix tool chain style, and I get the whole simple
> pipes/files as glue thing, but I do wonder whether a bit more
> insulation here might be a good idea. Just food for thought.

If we give the finest granularity of tools, the insulation can be
built on top of it. I don't actually expect folks to use these tools
directly all that much. Some folks will build makefiles; we will
probably provide some higher-level tool as well. Think of git, where
the git command exists mainly to provide insulation over all the more
granular commands, but you can get to those granular commands as
needed.

> May I also suggest that erld should be aware of other constructs
> besides OTP applications, namely the publisher. This way I can specify
> that I want hyperthunk's rebar binary as a dependency, not basho's (or
> vice versa). I do realise that if you're downloading multiple versions
> of the same artefact with all these additional factors, your local
> repository is going to look a bit mental.....

This really raises the question. So far I have been assuming that
version numbers matter a lot. If hyperthunk has a release and basho
has a release and they are different, they should absolutely have
different version numbers. That makes things much, much easier for us.
If that's not the case we need to think it out very well.

>  /basho/rebar/1.0
>  /hyperthunk/rebar/1.0

I really hope that if they are different they are not both 1.0. I
suspect that they are, because people are generally freaking idiots,
but it shouldn't be that way. In this case it might be possible to
'fix' the versions so basho becomes 1.0-basho and hyperthunk becomes
1.0-hyperthunk, though that scares me.

> This adds to the already messy repository management problem of having
> multiple OTP installations and all that jazz. But I think its
> necessary, especially in the open source community where so much
> forking goes on. Many, many people have come on the rebar mailing list
> asking for the dependency resolver to handle branches, tags and
> commits, let alone forks. Now personally I like tagged versions of
> stuff, but I think having the ability to do this is important.
>
> It also goes a tiny way towards getting past Erlang's beautifully
> simple flat namespace, at least in terms of resolving applications.

Hmmm, fair enough. Let's break this out into another thread. If we
support this, it's something that is going to be hard to get right,
and we will absolutely need to get it right.

>>
>> Process dependency specs to produce a dotrel file (erls)
>> --------------------------------------------------------
>>
>> erls depends on the output of erld. It consumes the metadata file that
>> erld produces. The job of erls is to take a set of applications and
>> constraints and produce a standard rel file that contains all direct
>> and transitive dependencies for the release.
>>
>
> This is a wicked idea, my comments about the interface to erld notwithstanding.

This is what sinan already does, and it is wicked as hell.

>>
>> erlr takes a dotrel file and the metadata produced by erld and pulls
>> down the specified OTP Applications and version. Depending on the
>> nature of the package and repository it may optionally build these
>> dependencies. By default, it should output to a known global location,
>> however, an alternate location is specifiable on the command line.
>>
>
> This is the bit where the complexity of storing many binaries for the
> same publisher-application-version artefact has to be dealt with then.

Yeah, again something we have to get right.

>> The directory containing the release, a type specifier for the output.
>>
>> ### Outputs
>>
>> The specified distributable unit.
>>
>> **NOTE** erla and erlp might end up being the same application.
>>
>
> I would like to add to this that there needs to be a way for me to
>
> - write my OTP application
> - build it using whatever implementation of `erlb' floats my boat
> - deploy it to a remote repository that the tool chain knows about (or
> can be configured to use)

I am less sure about the *deploy*. That depends, but it's something we can talk out.

> Not every packaging task builds a release - OTP applications can live
> on their own, despite the fact that production systems should *always*
> be based on a proper release (at least where I work anyway).

Agreed on both counts.

>
> This is really good stuff - I think you guys are on the right track
> with this. Very interested to see what others think, and what will
> happen with the discussions about repositories. I, like you Eric -
> IIRC - am not fond of downloading source packages, although I can live
> with it if no binary artefact is found.

I should be clear that at the moment I am the only active erlware
developer. Martin doesn't really code erlang much any more, and while
Jordan and Tristan help where they can (mostly blogging and code
reviewing) it's mostly just me. So while I love these ideas, it's not
something I can probably do by myself, not in any reasonable time
frame. So if you like the idea please consider stepping up to
implement some parts of them. Some of these, like 'erls' and 'erlr', I
can implement trivially by using chunks of sinan, and I am more than
happy to do that. Some of the others (erld, etc.) need to be written
from scratch. I think rebar/sinan can provide erlb so there is no real
need to do much there. At least not yet.

Eric Merritt

Feb 23, 2012, 6:38:26 PM
to erlware-...@googlegroups.com

This is actually not a big deal and pretty trivial to do. Once we have
a fetcher it should be just a few minutes' work to add. You should
realize, though, that ez does not seem to work for include or priv
files, which I have found to be a huge problem.

Tim Watson

Feb 23, 2012, 7:02:14 PM
to erlware-...@googlegroups.com
On 23 February 2012 23:38, Eric Merritt <ericbm...@gmail.com> wrote:
> On Thu, Feb 23, 2012 at 5:07 PM, Tim Watson <watson....@gmail.com> wrote:
>> On 23 February 2012 22:52, Tim Watson <watson....@gmail.com> wrote:
>>
>>>>
>>>> The output from erlr is all of the *realized*, compiled OTP
>>>> applications in the correct <app-name>-<app-vsn> format in the output
>>>> location specified.
>>>>
>>
>> Personally I'd really like it if there were support for a binary
>> artefact packaged as an archive file, just getting dropped in place.
>> That way it'll go fetch hyperthunk/erlxsl-0.0.1.ez and I'm ready to
>> go. I guess this is low priority until the archive implementation in
>> OTP is made non-experimental though, which given the state of
>> 'parameterised modules' could take years. :(
>
> This is actually not a big deal and pretty trivial to do. Once we have
> a fetcher it should be just a few minutes work to add. You should
> realize that ez does not seem to work for include or priv files. Which
> I have found to be a huge problem.
>

Yes, I've run into that too. The OTP team needs to fix erl_prim_loader.

Tim Watson

Feb 23, 2012, 7:08:52 PM
to erlware-...@googlegroups.com
On 23 February 2012 23:37, Eric Merritt <ericbm...@gmail.com> wrote:
>
> I should be clear that at the moment I am the only active erlware
> developer. Martin doesn't really code erlang much any more and while
> Jordan and Tristan help where they can (mostly blogging and code
> reviewing) its mostly just me. So while I love these ideas its not
> something I can probably do by myself. Not in any reasonable time
> frame. So if you like the idea please consider stepping up to
> implement some parts of them. Some of these like 'erls' and 'erlr' i
> can implement trivially by using chunks of sinan and I am more then
> happy to do that. Some of the others (erld, etc) need to be written
> from scratch. I think rebar/sinan can provide erlb so there is no real
> need to do much there. At least not yet.
>

I'm not going to respond to all the points you've raised as it's late
and I need to sleep, but I think we're in agreement on almost all of
these points. I do agree that building from source is probably
safer/easier; as long as it gets built once and only has to be rebuilt
when I'm running on another (incompatible) erts version or whatever
(or if it was HIPE compiled for 32-bit and I'm not running a
HIPE-compiled erts in 64-bit mode), then I'm fine with that.

Yes I can step up and do plenty of implementing, although my
development will be slow as I have a slightly mad personal life at the
moment and my job is busier than hell. Hacking around on this in what
little spare time I've got sounds rather sanity restoring though, so
yes that's fine.

What I do want to do is hold fire until we've specified things in
sufficient detail that I know what I'm going to do before kicking it
off. And I think there will need to be some discussion about the tool
chain that we should use to build the tool chain, which after all your
bootstrapping stuff on Joxa I'm sure you'll be in a good place to
advise on. :)

Cheers,

Tim

Eric Merritt

Feb 23, 2012, 7:56:20 PM
to erlware-...@googlegroups.com
>>
>
> I'm not going to respond to all the points you've raised as it's late
> and I need to sleep, but I think we're in agreement on almost all
> these points. I do agree that building from source is probably
> safer/easier, as long as it gets built once and only has to be rebuilt
> when I'm running on another (incompatible) erts version or whatever
> (or if it was HIPE compiled for 32 bit and I'm not running a HIPE
> compiled erts in 64bit mode) then I'm fine with that.

I suspect you are right. We should think about the local (on disk)
repo management.

>
> Yes I can step up and do plenty of implementing, although my
> development will be slow as I have a slightly mad personal life at the
> moment and my job is busier than hell. Hacking around on this in what
> little spare time I've got sounds rather sanity restoring though, so
> yes that's fine.

I am in the same boat, so no worries there. Some of this might go
quickly depending on what we can reuse, but the new stuff will go
slow. Fortunately, each thing is usable in its own right so we get
incremental value.


>
> What I do want to do is hold fire until we've specified things in
> sufficient detail that I know what I'm going to do before kicking it
> off. And I think there will need to be some discussion about the tool
> chain that we should use to build the tool chain, which after all your
> bootstrapping stuff on Joxa I'm sure you'll be in a good place to
> advise on. :)

Again, we are on the same page with this. Bootstrapping is mind-bendy fun.

> Cheers,
>
> Tim

jose....@gmail.com

May 7, 2012, 4:17:38 PM
to erlware-...@googlegroups.com
I just read the whole thread, it was a great read. :)

There are two things I would like to add:

1) Regarding supporting both:

basho/rebar/1.0
hyperthunk/rebar/1.0

In my opinion it is wrong to treat them as the same thing, since their versions will certainly diverge.

A possible solution is to simply require them to have different names. For example, the first one will likely be named "rebar" (since it is the official one) and the second one can be named "hyperthunk-rebar" (or whatever hyperthunk would like to name it). I think the build tool should not care about mangling the names.

2) Regarding the support of different erts, hipe and erlang versions, having different directories where the built artifacts are stored may be a simple but efficient solution. For example in Ruby we have:

~/.rvm/ruby-1.9.3-p0
~/.rvm/jruby-1.6.5

In our case, we could have:

~/.evm/erts-5.9
~/.evm/erts-5.9-hipe-64

The nice thing about this structure is that the directory can be anything, so if I want I could have directories per project (an env variable should be enough to set it):

~/.evm/erts-5.9-hipe-64@myproject

And if it happens that you no longer use HiPE 32 bits, just get rid of the whole directory.

jose....@gmail.com

May 7, 2012, 4:34:24 PM
to erlware-...@googlegroups.com
basho/rebar/1.0
hyperthunk/rebar/1.0

In my opinion it is wrong to treat them as the same thing, since their versions will certainly diverge.
A possible solution is to simply require them to have different names.

Just to make it clear, they will have different names in the *cloud*. Internally, they will define the same application called rebar.
A conflict will arise if both are used in the same project, but that will happen to any pair of projects that define the same modules.

Eric Merritt

May 8, 2012, 1:20:03 PM
to erlware-...@googlegroups.com
Jose,

In general you are absolutely right. Tim and I continued talking about
many of these same points in later emails. I will try to get them all
pulled together into a coherent document in the next few days.

Eric