Compatibility etiquette for apps, with cabal sandboxes and `stack`


Paolo Giarrusso

Nov 29, 2015, 6:54:28 AM
to Haskell-cafe
Hi all,

IIUC, people used to spend nontrivial effort making their Haskell tools work across a range of dependencies, and be careful about dropping support for older ones. 

Do cabal sandboxes or Stack reduce that need, at least for applications?* Or conversely, how bad is it to restrict support to users having them? I guess I am asking about common policies, but this probably depends on adoption of those tools.

As in:

a) without sandboxes or Stack, packages need to build with whatever environment is there.
b) with either of sandboxes or Stack, you can just specify the environment and get it. With Stack, you can even specify whichever (supported) GHC you want, without impacting the environment.

Concretely, Control.Monad.Trans.Error is deprecated in transformers-0.4.3.0, but the replacement Control.Monad.Trans.Except does not exist in transformers-0.3.x, and thus in the last-but-one Haskell Platform (2014.2.0.0). Hence, fixing the deprecation would make this application hard to install** for some users, including past me (around two months ago).*** Would you apply such a patch? Would you keep it out? Or would you even use CPP (something I'd really like to avoid)?
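For reference, here is a small, self-contained sketch of the Except-based replacement (parsePositive is a made-up example; it needs transformers >= 0.4):

```haskell
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

-- a made-up validator, just to exercise throwE/runExceptT
parsePositive :: Monad m => Int -> ExceptT String m Int
parsePositive n
  | n > 0     = return n
  | otherwise = throwE "not positive"

main :: IO ()
main = do
  ok  <- runExceptT (parsePositive 3)
  bad <- runExceptT (parsePositive 0)
  print ok   -- Right 3
  print bad  -- Left "not positive"
```

On transformers-0.3.x the module simply isn't there, which is the whole problem.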

Cheers,
Paolo

* I take for granted libraries are a different matter — composing libraries means intersecting their supported versions.
** I'll omit detailing "hard to install", because that's a matter of opinion; I considered `constraint: transformers installed` necessary in my cabal config (https://www.vex.net/~trebla/haskell/cabal-cabal.xhtml), and I've read that's becoming the default in the HP, so I think users are entitled to this opinion.
*** This particular app has a tiny userbase, so my question is a curiosity.

Michael Snoyman

Nov 29, 2015, 7:23:58 AM
to Paolo Giarrusso, Haskell-cafe

For your particular example, you can use the transformers-compat package. But to answer your general question: for applications, I strongly advocate supporting just a single set of package version combos for dependencies. Stack does do this by default, but cabal freeze files can get you most of the way there too (barring some corner cases with different OSes and OS-specific packages).
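Concretely, both routes can be sketched as package-description fragments (all version numbers here are illustrative, not prescriptive):

```
-- .cabal fragment: transformers-compat backports
-- Control.Monad.Trans.Except to older transformers
build-depends: transformers        >= 0.3 && < 0.5
             , transformers-compat >= 0.4

-- cabal.config, as written by `cabal freeze`:
-- pins the exact version of every dependency
constraints: transformers ==0.4.3.0,
             transformers-compat ==0.4.0.4
```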

I'm sure others will disagree with this recommendation, but I see no reason to absorb a large amount of compatibility work just so that two installations of the same application may end up behaving differently at runtime because of differences in the behavior of a dependency.


_______________________________________________
Haskell-Cafe mailing list
Haskel...@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe

Omari Norman

Nov 29, 2015, 7:30:15 AM
to haskell Cafe
On Sun, Nov 29, 2015 at 6:54 AM, Paolo Giarrusso <p.gia...@gmail.com> wrote:
Hi all,

IIUC, people used to spend nontrivial effort making their Haskell tools work across a range of dependencies, and be careful about dropping support for older ones. 

Given the tools we had, this was not easy.  I was never aware of any tool that substantially helped with making sure that a package remained compatible with the range of packages that was theoretically allowed by the package's .cabal file.
 
Do cabal sandboxes or Stack reduce that need, at least for applications?* Or conversely, how bad is it to restrict support to users having them? I guess I am asking about common policies, but this probably depends on adoption of those tools.


IMO there is really no one right answer to this question.  It depends on how nice you are, or whether someone is paying you.  Obviously if someone is paying you, do as she says or do what she needs.  If no one is paying you, then how nice do you want to be by expending the work?  Honestly I'm not very nice.  I keep stuff working in the current Stackage Nightly but that's it.  Maintaining compatibility with huge dependency matrices is just an enormous amount of work.  In my view, someone installing an application can just use stack and Stackage.


Michael Orlitzky

Nov 29, 2015, 10:40:55 AM
to haskel...@haskell.org
On 11/29/2015 06:54 AM, Paolo Giarrusso wrote:
> Hi all,
>
> IIUC, people used to spend nontrivial effort making their Haskell tools
> work across a range of dependencies, and be careful about dropping
> support for older ones.
>
> Do cabal sandboxes or Stack reduce that need, at least for
> applications?* Or conversely, how bad is it to restrict support to users
> having them? I guess I am asking about common policies, but this
> probably depends on adoption of those tools.
>

If your application can't be installed through a package manager, a lot
of end users and every system administrator are going to pretend it
doesn't exist. Using stack/sandboxes hides from the developer the fact
that the ecosystem is broken, but it cannot be hidden from the end user.
Please make sure any such breakage is not your fault =)

Paolo Giarrusso

Nov 29, 2015, 11:37:12 AM
to Haskell-cafe, p.gia...@gmail.com, mic...@snoyman.com
On Sunday, November 29, 2015 at 1:23:58 PM UTC+1, Michael Snoyman wrote:

For your particular example, you can use the transformers-compat package.


Thanks, that's useful!

But to answer your general question: for applications, I strongly advocate supporting just a single set of package version combos for dependencies. Stack does do this by default, but cabal freeze files can get you most of the way there too (barring some corner cases with different OSes and OS-specific packages).

I'm sure others will disagree with this recommendation, but I see no reason to absorb a large amount of compatibility work just so that two installations of the same application may end up behaving differently at runtime because of differences in the behavior of a dependency.

 
Personally, I switched all the way to stack. I added mention of cabal sandboxes to address anybody who didn't switch yet or who prefers to stick to cabal.

Peter Simons

Nov 29, 2015, 11:37:35 AM
to haskel...@haskell.org
Omari Norman writes:

> Someone installing an application can just use stack and Stackage.

I wonder how many people would be using XMonad, git-annex, etc. if this
view were commonplace among application developers.

I'm pretty sure that a large part of the user base of these tools has no
clue "stack" even exists, and the only reason they can install
these programs is that their distribution's package manager allows
them to do so without exposing them to any Haskell-specific build tools.

Best regards,
Peter

Paolo Giarrusso

Nov 29, 2015, 11:51:34 AM
to Haskell-cafe, haskel...@haskell.org, sim...@cryp.to
On Sunday, November 29, 2015 at 5:37:35 PM UTC+1, Peter Simons wrote:
Omari Norman writes:

 > Someone installing an application can just use stack and Stackage.

I wonder how many people would be using XMonad, git-annex, etc. if this
view were common place among application developers.

I'm pretty sure that a large part of the user base of these tools has no
clue "stack" exists, even, and the only reason why they can install
these programs is because their distributions package manager allows
then to do so without exposing them to any Haskell-specific build tools.

I guess you're thinking of different markets: "uses git-annex" is indeed (probably) orthogonal even to "knows there's a programming language called Haskell" (I might be exaggerating); I agree that distributing via Hackage is not a good option there.

But developers of, say, `hlint` have (I guess arguably) another target audience — Haskell developers; my question was about this audience, and Omari Norman's answer makes sense there.

Peter Simons

Nov 29, 2015, 12:25:34 PM
to haskel...@haskell.org
Hi Paolo,

> But developers of, say, `hlint` have (I guess arguably) another
> target audience — Haskell developers; my question was about this
> audience, and Omari Norman's answer makes sense there.

personally, I never install anything with stack or cabal-install,
regardless of whether it's a tool intended for Haskell developers or
not. Stack and cabal-install are great tools to use during code
development, but neither of them has the capabilities I want from a
package manager.

Just my 2 cents,

Omari Norman

Nov 29, 2015, 1:37:35 PM
to haskell Cafe
On Sun, Nov 29, 2015 at 11:37 AM, Peter Simons <sim...@cryp.to> wrote:
Omari Norman writes:

 > Someone installing an application can just use stack and Stackage.

I wonder how many people would be using XMonad, git-annex, etc. if this
view were common place among application developers.

Maybe zero.  If an application developer cares about popularity, he should consider these things.  Not all application developers care how many users they have.

I'm pretty sure that a large part of the user base of these tools has no
clue "stack" exists, even, and the only reason why they can install
these programs is because their distributions package manager allows
then to do so without exposing them to any Haskell-specific build tools.

Distribution packagers are savvy enough to use stack.  Furthermore, distributions do not install using cabal or from Hackage.  Therefore, by your reasoning just as many people would be using XMonad, git-annex, etc. because the distribution packager would get the package, make the necessary alterations, and upload the distribution-specific package to the repository.
 

Michael Orlitzky

Nov 29, 2015, 2:12:18 PM
to haskel...@haskell.org
On 11/29/2015 01:37 PM, Omari Norman wrote:
>
> Distribution packagers are savvy enough to use stack.

Ignoring the question of *how* that might work, most distributions
forbid bundled dependencies because it creates a maintenance nightmare
and fills our users' machines with untraceable security vulnerabilities.
Literally no one does this, so I'm not sure what you're claiming here.


> Furthermore, distributions do not install using cabal or from Hackage.

They do install from Hackage, just not using cabal-install.


> Therefore, by your reasoning just as many people would be using
> XMonad, git-annex, etc. because the distribution packager would get
> the package, make the necessary alterations, and upload the
> distribution-specific package to the repository.

When using a real package manager, every package's dependencies must be
satisfied simultaneously. Using stack isolates the developer from
dependency conflicts with other packages during development, but when a
user goes to install it, he doesn't have that luxury.

If the developers of xmonad and git-annex both use stack/sandboxes, then
it's possible that one of them will introduce a dependency that
conflicts with the other, and neither developer will notice it thanks to
the sandboxes. But if a user tries to install both at the same time, he
can't, because (for example) xmonad wants foo-1.0 and git-annex wants
foo-2.0.

As a volunteer packager, I'm not going to fix that for you, I'm just
going to work on something else whose upstream isn't a pain in the ass.

Omari Norman

Nov 29, 2015, 2:39:47 PM
to haskell Cafe
On Sun, Nov 29, 2015 at 2:12 PM, Michael Orlitzky <mic...@orlitzky.com> wrote:

 
> Furthermore, distributions do not install using cabal or from Hackage.

They do install from Hackage, just not using cabal-install.

So there's a distribution out there where end users pull source from Hackage, pull source for every dependency, and then build it all with GHC?  If they're not doing what distributors like Debian do--building binaries--then what's the point of distributing at all?

When using a real package manager, every package's dependencies must be
satisfied simultaneously.

True, but ouch, ultimately this is one factor that pushed me out of desktop Linux altogether.  It's too hard to get packages for things I want to use, and then I'm fending for myself by building things.  Centrally-planned packaging does not scale.
 
Using stack isolates the developer from
dependency conflicts with other packages during development, but when a
user goes to install it, he doesn't have that luxury.

He does if he uses stack.  Grab a stack binary.  It even installs GHC for the user.

Imants Cekusins

Nov 29, 2015, 2:52:24 PM
to Omari Norman, haskell Cafe

Just thought of something that might help to deal with dependency nuisance:

What if the top package (in the namespace) contained the major version?

E.g. Cabal_1_22

Packages below might use minor versions - basically, whenever the API changes, change the version part of the affected package.

Here by package I mean part of the namespace.

This way, different versions could coexist quite painlessly..

?

Sven Panne

Nov 29, 2015, 3:13:29 PM
to Haskell Cafe
[... and now to the whole list, I hate gmail's defaults... :-]

---------- Forwarded message ----------
From: Sven Panne <sven...@gmail.com>
Date: 2015-11-29 21:12 GMT+01:00
Subject: Re: [Haskell-cafe] Fwd: Compatibility etiquette for apps, with cabal sandboxes and `stack`
To: Imants Cekusins <ima...@gmail.com>


2015-11-29 20:52 GMT+01:00 Imants Cekusins <ima...@gmail.com>:

[...] What if the top package (in namespace)  contained major version?

E.g. Cabal_1_22

below packages might use minor versions - basically, whenever api changes, change the version part of the affected package. [...]

As a developer, what should I import if my program/library works with e.g. the version range [1.20 .. 1.23]? It definitely can't be

   import Cabal_1_23.Foo.Bar

because if it later still works with e.g. 1.24, I would have to rename all my imports. And always using the lower bound is probably too restrictive, unless I'm mistaken... In general I think it's a bad idea to spread build dependencies all over the code.

Cheers,
   S.

Imants Cekusins

Nov 29, 2015, 3:23:54 PM
to Sven Panne, Haskell Cafe
> As a developer, what should I import if my program/library works with e.g. the version range [1.20 .. 1.23]?

maybe change only that part of the namespace where the API changed?

also, why not publish less frequently? I mean, nothing stops anyone
from making frequent changes to a library, but why not limit the number
of versions released to the public?

Sven Panne

Nov 29, 2015, 3:36:21 PM
to Imants Cekusins, Haskell Cafe
2015-11-29 21:23 GMT+01:00 Imants Cekusins <ima...@gmail.com>:
> As a developer, what should I import if my program/library works with e.g. the version range [1.20 .. 1.23]?

maybe change only that part of the namespace where api changed?
 
In the exporting library? In the importing library?

also, why not publish less frequently? I mean, nothing stops anyone
from making frequent changes to  a library, but why not limit number
of versions released to public?

Could you explain this in more detail, please? I don't seem to understand what you're proposing... :-/

Michael Orlitzky

Nov 29, 2015, 3:46:48 PM
to haskel...@haskell.org
On 11/29/2015 02:39 PM, Omari Norman wrote:
>
> So there's a distribution out there where end users pull source from
> Hackage, pull source for every dependency, and then build it all with
> GHC? If they're not doing what distributors like Debian does--building
> binaries--then what's the point of distributing at all?

Sure, all of the source-based distributions use the upstream tarball and
compile it. The point of creating a "package" is so that you can have a
real package manager manage your dependencies. Since most of the
dependency info is contained in the cabal file, the packages are usually
trivial. Gentoo, Nix, and FreeBSD all have tools that will convert a
hackage package into a distribution package automatically.


> When using a real package manager, every package's dependencies must be
> satisfied simultaneously.
>
>
> True, but ouch, ultimately this is one factor that pushed me out of
> desktop Linux altogether. It's too hard to get packages for things I
> want to use, and then I'm fending for myself by building things.
> Centrally-planned packaging does not scale.

Given that almost all Linux systems in existence use centrally-planned
packaging, I don't believe that last claim. How many programs can you
realistically keep installed and up-to-date with stack? Ten, twenty
maybe -- if this is a serious hobby for you. One or two hundred if it's
your full-time job?

A typical Linux system will have hundreds of packages, and a system
administrator will need to manage tens or hundreds of those systems.
It's just not possible to do with something like stack -- you need one
package manager that does everything.

I'm not saying you need to rely on e.g. Debian upstream to create
packages for you (I certainly don't), but you do need to have "system"
packages for everything installed. This actually isn't very hard with
those automated tools I mentioned earlier.


> Using stack isolates the developer from
> dependency conflicts with other packages during development, but when a
> user goes to install it, he doesn't have that luxury.
>
>
> He does if he uses stack. Grab a stack binary. It even installs GHC
> for the user.

If the user is highly technical and he wants to make stack
administration his weekend activity. Otherwise, this isn't feasible for
more than one or two packages.

You can easily convince yourself of this: set up ten virtual machines,
and install 20 packages using stack on each of them. Now keep them
up-to-date for a year. If you're very good at bookkeeping and time
management, it might even be possible. But it's not going to be a fun
year. And 200 packages is barely enough to boot a single web server.

Imants Cekusins

Nov 29, 2015, 4:03:21 PM
to Sven Panne, haskell Cafe

> In the exporting library? In the importing library?

Well, in both places. Basically, the same part of the namespace guarantees that the API is the same below.

If the API (or code) changed, then it will lead to a different result.

As a consumer of a library, you'd refer to exactly the version you used while developing. This exact same version would be pulled in alongside other versions of the same library, used in other parts of a large app.

It may lead to code duplication and larger binaries; however, it would essentially give you a sandbox with very little effort.
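Incidentally, GHC already offers a limited form of this via the PackageImports extension: an import can be pinned to a package by name, though not by version. A minimal sketch:

```haskell
{-# LANGUAGE PackageImports #-}

-- "base" names the package this import must come from;
-- the proposal above would extend such pinning to versions.
import "base" Data.List (sort)

main :: IO ()
main = print (sort [3, 1, 2 :: Int])
```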

>> also, why not publish less frequently?

> Could you explain this in more detail, please? I don't seem to understand what you're proposing... :-/

Well, let's say I am working on a library. Tempting as it is to release the first draft, I'd first use this fresh library in a few places, catch some bugs, fix them, and then release.

Because this takes time, this library would see very few releases per year.

It would be less likely to cause multiple-version conflicts.

Imants Cekusins

Nov 29, 2015, 4:15:43 PM
to Sven Panne, haskell Cafe

Here is an allegory:

There is Toyota 1985 and Toyota 2010.

Although they both are Toyotas, they may be very different cars.

Ads show cars by brand, model and year, because brand alone is rarely enough.

So why not spell this out in code? Why dump this task on tools which cannot really tell what the importing code actually requires?

Joachim Durchholz

Nov 29, 2015, 4:19:55 PM
to haskel...@haskell.org
On 29.11.2015 at 21:46, Michael Orlitzky wrote:
> On 11/29/2015 02:39 PM, Omari Norman wrote:
>>
>> So there's a distribution out there where end users pull source from
>> Hackage, pull source for every dependency, and then build it all with
>> GHC? If they're not doing what distributors like Debian does--building
>> binaries--then what's the point of distributing at all?
>
> Sure, all of the source-based distributions use the upstream tarball and
> compile it. The point of creating a "package" is so that you can have a
> real package manager manage your dependencies. Since most of the
> dependency info is contained in the cabal file, the packages are usually
> trivial. Gentoo, Nix, and FreeBSD all have tools that will convert a
> hackage package into a distribution package automatically.
>
>
>> When using a real package manager, every package's dependencies must be
>> satisfied simultaneously.
>>
>>
>> True, but ouch, ultimately this is one factor that pushed me out of
>> desktop Linux altogether. It's too hard to get packages for things I
>> want to use, and then I'm fending for myself by building things.
>> Centrally-planned packaging does not scale.
>
> Given that almost all Linux systems in existence uses centrally-planned
> packaging, I don't believe that last claim.

What does not scale is having to beg, bribe, or strong-arm upstreams
into using a consistent set of library versions. You'll run into
situations where application A wants libraries X.5 and Y.6, and
application B wants X.6 and Y.4, and at that point, you'll have to make
a hard decision between A and B.

One way out is to make it so that multiple versions of the same library
can be installed at the same time. C-based packages do this routinely by
installing not libX and libY, but by installing libX-5, libX-6, libY-4,
and libY-6. This still requires a mechanism to automatically select the
right packaged lib, so the Haskell runtime will have to be told which
libraries at what versions to combine with a given application. This
could be totally easy or a huge PITA, I don't know enough about Haskell
(I just happen to have a lot of administration experience with Linux).

Another way out is to statically link each application, and avoid
library packages entirely. It's viable only because we have
multi-terabyte harddisks and multi-gigabyte RAM these days, and probably
not what everybody wants to do.

One thing that does not work at all in my experience is situations where
you have a software ecosystem that's orthogonal to the OS. Typical
package managers offer no way of having a local install, so my Eclipse
installation typically consists of a download somewhere into my home
directory, and plugins installed into that. Python is similar - I almost
never install a Python application directly, I download it, use a
virtualenv (Python's sandboxing method), and let it pull in and set up
any dependencies it wants or needs.
These local installs are all wheel reinventions, it would be better if
apt, yum, rpm etc. supported local installs (in the form of "please
install this into THAT directory inside my home dir, thank you very
much"), and kept these installs separate. You can do stuff like that,
but it requires expert knowledge so it's not an option for application
installs.

> How many programs can you
> realistically keep installed and up-to-date with stack? Ten, twenty
> maybe -- if this is a serious hobby for you. One or two hundred if it's
> your full-time job?
>
> A typical Linux system will have hundreds of packages, and a system
> administrator will need to manage tens or hundreds of those systems.
> It's just not possible to do with something like stack -- you need one
> package manager that does everything.

True for operating systems. Or anything else that needs to "just work"
without bothering about specific versions.
Not so true for those individual applications. Of these, you often need
a specific version, and since nothing in the OS depends on them, it's
okay to have these installed independently of the package manager (but
these applications can have such complicated dependency setups that
they'll need their own package managers - Eclipse and Python come with
such things for exactly that reason).

Regards,
Jo

Paolo Giarrusso

Nov 29, 2015, 6:11:27 PM
to haskel...@googlegroups.com, Haskell-cafe
On 29 November 2015 at 21:46, Michael Orlitzky <mic...@orlitzky.com> wrote:
> On 11/29/2015 02:39 PM, Omari Norman wrote:
>>
>> So there's a distribution out there where end users pull source from
>> Hackage, pull source for every dependency, and then build it all with
>> GHC? If they're not doing what distributors like Debian does--building
>> binaries--then what's the point of distributing at all?

> Gentoo, Nix, and FreeBSD all have tools that will convert a
> hackage package into a distribution package automatically.

> I'm not saying you need to rely on e.g. Debian upstream to create
> packages for you (I certainly don't), but you do need to have "system"
> packages for everything installed. This actually isn't very hard with
> those automated tools I mentioned earlier.

OK, that might be a (somewhat) reasonable alternative to using
`cabal-install` or `stack` as a package manager — for users of those
distros only. I know the mantra that `cabal` is not a package manager,
and that's usually read as "`cabal` doesn't want to support its users'
needs".
I'm on OS X + Homebrew though.

Also, as long as you just want to upgrade, a working `cabal upgrade`
would be enough (and maybe within reach after the upcoming cabal
changes); `stack upgrade-all` could also do the job. Bigger problems
are removing packages and intermediate build products.

--
Paolo G. Giarrusso - Ph.D. Student, Tübingen University
http://ps.informatik.uni-tuebingen.de/team/giarrusso/

Paolo Giarrusso

Nov 29, 2015, 6:12:19 PM
to haskel...@googlegroups.com, Haskell-cafe
On 29 November 2015 at 20:12, Michael Orlitzky <mic...@orlitzky.com> wrote:
> On 11/29/2015 01:37 PM, Omari Norman wrote:
>>
>> Distribution packagers are savvy enough to use stack.
>
> Ignoring the question of *how* that might work, most distributions
> forbid bundled dependencies because it creates a maintenance nightmare
> and fills our users' machines with untraceable security vulnerabilities.

But doesn't Haskell do static linking (usually) and cross-module
inlining? Or are you fine with static linking as long as it's somehow
tracked by the package manager, so that upgrading some-vuln-lib from
1.0 to 1.1 forces upgrading all client programs (looks quite doable at
least with Debian packages)?

Michael Orlitzky

Nov 29, 2015, 6:25:05 PM
to haskel...@haskell.org
On 11/29/2015 06:11 PM, Paolo Giarrusso wrote:
> On 29 November 2015 at 20:12, Michael Orlitzky <mic...@orlitzky.com> wrote:
>> On 11/29/2015 01:37 PM, Omari Norman wrote:
>>>
>>> Distribution packagers are savvy enough to use stack.
>>
>> Ignoring the question of *how* that might work, most distributions
>> forbid bundled dependencies because it creates a maintenance nightmare
>> and fills our users' machines with untraceable security vulnerabilities.
>
> But doesn't Haskell do static linking (usually) and cross-module
> inlining? Or are you fine with static linking as long as it's somehow
> tracked by the package manager, so that upgrading some-vuln-lib from
> 1.0 to 1.1 forces upgrading all client programs (looks quite doable at
> least with Debian packages)?
>

GHC does dynamic linking now, but I'm OK with static linking as long as
it's tracked. The end result is the same as if you had dynamic linking,
only with a lot more wasted space and rebuilds/reinstalls.

Sven Panne

Nov 30, 2015, 2:20:36 AM
to Imants Cekusins, haskell Cafe
2015-11-29 22:03 GMT+01:00 Imants Cekusins <ima...@gmail.com>:

> In the exporting library? In the importing library?

Well, in both places. Basically, similar part of namespace guarantees that api is the same below.

To make this work on the exporting side, the namespace hierarchy would need to be structured according to API versions. This would mean constant reshuffling of modules in the hierarchy every time you make an API change. Furthermore, speaking from a more aesthetic point of view, names in the hierarchy should have some sensible semantic meaning and should not be littered with more or less random name suffixes.

[...] This exact same version would be pulled in alongside other versions of the same library, used in other parts of a large app.

This won't work when e.g. the libraries have a C part (unless you convince all the people out there to use some kind of consistent hierarchical naming scheme for C, too).
 

It may lead to code duplication and larger binaries however it would essentially give you a sandbox with very little effort.

Personally, I don't consider littering my code with version numbers as "little effort". Stuff like this should be specified outside of the source code IMHO. 

Well, let's say I am working on a library. Tempting as it is to release the first draft, I'd first use this fresh library in a few places, catch some bugs, fix them, and then release.

Because this takes time, this library would see very few releases per year.

It would be less likely to cause multiple  version conflict.

This contradicts the common "release early, release often" scheme of doing things, which has *many* advantages. And the number of releases is not relevant in itself, it's how often you change the API, which is a totally different matter. The longer you wait with releases, the higher the chance is that the next release has an incompatible API change, and your users won't get bug fixes without that API change, which won't make them especially happy.

Cheers,
   S.

Joachim Durchholz

Nov 30, 2015, 3:54:21 AM
to haskel...@haskell.org
On 30.11.2015 at 08:20, Sven Panne wrote:
> Personally, I don't consider littering my code with version numbers as
> "little effort".

Indeed it isn't.
The effort could be reduced with proper tool or language support.

> Stuff like this should be specified outside of the source
> code IMHO.

Well, sort-of.
Let's assume semantic versioning: major version number changes for
incompatible API changes, minor for downwards-compatible API changes,
and tertiary ("milli") version number changes for API-unaffecting changes.

So if your code calls into foo-3.4.1, you shouldn't have to worry
whether it is actually being used with foo-3.4.2, or foo-3.5.0.
However, you need to say foo-3 somewhere, inside your sources, because
your code may break with foo-4.x.x.
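In cabal terms, that "say foo-3 somewhere" is exactly a PVP-style dependency bound (a sketch; foo is a hypothetical package):

```
-- hypothetical .cabal fragment: accept minor/milli updates of foo,
-- but refuse the API-breaking major bump to foo-4
build-depends: foo >= 3.4.1 && < 4
```

The source code itself then stays free of version numbers; only the package description carries them.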

One issue I'm having with semantic versioning is that
supposed-to-be-compatible library updates tend to be actually
incompatible, often in subtle ways, so the library is released with a
milli-version change, but that's a mistake (and lying to the
application that's assuming an unchanged API).
It's a big issue with imperative languages, where implementation details
can leak via state. In Haskell, I suspect that (non-)termination
behaviour is a similar potential implementation leak, though there's
less opportunity for that to actually happen since the library would
need to have lots of complicated, potentially infinite data structures
hidden below opaque types.

Regards,
Jo

Joachim Durchholz

Nov 30, 2015, 3:59:23 AM
to haskel...@haskell.org
On 30.11.2015 at 00:24, Michael Orlitzky wrote:
> GHC does dynamic linking now, but I'm OK with static linking as long as
> it's tracked. The end result is the same as if you had dynamic linking,
> only with a lot more wasted space and rebuilds/reinstalls.

Well, the idea was to use static linking to keep the libraries
themselves outside the view of the package manager. I.e. the libraries
aren't tracked inside the package manager, they become part of your
upstream.

If you insist on tracking the libs inside the package manager, then you
retain all the disadvantages of dynamic linking (inability to work with
mutually incompatible library version requirements) and static linking
(bigger space requirements).

I do wonder about the "a lot more wasted space" bit.
How much space are we really talking about?

Regards,
Jo

Michał Antkiewicz

Nov 30, 2015, 9:44:20 AM
to Paolo Giarrusso, Haskell-cafe
Paolo,

The most important thing for compatibility is using the -compat*
packages. There are a few, including mtl-compat and
transformers-compat. This solves a big class of these
problems, but unfortunately not all, so sometimes CPP is necessary.

Second, the whole purpose of stack is to provide reproducible builds;
however, sometimes you have extra dependencies which are not on
Stackage. To save people the trouble of creating the
correct stack.yaml, I include one as a data file like this, which
forces it to be part of the source distribution tar:

data-files: stack.yaml

Stack provides options for creating it automatically using a solver
but I found it "advanced usage".
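A minimal sketch of such a shipped stack.yaml (the resolver and the extra-dep name are illustrative):

```yaml
# illustrative stack.yaml, shipped as a data-file
resolver: lts-3.16            # pins GHC and all snapshot package versions
packages:
- '.'
extra-deps:                   # dependencies missing from the snapshot
- some-extra-package-0.1.0.0
```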

Re: using cabal, I don't think anybody currently expects any
package to build across versions without a sandbox, in whatever the
current state of the package database might be. Maybe the first one
will, but not the subsequent 10. At least not until the "no reinstall
cabal" is fully implemented. The trend is to reduce the global
packages to the bare minimum to minimize the possibility of conflicts.
Sandboxing is currently necessary and should always be recommended to
users, but it will change (improve) in future GHC/cabal releases.

Hope that helps,
Michał

On Sun, Nov 29, 2015 at 6:54 AM, Paolo Giarrusso <p.gia...@gmail.com> wrote:

Imants Cekusins

Nov 30, 2015, 10:41:42 AM
to Joachim Durchholz, haskell Cafe

Well let's think of books vs tweets.

Books:
take longer to write
are written to last for at least a couple years
are expected to be read by a number of readers, over time
are planned
are structured / organized
cover a few topics in depth
are reviewed
are proofread
some books become out of date
may contain errata

There are different editions. There is usually time span between them. Sometimes authors change title between editions.

There are different books about similar topics.

Tweets are in several ways just the opposite of books.

I like to think of software libraries as books. Good libraries are like hardcover editions.

Michael Orlitzky

Nov 30, 2015, 10:47:22 AM
to haskel...@haskell.org
On 11/30/2015 03:59 AM, Joachim Durchholz wrote:
> Am 30.11.2015 um 00:24 schrieb Michael Orlitzky:
>> GHC does dynamic linking now, but I'm OK with static linking as long as
>> it's tracked. The end result is the same as if you had dynamic linking,
>> only with a lot more wasted space and rebuilds/reinstalls.
>
> Well, the idea was to use static linking to keep the libraries
> themselves outside the view of the package manager. I.e. the libraries
> aren't tracked inside the package manager, they become part of your
> upstream.
>

I get the idea, but it doesn't work in general. To use a cheesy example,
what if OpenSSL was statically linked into everything when heartbleed
was announced? If the static linking goes untracked, how do you fix it?
You need to rebuild everything that was linked against OpenSSL, but how
do you find those packages and rebuild them? Can you explain your answer
to a typical "Windows Update" user?

Less-serious vulnerabilities pop up every day and require the same
attention, so whatever solution you come up with needs to be fast and
automated.


> I do wonder about the "a lot more wasted space" bit.
> How much space are we really talking about?

Compiled programs aren't too bad. If you pull in a ton of libraries, you
might get 50MB overhead for a huge program. But if you go full retard
like NodeJS and bundle recursively, you can find yourself pulling in
500MB of dependencies for helloworld.js.

That's not anyone's main objection though. If we could fix dependencies
by wasting disk space everyone would be on board.