When publishing a Cabal package, you should ensure that the dependencies in your build-depends field are accurate. This means specifying not only lower bounds but also upper bounds on every dependency.
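For concreteness, here is what such a field might look like (the package names and ranges are illustrative only):

    build-depends:
      base       >= 4.5 && < 4.7,
      bytestring >= 0.9 && < 0.11,
      containers >= 0.4 && < 0.6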
On Wed, Aug 15, 2012 at 12:38 PM, Bryan O'Sullivan <b...@serpentine.com> wrote:
> I propose that the sense of the recommendation around upper bounds in the
> PVP be reversed: upper bounds should be specified only when there is a known
> problem with a new version of a depended-upon package.

This argument precisely captures my feelings on this subject. I will be removing upper bounds next time I make releases of my packages.
So we are certain that the rounds of failures that led to their being *added* will never happen again?
Would it make sense to have a known-to-be-stable-through soft upper bound added proactively, and a known-to-break-above hard bound added reactively, so people can loosen gracefully as appropriate?
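Cabal's syntax does not distinguish the two kinds of bound today, so as a sketch of the distinction, with package names and versions invented for illustration:

    build-depends:
      -- soft bound, added proactively: newest version known to work
      bytestring   >= 0.9 && < 0.11,
      -- hard bound, added reactively: 0.3 is known to break this package
      transformers >= 0.2 && < 0.3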
On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery <allb...@gmail.com> wrote:
> So we are certain that the rounds of failures that led to their being
> *added* will never happen again?

It would be useful to have some examples of these. I'm not sure we had [...]. No one is disputing that there are conditional changes in dependencies depending on library versions.

One sensible step would be to add a flag to Cabal/cabal-install that would cause it to ignore upper bounds. (Frankly, I think it would also be great if [...])
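For a concrete sketch of how such a flag looks on the command line (cabal-install's --allow-newer flag behaves this way; the package names are illustrative):

    # Ignore all upper bounds while solving dependencies:
    cabal install --allow-newer some-package

    # Or relax the bounds on specific dependencies only:
    cabal install --allow-newer=bytestring,transformers some-package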
On 16 August 2012 03:38, Bryan O'Sullivan <b...@serpentine.com> wrote:
> Hi, folks -
>
> I'm sure we are all familiar with the phrase "cabal dependency hell" at this
> point, as the number of projects on Hackage that are intended to hack around
> the problem slowly grows.
>
> I am currently undergoing a fresh visit to that unhappy realm, as I try to
> rebuild some of my packages to see if they work with the GHC 7.6 release
> candidate.
>
> A substantial number of the difficulties I am encountering are related to
> packages specifying upper bounds on their dependencies. This is a recurrent
> problem, and its source lies in the recommendations of the PVP itself
> (problematic phrase highlighted in bold):

Likewise ...

I think part of the problem might be that some packages (like bytestring, transformers?) have had their major version number incremented despite being backwards-compatible. Perhaps there are incompatible changes, but most of the cabal churn I've seen recently has involved incrementing the bytestring upper bound to <0.11 without requiring any code changes to modules using Data.ByteString.
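To make that churn concrete, a typical "fix" of this kind touches nothing but one range in the .cabal file (the surrounding line is invented for illustration):

    -  build-depends: base >= 4 && < 5, bytestring >= 0.9 && < 0.10
    +  build-depends: base >= 4 && < 5, bytestring >= 0.9 && < 0.11

No Haskell source changes accompany it; modules importing Data.ByteString compile exactly as before.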
So that we are using concrete examples, here is an example of a change that really shouldn't break any package:
https://github.com/timthelion/threadmanager/commit/c23e19cbe78cc6964f23fdb90b7029c5ae54dd35
The exposed functions are the same. The behavior is changed. But as the committer of the change, I cannot imagine that it would break any currently working code.
There is another issue, though. With this kind of change, there is no reason for a package that was written for the old version of the library to be built with the new version. If I am correct that this change changes nothing for currently working code, then why should an old package be built with the newer version?
The advantage in that case is merely that we prevent version duplication: we don't want to waste disk space by installing every possible iteration of a library.
I personally think that disk space is so cheap that this last consideration is not very important. If there are packages that only build with old versions of GHC and old libraries, why can we not just seamlessly install them? One problem is if we want to use those old libraries with new code. Take the example of Python 2 vs Python 3. Yes, we can seamlessly install Python 2 libraries even though we use Python 3 normally, but we cannot MIX Python 2 libraries with Python 3 libraries.
Maybe we could make Haskell linkable objects smart enough that we COULD mix old with new? That sounds complicated.
I think Michael Sloan is onto something, though, with his idea of compatibility layers. If we could write simple "dictionary" packages that translate old API calls to new ones, we could use old code without modification. This would allow us to build old libraries that normally wouldn't be compatible with something in base, using a base-old-to-new dictionary package. Then we could use those old libraries, without modification, alongside new code.
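A minimal, self-contained sketch of such a dictionary package, assuming a hypothetical old library that exported spawn and a new release that dropped it in favour of the standard forkIO (every name here is invented for illustration):

    module ManagerCompat (spawn) where

    import Control.Concurrent (ThreadId, forkIO)

    -- The "dictionary": re-express the old entry point in terms of the
    -- new API, so code written against the old interface compiles as-is.
    spawn :: IO () -> IO ThreadId
    spawn = forkIO

Old code imports ManagerCompat instead of the original module and builds unchanged against the new library.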
It's important that this be possible from the side of the person USING the library, not only the library author. It's impossible to write software if you spend all of your time waiting for someone else to update their libraries.
Timothy
---------- Original message ----------
From: Ivan Lazar Miljenovic <ivan.mi...@gmail.com>
Date: 16. 8. 2012
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends
A benign change will obviously have no visible effect, while a compilation failure is actually better than a depsolver failure, because it's more informative.
I think what we'd need is a more relaxed policy for modifying a package's metadata on Hackage. What if Hackage allowed uploading a new package with the same version number, as long as it is identical up to an extended version range? Then the first person who stumbles over an upper bound that turned out to be too tight could just fix it and upload the corrected package directly, without waiting for the author to react.
What if instead of upper (and lower) bounds we just specify our interface requirements?
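As a sketch of what that could mean, in entirely invented syntax (nothing like this exists in Cabal today):

    build-depends:
      -- instead of "bytestring >= 0.9 && < 0.11", declare the interface used:
      bytestring requires (Data.ByteString (ByteString, pack, unpack))

Any version exporting those names with compatible types would then satisfy the dependency, so the solver rather than the author decides which versions qualify.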
Any vastly more complicated and detailed versioning scheme bears a heavy burden of proof that it won't collapse dramatically more quickly. (Frankly, I think that anything involving "specify every detail of your known dependencies" is dead on arrival from a practical standpoint: it's far too much work.)