The latter is shorter and could be useful to represent "canonical"
packages. But there's a practical problem here: how is a package
determined to be canonical?
--
A imports C, requires C 1.8.0
B imports C, requires C >= 2.*
main imports A, B
How is the versioning schema supposed to solve this situation?
>> B imports C, requires C >= 2.*
>> main imports A, B
>>
>> How is the versioning schema supposed to solve this situation?
>
> It will prevent you from doing this unless you really want to (something
> like a -f flag). And in fact the Go compiler will compile it: the import
> paths are different.
Then the versioning schema is completely broken. If C, for example,
happens to export some global state/entity, as package "log" does,
then building a program with two different(ly named) versions of C
linked into the binary is a pretty clear showstopper.
This is never going to work.
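To make the objection concrete, here is a minimal sketch of what double-linking does to package-level state. The two maps stand in for two copies of the same registry package; the names and import paths in the comment are hypothetical, chosen only for illustration:

```go
package main

import "fmt"

// A build that links two copies of the same package under different import
// paths gives each copy its own package-level variables. The two maps below
// stand in for those two copies' state (hypothetical names; in a real build
// they would be identical source compiled under, say, "gonuts.io/c/1.8.0"
// and "gonuts.io/c/2.0.0").
var (
	registryV1 = map[string]func(){} // the copy package A imports
	registryV2 = map[string]func(){} // the copy package B imports
)

func main() {
	// "A" registers a handler through its copy of C...
	registryV1["/health"] = func() { fmt.Println("ok") }

	// ...but "B", looking through its own copy, never sees it.
	if _, found := registryV2["/health"]; !found {
		fmt.Println("handler invisible: the two copies share no state")
	}
}
```

The same source compiled twice behaves like two unrelated packages, which is exactly the "log"-style failure mode described above.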
Would you prefer gonads.io?
This is only half-accurate. There is a contract between both the
package authors and the package users. If the upstream package API
changes, there needs to be a period of backward-compatibility in which
downstream users are expected to make the transition. It doesn't
necessarily have to work back to the first commit.
On Monday, December 10, 2012 12:13:45 AM UTC-7, Jan Mercl wrote:

> This is only half-accurate. There is a contract between both the
> package authors and the package users. If the upstream package API
> changes, there needs to be a period of backward-compatibility in which
> downstream users are expected to make the transition. It doesn't
> necessarily have to work back to the first commit.

I agree only to the extent that the package author has a contract with the package user due to a lack of tooling in this area. I do believe that a package author has a contract to keep previously public revisions available in their source tree, except for revisions that sensitive data managed to leak into.

In Go already, go get does not update packages unless you explicitly ask it to, and these are build-time, not run-time, concerns. Additionally, many Go programmers are wary (for security reasons and otherwise) of fetching the tip/head of a repo when building in a new environment, and automated build systems for Go seem more frequently to use a push-to-target deployment model rather than a build-on-target model (which makes dependency management a relative non-issue even for massively parallel deploys).

While I loathe the manifest files you see in gems, eggs, etc., those are designed for a deployment and runtime model nearly opposite to how Go functions; if the go tool were extended with a 'freeze' command that dumped dependency versions (including the installed version of Go and the stdlib) into a .go-deps file within that package's directory, that file could later be used by other tools to conditionally fetch the exact dependency revisions (with some exceptions, like allowing newer stdlib/runtime releases). go get would also benefit from the ability to select a revision/branch via an option switch. If this tooling existed, then package users would not be so troubled when an upstream API is changed in a backwards-incompatible way. A tool that dispatches to `go tool api` could even leverage rcs bisect capabilities to find and report on the latest revision supporting the (subset of the) foreign API used by an app or package.
With a complete core toolset, maintaining backwards compatibility should not be a mandate, though that's not to say it wouldn't be a good thing to maintain the old API on a separate branch if there are known to be existing users and bugs which need backported fixes.
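The hypothetical 'freeze' output could be sketched roughly like this. Note that the `.go-deps` name, the "import-path revision" line format, and the `freezeDeps` helper are all this message's proposal plus my own assumptions; nothing like this exists in the go tool, and the revisions below are invented:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// freezeDeps renders the proposed .go-deps contents from a map of
// import path -> VCS revision. Sorting the paths keeps the output
// deterministic, so frozen files diff cleanly between runs.
func freezeDeps(deps map[string]string) string {
	paths := make([]string, 0, len(deps))
	for p := range deps {
		paths = append(paths, p)
	}
	sort.Strings(paths)
	var out strings.Builder
	for _, p := range paths {
		fmt.Fprintf(&out, "%s %s\n", p, deps[p])
	}
	return out.String()
}

func main() {
	// Revisions here are made up for the example.
	fmt.Print(freezeDeps(map[string]string{
		"launchpad.net/mgo":     "rev142",
		"github.com/hoisie/web": "a1b2c3d",
	}))
}
```

A later `go get`-like step could read such a file and check out each dependency at the recorded revision instead of tip.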
--
A few thoughts about this:

I think gopkgdoc is great for package discovery. This part it handles well, in a decentralized way (I don't know how many hosting providers are supported, but clearly most big ones). This is going in the right direction, I believe (especially since it supports Go's experimental packages! Now my packages can be found... :).

As for versioning, node's npm has perfected this to an art form. Managing dependencies, like the A that uses B and C with a different B, is trivial in node, and versions can be locked down using shrinkwrap; it's really neat. But this is in dynamic javascript-land, not in statically-linked Go-land, where it's a bit harder.
The OP announcement is the first attempt that I'm aware of to tackle this problem,
this is great news, but I'm not so sure about the centralized repo thing though. Couldn't this work using conventions, such as looking for a vM.m.p tag (Major, minor and patch, as prescribed by http://semver.org/) in source control? No need to publish to a separate site?
Finally, for the "moral contract" between package developers and users, the API compatibility and all: this is why we have versions. The solution is version-based, not a fragile and implicit "contract"-based one that puts a hell of a lot of pressure on the package developer ("yes, your production code will always work!"). This helps not only for breaking changes, but for explicitly announcing bug fixes too (hey, here's a new patch-bumping version: bug fix).
On Sunday, December 9, 2012 4:23:45 AM UTC-5, Alexey Palazhchenko wrote:

So – feel free to try Go Nuts, publish your packages, install others, and post your comments into this thread or the gonu...@googlegroups.com discussion group. You may even contribute. ;)

Good news everyone!
I'm happy to announce a preview of http://gonuts.io/ – centralized repository for versioned Go packages.
Why do I think the Go ecosystem needs this? There are two problems "go get" can't solve.
First, it doesn't support package versioning – it always installs the latest version. If someone wants to install a previous one, he/she has to use git/hg/svn/bzr manually. Therefore, package authors are forced to maintain backward compatibility since the first commit. If they want to remove some API, they have to use a different repository.
Second, in practice many developers move their code to other places (for example, from Google Code to GitHub), rename repositories (web.go became web) or just delete them (at the time of this writing many links in the Go Projects Dashboard and GoPkgDoc are dead). Yes, it's a social problem, but we still should deal with it.
So how can we solve those problems? Big companies typically have special repositories for third-party code. Once imported there, code is never deleted. And they have a budget to fix their world of dependencies. So, "go get" probably works okay for "google/..." packages. Smaller companies and individual developers are able to bundle third-party packages with their applications and take the pain of updating them only when needed. But what should package developers do if they want to use other packages?..
gonuts.io for Go, similar to PyPI for Python, RubyGems.org for Ruby, and npm for Node.js, should solve those problems. Published packages (called "nuts") are never deleted, and the versioning schema allows installing an exact version. There are plans to allow installing a version matching a pattern (like 2.*.*) while still being in control (similar to RubyGems' Bundler). And the nut client tool was designed to work along with Go conventions: nuts are installed into the workspace specified by the GOPATH environment variable and imported as "gonuts.io/test_nut1/0.0.1".
There are a few more things I need to do before the official launch. First of all, I want to provide a clear path for the transition of well-known packages to gonuts.io without names being squatted. So for now gonuts.io works in preview mode, and all nuts will be removed before going into real production.
Thanks.
--
Alexey "AlekSi" Palazhchenko
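The wildcard matching the announcement describes (a version pattern like 2.*.*) might look something like the sketch below. This is my own illustrative guess at the semantics, not the actual nut implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether a concrete version like "2.1.3" satisfies a
// pattern like "2.*.*": each dot-separated component must either be "*"
// or equal the corresponding component of the version.
func matches(pattern, version string) bool {
	ps := strings.Split(pattern, ".")
	vs := strings.Split(version, ".")
	if len(ps) != len(vs) {
		return false
	}
	for i, p := range ps {
		if p != "*" && p != vs[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matches("2.*.*", "2.1.3")) // same major: accepted
	fmt.Println(matches("2.*.*", "3.0.0")) // different major: rejected
}
```

A client could resolve such a pattern by filtering the published versions of a nut and picking the highest match.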
--
To be clear, it's not "hard" to link Av1 -> {Bv1, Cv2}, Bv1 -> {Cv1} in Go. It's not possible. If you use different directories, then it's no longer the same package, so it "works," but you will have two instances of the package in memory whose values cannot be used interchangeably, not to mention the potential of having separate global state if the package employs it.
I don't think we should enforce versioning semantics on the entire community. Sure, it's probably a good idea to tag versions in your repository, but (especially because of the linking problem above) I don't think it's as important a requirement as stabilizing your exported API.
On Wednesday, December 12, 2012 11:07:58 AM UTC-7, Kyle Lemons wrote:
> To be clear, it's not "hard" to link Av1 -> {Bv1, Cv2}, Bv1 -> {Cv1} in Go. It's not possible.

I'm not convinced this is a major problem with Go.
If it's that hard to update B to use Cv2, then either B or C was probably written quite poorly.
The real issue comes when Cv2 has a feature-set disjoint from Cv1's (and where features in the symmetric difference are needed by A and B respectively). In this case, I'd say either the author of C is taking a long time on v2 (and didn't have the good sense to delay releasing it until it was at least as capable as v1), or the author of A didn't have the good sense to notice this kind of issue.

In any case, I don't think this is a case that needs tooling.
I was suggesting a dependency list, not for each library, but for the app as a whole (if you can't build it with `go build`, then deps don't matter, and if the problem you describe above happens to occur, you can't `go build` anyway).
--
> If it's that hard to update B to use Cv2, then either B or C was probably written quite poorly.

That is quite true. In fact, in the cases in which a package author does need to change their API, it would be really swell of them to provide a gofix module for it, after a suitable period of supporting the backward-compatible interface as well.
That's one of the reasons I stopped working on rx, actually. It was conceptualized as a way to identify and track inter-repository dependencies in such a way that it could then pull the updates from a repository and check that nothing depending on it broke, and if it did, play games with (tagged) versions in between to see if one of them still works. I'll also mention that I planned on having it run tests, as API changes aren't the only thing that could change, and a package author might not even realize that he made a semantic change or that he depended on an undocumented feature/bug in another package.
On Wednesday, December 12, 2012 9:37:01 PM UTC-7, Kyle Lemons wrote:
> If it's that hard to update B to use Cv2, then either B or C was probably written quite poorly.
>
> That is quite true. In fact, in the cases in which a package author does need to change their API, it would be really swell of them to provide a gofix module for it, after a suitable period of supporting the backward-compatible interface as well.

Intriguing. A quick search, however, seems to indicate that there's no "userland" support in gofix yet. I suppose limited changes could be bundled in a file of gofmt invocations that could probably be made to be sh/bat/rc compatible all at once.
> That's one of the reasons I stopped working on rx, actually. It was conceptualized as a way to identify and track inter-repository dependencies in such a way that it could then pull the updates from a repository and check that nothing depending on it broke, and if it did, play games with (tagged) versions in between to see if one of them still works. I'll also mention that I planned on having it run tests, as API changes aren't the only thing that could change, and a package author might not even realize that he made a semantic change or that he depended on an undocumented feature/bug in another package.

You've made me realize that I've now danced on both sides of the argument. Certainly with this awareness, I'm now leaning towards leveraging Go's hackability as an inherent "tool". In something like Java, where you see a lot of copypasta, both library authors and library users may be unwilling to consider or adapt to incompatible changes without automated or batched tooling. In Go, those habits may linger, yet I regularly find it's faster to figure out, fix, scrap, and then rewrite large portions of someone else's library than it is to wait for a response to a bug report, for better or worse. Sure, it promotes fragmentation, but only if the fragmenters don't post pull requests or the authors don't consider them. Projects with merit but without stewardship are asking to get fragmented anyway.
--
I am curious. In this thread there is a common reference to the idea of a singular central repository for packages being highly desirable.
Personally I find the idea unattractive, a SPoF, overly authoritarian and potentially a political football.
Could someone explain what benefits they see in a central singular namespace for packages?
Dave
--
Mate, I'm not going to quote urban dictionary to you, but you have to
find another name for a versioned package. Nut is not acceptable.
> I am curious. In this thread there is a common reference to the idea of a singular central repository for packages being highly desirable.
>
> Personally I find the idea unattractive, a SPoF, overly authoritarian and potentially a political football.
>
> Could someone explain what benefits they see in a central singular namespace for packages?
--
On Thursday, December 13, 2012 12:20:33 PM UTC-6, Dave Cheney wrote:

> Could someone explain what benefits they see in a central singular namespace for packages?
This is a very good point. Most of the discussion here is being done in abstract terms, which is fine to some extent. But what would be even better would be to talk in terms of requirements, a.k.a. what developer problems are being solved.

To give an example of problems being solved: I'm currently learning Python, coming from a C# background. I'm discovering that PyPI and the 'pip' tool solve some real workflow problems for me. Here's how pip and PyPI work together (apologies to everyone for whom this is well-understood, including possibly the parent poster).

I create a file "requirements.txt" containing (say):

    django
    south
    simplejson
    django-facebook
    urllib3

I run

    pip install -r requirements.txt

to have the latest version of everything I need installed from PyPI, including the listed projects' dependencies (also from PyPI), and start hacking. I can re-run the install at any point to either refresh my packages or get new ones I've added to requirements.txt.

When it's time to nail down the specific set of libraries I'm using, e.g. to build a machine in a production environment, I run

    pip freeze

and it outputs the precise version of everything I'm currently using, in a form suitable for use as a requirements.txt file:

    Django==1.4.3
    South==0.7.6
    argparse==1.2.1
    distribute==0.6.24
    django-facebook==4.2.5
    simplejson==2.6.2
    urllib3==1.5
    wsgiref==0.1.2

Deploying to production is as simple as loading up my own code and running 'pip install -r' on the above list.

So, here's a list of problems being solved, a.k.a. possible requirements:

- supply well-known unambiguous names to aid discovery, discussion, and blogging
- zero-configuration download
- zero-configuration install into my project
- automatic dependency detection (so I can specify only what matters to me, and not risk getting it wrong for the package I'm bringing in)
- support imprecise versioning (for hacking)
- support precise versioning (for production deployment and/or QA)
- related: per-project download and install (isolate my different projects from local dependencies, where possible; this gets into python's virtualenv concept)

The problem of packages relying on different versions of the same dependency doesn't seem to be an issue; it's possible that they've solved the social aspects here, or that they don't emerge as frequently as we might think. I'm new to Python so maybe I just haven't seen it yet.

I'm so new to Go that I'm nearly useless for analyzing these requirements for their intrinsic value or their difficulty in implementation, but nonetheless I'd be very interested to hear the experts discuss them.

It's entirely possible that the Go ecosystem just isn't ready for this, and that we need to wait for a few "obvious gold standard" projects to emerge before trying to centralize their distribution; for all I know that's how PyPI got its start. It's also possible that the presence of a "benevolent dictator" in the Python world tends to make decisions like "which project gets to call itself the 'email' project" easier when it becomes an issue.

-- Dave
So, I've got this crazy idea, and I'd like to know how crazy it really is.

The current path appears to be to place a json file in each repo to describe the project, have some metadata, and enumerate the package's dependencies. Very similar to most other languages out there.

Now, one thing that I really enjoy about Go is the "code is configuration" mantra. Similarly, I feel pretty strongly that documentation is part of your "code". Smush those two ideas together, and you get: README-defined project metadata.

With a simple set of conventions, a project readme can contain all the metadata any self-respecting package manager ever needs. Plus, it becomes very visible documentation, helps project authors not to repeat themselves, etc.

A format something along these lines: https://gist.github.com/nevir/5182712
-Ian
--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
Surely the point of versions is to make it so you can't do that? Not to make it so you can.

If they are versioned and the version is easily visible, it can hopefully become obvious why you cannot use a package that depends on a different version of a package that you already depend on.
The issues seen within the juju development routine were consciously
introduced because some of the developers did not want to use the
convention of putting a version number in the package URL; they felt
it was too much trouble while they were developing an external
dependency at a fast pace. I disagree: the convention works well in my
experience with mgo and others, and it can be used in those cases too.
Aha, here we go :)

There are a couple of options for dealing with the issues you raise (thankfully, we've got a lot of examples of other languages/package managers dealing with them). I'd like to push for a very opinionated approach, though:
- Assert that all projects strictly adhere to http://semver.org/
- When a project says it depends on version X.Y.Z, it is indicating that it depends on code that was introduced in that version. This becomes an implicit ">= X.Y.Z && < (X+1).0.0" version requirement.
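The implicit ">= X.Y.Z && < (X+1).0.0" rule above can be sketched as a small constraint check. The `version` type and `satisfies` helper are illustrative names, not part of any existing tool:

```go
package main

import "fmt"

// version is a semver-style major.minor.patch triple.
type version struct{ major, minor, patch int }

// satisfies reports whether candidate v meets the implicit constraint
// ">= dep && < (dep.major+1).0.0": same major version, and at least as
// new as the declared dependency.
func satisfies(v, dep version) bool {
	if v.major != dep.major {
		return false // a major bump signals a breaking change
	}
	if v.minor != dep.minor {
		return v.minor > dep.minor
	}
	return v.patch >= dep.patch
}

func main() {
	dep := version{1, 2, 3} // "depends on 1.2.3"
	fmt.Println(satisfies(version{1, 4, 0}, dep)) // newer minor: ok
	fmt.Println(satisfies(version{2, 0, 0}, dep)) // breaking major bump: no
	fmt.Println(satisfies(version{1, 2, 1}, dep)) // older than required: no
}
```

Under semver this gives package users new features and fixes for free while shutting out breaking releases.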
Before I try to address your points, let me enumerate what I currently understand about Go's package management; I feel like I must be missing something:

1. A Go package is identified by its repository URI, with code being read from either `master` or a branch specific to the current go version.
2. Any time a package maintainer makes a breaking change to their API, their only option is to fork the project (to have a new URI).
3. Packages are not locked into a given set of dependencies: imports are resolved at `go get` time, from whatever code was just checked out.
4. `go get` will no-op if a package already exists at the target `src` location.
5. `import` statements are effectively a statement indicating that your code works when run against a dependency _at the time the code was written_. They do not indicate retroactive support for dependencies.
5a. I.e., if package A starts depending on package B today, but you previously checked out package B a week ago, there is no guarantee that package A will work with package B. `go get` will not update package B for you in this case, either; you must update it manually.
5b. The "time the code was written" is actually not strictly true. A package is really only supported on whatever revision of a dependency the author had checked out in their `$GOPATH` at the time.
6. If you encounter a breaking issue with a dependency, and you need a way to revert that package back to a known working version, your options are either to fork that dependency, or to tar it up and deploy it manually. (This is to address issues before the maintainer has time to fix them.)
Ok, on to your comment, assuming the above points are accurate:

> This is precisely where it falls apart. Unless you assert that you must build
> from a precise version that unambiguously identifies the state of code in that
> repository, different people can get different views of your dependencies,
> some of which may work and some of which may not.

I'm not sure I understand this (from the way I'm reading it, I don't see any difference from the current state of package management):

* Different people already get different views of a package's dependencies (points 3, 4, 5, 5a, 5b above) based on when they previously checked out a package and/or its dependencies.
* There's no guarantee that the dependencies you currently have in your $GOPATH will work with a newly `go gotten` package (points 5a, 5b above).
> You can "require" that APIs and dependencies not change unless the major
> version number changes, but the solution will eventually degenerate to
> requiring that you explicitly demarcate the allowable revision or range of
> revisions for every dependency.

From what I understand, Go already does demarcate allowable revisions in an implicit manner. A package supports a dependency that...

* ...was checked out in the author's `$GOPATH` at the time that they started depending on it (points 5, 5a, 5b above)...
* ...until that package URI is no longer maintained.

This is _almost exactly the same_ as saying you depend on version `>= X.Y.Z && < (X+1).0.0` if packages were versioned. "Y.Z" could just as easily be a timestamp or revision (assuming linear history), and "X" could be mapped to the repository URI, if you want to think about it like that.
One benefit of having version constraints is that they are expressed as a guarantee, one that would obviate the issues introduced by 4, 5, 5a, 5b. The tools would know when to bump a version of a stale package that you have in your workspace.

> Building from head, while annoying, is at the very least much easier to
> understand.

Hard-to-diagnose logic issues (introduced by subtle behavior changes across versions of a dependency) are, IMO, worth having slightly more oppressive tooling to avoid; in addition to all the other benefits that package management provides (not enumerating them here; this post is already far too long).

Finally, package versioning is a pretty common and well understood practice. I don't buy that package management as a concept is very confusing.
Awesome, thanks for the thorough response. Couple more questions:

> Well, no, you could revert the repository back to that version or update your code to the new semantics.

Are there any tools to help deal with this? go get doesn't understand scm revisions, does it? (As part of the CLI, at least.)

How are people dealing with freezing versions and/or dealing with broken dependencies right now? I can't imagine that waiting for the dependency author before trying to push again is reasonable?
The one thing that I hate the most is having to debug application code (due to some dependency) when rolling out new hosts or doing a hotfix against a known version of my code.
Rory McGuire | ClearFormat - Research and Development | UK: 44 870 224 0424 | USA: 1 877 842 6286 | RSA: 27 21 466 9400 | Email: rmcg...@clearformat.com | Website: www.clearformat.com
--
You're describing something similar to what I started, rx: kylelemons.net/go/rx

The problem is what was described above. If A depends on B v1 and C v2, and B depends on C v1, then no amount of tooling can really help you if you need a feature that's in C v2.
What if package C contains a global registry (for example, net/http's
handler registry or crypto's hash registry)?