The latter is shorter and could be useful to represent "canonical"
packages. But there's a practical problem here: how is a package
determined to be canonical?
--
A imports C, requires C 1.8.0
B imports C, requires C >= 2.*
main imports A, B
How is the versioning schema supposed to solve this situation?
>> B imports C, requires C >= 2.*
>> main imports A, B
>>
>> How is the versioning schema supposed to solve this situation?
>
> It will prevent you from doing this unless you really want to (something
> like a -f flag). And in fact the Go compiler will compile it: import paths
> are different.
Then the versioning schema is completely broken. If C, for example,
happens to export some global state/entity, for instance like package
"log" does, than building a program with two different(ly named)
versions of C linked into the binary is a pretty clear show stopper.
This is never going to work.
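To make the objection concrete, here is a minimal sketch, assuming a made-up package C installed under two invented versioned import paths, of how the two linked copies end up with independent global state:

// --- $GOPATH/src/example.com/c/1.8.0/c.go ---
package c

var prefix string // package-level state, in the spirit of package "log"

func SetPrefix(p string) { prefix = p }
func Prefix() string     { return prefix }

// --- $GOPATH/src/example.com/c/2.0.0/c.go ---
// Identical source, but to the toolchain it is a different package
// with its own copy of `prefix`.

// --- main.go ---
package main

import (
	"fmt"

	c1 "example.com/c/1.8.0" // pulled in (indirectly) via A
	c2 "example.com/c/2.0.0" // pulled in (indirectly) via B
)

func main() {
	c1.SetPrefix("app: ")          // A configures "the" package...
	fmt.Println(c2.Prefix() == "") // ...but B's copy is untouched: prints true
}

Types exported by the two copies are likewise distinct, so values cannot be passed between code written against 1.8.0 and code written against 2.0.0, even though the source is identical.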
Would you prefer gonads.io?
This is only half-accurate. There is a contract between package
authors and package users. If the upstream package API
changes, there needs to be a period of backward-compatibility in which
downstream users are expected to make the transition. It doesn't
necessarily have to work back to the first commit.
On Monday, December 10, 2012 12:13:45 AM UTC-7, Jan Mercl wrote:
> This is only half-accurate. There is a contract between package
> authors and package users. If the upstream package API changes,
> there needs to be a period of backward-compatibility in which
> downstream users are expected to make the transition. It doesn't
> necessarily have to work back to the first commit.

I agree only to the extent that the package author has a contract with the package user due to a lack of tooling in this area. I do believe that a package author has a contract to keep previously public revisions available in their source tree, except for revisions that sensitive data managed to leak into.

In Go already, go get does not update packages unless you explicitly ask it to, and these are build-time, not run-time, concerns. Additionally, many Go programmers are wary (for security reasons and otherwise) of fetching the tip/head of a repo when building in a new environment, and automated build systems for Go seem to more frequently use a push-to-target deployment model rather than a build-on-target model (which makes dependency management a relative non-issue even for massively parallel deploys).

While I loathe the manifest files and such that you see with gems, eggs, etc., those are designed for a deployment and runtime model nearly opposite to how Go functions. If the go tool were extended with a 'freeze' command that dumped dependency versions (including the installed version of Go and the stdlib) into a .go-deps file within that package's directory, that file could later be used by other tools to conditionally fetch the exact dependency revisions (with some exceptions, like allowing newer stdlib/runtime releases). go get would also benefit from the ability to select a revision/branch via an option switch. If this tooling existed, then package users would not be so troubled when an upstream API is changed in a backwards-incompatible way. A tool that dispatches to `go tool api` could even leverage rcs bisect capabilities to find and report on the latest revision supporting the (subset of the) foreign API used by an app or package.
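For illustration only, a hypothetical .go-deps file under this proposal (no such go subcommand exists; the format, the second import path and the revision placeholders are invented) might look like:

# Go toolchain the app was last frozen against
go                                    go1.0.3

# import path                         vcs   pinned revision
code.google.com/p/go.net/websocket    hg    <revision id>
github.com/someuser/somelib           git   <tag or revision id>

A freeze step would write this next to the app, and a fetch step would check out exactly these revisions into GOPATH before building, instead of whatever tip happens to be.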
With a complete core toolset, maintaining backwards compatibility should not be a mandate, though that's not to say it wouldn't be a good thing to maintain the old API on a separate branch if there are known to be existing users and bugs which need backported fixes.
--
A few thoughts about this:

I think gopkgdoc is great for package discovery. This part it handles well, in a decentralized way (I don't know how many hosting providers are supported, but clearly most big ones); this is going in the right direction, I believe (especially since it supports Go's experimental packages! Now my packages can be found... :).

As for versioning, node's npm has perfected this to an art form. Managing dependencies, like the earlier example of A and B requiring different versions of C, is trivial in node, and versions can be locked down using shrinkwrap; it's really neat. But this is in dynamic javascript-land, not in statically-linked Go-land, where it's a bit harder.
The OP announcement is the first attempt that I'm aware of to tackle this problem, which is great news, but I'm not so sure about the centralized repo part. Couldn't this work using conventions, such as looking for a vM.m.p tag (major, minor and patch, as prescribed by http://semver.org/) in source control? No need to publish to a separate site?
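As a rough sketch of that convention (not an existing tool; the tag list in main is made up), a fetcher could pick the newest vM.m.p tag a repository advertises and check that out instead of tip:

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// semverTag matches tags of the form vMAJOR.MINOR.PATCH, per semver.org.
var semverTag = regexp.MustCompile(`^v(\d+)\.(\d+)\.(\d+)$`)

// newest returns the highest semver tag in tags, comparing numerically.
func newest(tags []string) string {
	best, bestVer := "", [3]int{-1, -1, -1}
	for _, t := range tags {
		m := semverTag.FindStringSubmatch(t)
		if m == nil {
			continue // ignore non-release tags (weekly.*, go1, ...)
		}
		var v [3]int
		for i := range v {
			v[i], _ = strconv.Atoi(m[i+1])
		}
		if v[0] > bestVer[0] ||
			(v[0] == bestVer[0] && v[1] > bestVer[1]) ||
			(v[0] == bestVer[0] && v[1] == bestVer[1] && v[2] > bestVer[2]) {
			best, bestVer = t, v
		}
	}
	return best
}

func main() {
	tags := []string{"weekly.2012-03-27", "v0.9.1", "v1.2.0", "v1.10.3"}
	fmt.Println(newest(tags)) // prints v1.10.3
}

Numeric comparison matters here: lexically, v1.2.0 would sort above v1.10.3.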
Finally, for the "moral contract" between package developers and users, the API compatibility and all: this is why we have versions. The solution is version-based, not based on a fragile and implicit "contract" that puts a hell of a lot of pressure on the package developer ("yes, your production code will always work!"). This helps not only for breaking changes, but also to explicitly announce bug fixes (hey, here's a new patch-bumping version: bug fix).
On Sunday, December 9, 2012 04:23:45 UTC-5, Alexey Palazhchenko wrote:

So – feel free to try Go Nuts, publish your packages, install others and post your comments in this thread or the gonu...@googlegroups.com discussion group. You may even contribute. ;)

Good news everyone!
I'm happy to announce a preview of http://gonuts.io/ – centralized repository for versioned Go packages.
Why do I think the Go ecosystem needs this? There are two problems "go get" can't solve.
First, it doesn't support package versioning – it always installs the latest version. If someone wants to install a previous one, he/she has to use git/hg/svn/bzr manually. Therefore, package authors are forced to maintain backward compatibility since the first commit. If they want to remove some API, they should use a different repository.
Second, in practice many developers are moving their code to other places (for example, from Google Code to GitHub), renaming repositories (web.go became web) or just deleting them (at the time of this writing many links in the Go Projects Dashboard and GoPkgDoc are dead). Yes, it's a social problem, but we still have to deal with it.
So how can we solve those problems? Big companies typically have special repositories for third-party code. Once imported there, code is never deleted. And they have a budget to fix their world of dependencies. So "go get" probably works okay for "google/..." packages. Smaller companies and individual developers are able to bundle third-party packages with their application and take on the pain of updating them only when needed. But what should package developers do if they want to use other packages?..
gonuts.io for Go, similar to PyPI for Python, RubyGems.org for Ruby and NPM for Node.js, should solve those problems. Published packages (called "nuts") are never deleted, and the versioning schema allows installing an exact version. There are plans to allow installing a version matching a pattern (like 2.*.*) while still being in control (similar to RubyGems' Bundler). And the nut client tool was designed to work along with Go conventions: nuts are installed into the workspace specified by the GOPATH environment variable, and imported as "gonuts.io/test_nut1/0.0.1".
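For example, a minimal consumer of the test nut mentioned above might look like this (the import path is the one from the announcement; a blank import is used only because the nut's API isn't shown here):

package main

// Installed into GOPATH by the nut tool, then imported by its
// versioned path (upgrading would presumably mean changing this path).
import _ "gonuts.io/test_nut1/0.0.1"

func main() {}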
There are a few more things I need to do before the official launch. First of all, I want to provide a clear path for the transition of well-known packages to gonuts.io without names being squatted. So for now gonuts.io works in preview mode, and all nuts will be removed before going into real production.
Thanks.
--
Alexey "AlekSi" Palazhchenko
--
To be clear, it's not "hard" to link Av1 -> {Bv1, Cv2}, Bv1 -> {Cv1} in Go. It's not possible. If you use different directories, then it's no longer the same package, so it "works," but you will have two instances of the package in memory whose values cannot be used interchangeably, not to mention the potential of having separate global state if the package employs it.
I don't think we should enforce versioning semantics on the entire community. Sure, it's probably a good idea to tag versions in your repository, but (especially because of the linking problem above) I don't think it's as important a requirement as stabilizing your exported API.
On Wednesday, December 12, 2012 11:07:58 AM UTC-7, Kyle Lemons wrote:
> To be clear, it's not "hard" to link Av1 -> {Bv1, Cv2}, Bv1 -> {Cv1} in Go. It's not possible.

I'm not convinced this is a major problem with Go.
If it's that hard to update B to use Cv2, then either B or C was probably written quite poorly.
The real issue comes with whether Cv2 has a feature-set disjoint from Cv1's (and where features in the symmetric difference are needed by A and B respectively). In this case, I'd say the author of C is either taking a long time on v2 (and didn't have the good sense to delay releasing it until it was at least as capable as v1), or the author of A didn't have the good sense to notice this kind of issue.

In any case, I don't think this is a case that needs tooling.
I was suggesting a dependency list, not for each library, but for the app as a whole (if you can't build it with `go build`, then deps don't matter, and if the problem you describe above happens to occur, you can't `go build` anyway).
--
> If it's that hard to update B to use Cv2, then either B or C was probably written quite poorly.

That is quite true. In fact, in the cases in which a package author does need to change their API, it would be really swell of them to provide a gofix module for it, after a suitable period of supporting the backward-compatible interface as well.
That's one of the reasons I stopped working on rx, actually. It was conceptualized as a way to identify and track inter-repository dependencies in such a way that it could then pull the updates from a repository and check that nothing depending on it broke, and if it did, play games with (tagged) versions in between to see if one of them still works. I'll also mention that I planned on having it run tests, as API changes aren't the only thing that could change, and a package author might not even realize that he made a semantic change or that he depended on an undocumented feature/bug in another package.
On Wednesday, December 12, 2012 9:37:01 PM UTC-7, Kyle Lemons wrote:
>> If it's that hard to update B to use Cv2, then either B or C was probably written quite poorly.
>
> That is quite true. In fact, in the cases in which a package author does need to change their API, it would be really swell of them to provide a gofix module for it, after a suitable period of supporting the backward-compatible interface as well.

Intriguing. A quick search, however, seems to indicate that there's no "userland" support in gofix yet. I suppose limited changes could be bundled in a file of gofmt invocations that could probably be made to be sh/bat/rc compat all at once.
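For example (the package foo and its functions are hypothetical; gofmt's -r and -w flags are real), if foo renamed OldFunc to NewFunc and added an argument, the bundled migration for callers could be a single rewrite invocation run at the root of their tree:

gofmt -r 'foo.OldFunc(x) -> foo.NewFunc(x, 0)' -w .

In -r rules, single-character lowercase identifiers such as x act as wildcards, so every call site is rewritten; a short script chaining a few of these lines is roughly the "file of gofmt invocations" idea above.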
> That's one of the reasons I stopped working on rx, actually. It was conceptualized as a way to identify and track inter-repository dependencies in such a way that it could then pull the updates from a repository and check that nothing depending on it broke, and if it did, play games with (tagged) versions in between to see if one of them still works. I'll also mention that I planned on having it run tests, as API changes aren't the only thing that could change, and a package author might not even realize that he made a semantic change or that he depended on an undocumented feature/bug in another package.

You've made me realize that I've now danced on both sides of the argument. Certainly with this awareness, I'm now leaning towards "leverage Go's hackability" as an inherent "tool". In something like Java, where you see a lot of copypasta, both library authors and library users may be unwilling to consider or adapt to incompatible changes without automated or batched tooling. In Go, those habits may linger, yet I regularly find it's faster to figure out, fix, scrap, and then rewrite large portions of someone else's library than it is to wait for a response to a bug report, for better or worse. Sure, it promotes fragmentation, but only if the fragmenters don't post pull requests or the authors don't consider them. Projects with merit but without stewardship are asking to get fragmented anyway.
--
I am curious. In this thread there is a common reference to the idea of a singular central repository for packages being highly desirable.
Personally I find the idea unattractive, a SPoF, overly authoritarian and potentially a political football.
Could someone explain what benefits they see in a central singular namespace for packages?
Dave
--
Mate, I'm not going to quote urban dictionary to you, but you have to
find another name for a versioned package. Nut is not acceptable.