In Ivy/Maven, when something fails I can figure out relatively easily what went wrong (e.g. the resolver is wrong, …) because it is just a URL. Being able to see exactly where things come from can be a good way to debug. Of course, it would be better if users never ended up in this kind of situation :)
>
> For 3), you could still use Maven Central directly. I agree that being able to specify things as a URL somewhere is one approach to compatibility with non-adept artifact repositories. The proposal was to have a separate type of artifact repository that maps a hash to a URL instead of a hash to content. (That repository might be a single file or many files each containing a URL or list of URLs.) This contains the problem to a specific repository instead of spreading it out through the metadata.
Aha, OK, I see what you mean. I agree that sounds like a good idea!
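To make the distinction concrete, here is a minimal Scala sketch of the two repository flavours as I understand them (the names are hypothetical, not adept's actual API):

    import java.net.URL

    // Both repository types answer the same question: given a hash,
    // where is the artifact? One stores the content itself, the other
    // stores only URLs, containing the URL problem to that one type.
    sealed trait Located
    case class Content(bytes: Array[Byte]) extends Located
    case class Locations(urls: Seq[URL]) extends Located

    trait ArtifactRepository {
      def lookup(sha1: String): Option[Located]
    }

    // hash -> URL repository: a plain index from hash to one or more URLs
    class UrlRepository(index: Map[String, Seq[URL]]) extends ArtifactRepository {
      def lookup(sha1: String): Option[Located] = index.get(sha1).map(Locations(_))
    }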
>
> For me, URLs are mainly for compatibility. Yes, they are convenient, but an important aspect of a dependency manager is to ensure that artifacts are still around years later. Allowing a URL discourages this unless it is only used to point at stable URLs, such as a legacy repository.
Right, I see what you are getting at. As long as URLs are only used for compatibility, I think this is a good idea.
>
> A URL is also back to the problems with caching artifacts in Maven/Ivy. You don't locally know whether the content at the other end has changed. You have to contact the remote server to find out. So, it shouldn't be encouraged.
Currently, the SHA-1 is stored in adept's metadata. When an artifact is downloaded, its SHA-1 is checked and you get an error if it does not match. If a SHA-1 changes, adept will download the new artifact. In this setting, I do not think there should be any caching issues as such?
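For reference, the check itself is cheap; a minimal sketch of what I mean (the helper name is made up, not adept's actual code):

    import java.security.MessageDigest

    // Hash the downloaded bytes and compare against the SHA-1 recorded in
    // the metadata; a mismatch means the artifact is corrupt or has changed.
    def sha1Matches(bytes: Array[Byte], expectedSha1: String): Boolean = {
      val digest = MessageDigest.getInstance("SHA-1").digest(bytes)
      digest.map("%02x".format(_)).mkString == expectedSha1.toLowerCase
    }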
>
> In terms of evolving, a hash->URL repository gives more flexibility. The user can swap out that hash->URL repository if their dependencies become available in a normal hash->content repository. They don't have to be in charge of the metadata (or write some transformation on the metadata) to do this.
Yes, this is true. We would then need an adept artifact server or something like it to make hosting this possible? With this approach it would also be possible to use torrents, though, so that is an enormous plus :)
>
>
>>> I like the dependencies.adept file idea. It seems to me that it is close to an ideal balance of a project providing full information to build with the principle that repositories should be configured per-machine. It sounds like the main work on a user's part right now is if a dependencies.adept file has repositories you don't want to use? (I also think the name should be more like resolve.adept or something that indicates it is an output and not an input. This is a minor issue right now though.)
>> Great that you like the idea! After rewriting the SBT plugin I did not include this, but since you think it sounds like a good idea I will put it back in!
>
> Oh, was there a reason? What sounded like a good idea to me was that it was an explicit, human readable output that you could check into your project as
I had some difficulty figuring out how to check which commit a module is in, and therefore I did not add it. I am planning on putting it back in though!
>
> a) documentation of a set of repositories that would successfully resolve the dependencies
> b) the exact commits from those repositories
> c) a way to lock down a project
>
Yep, this is exactly what I thought was nice about it! :)
> So, perhaps by default:
>
> a) the .adept file is generated by a resolve with the configured repositories and used commits
Yep, that is the way it worked.
> b) the user can lock it (adds lock: true to the file or whatever). This means that adept will use only the listed repositories at the specific commits.
Yes, that is an interesting idea. You could also simply check it into your VCS, so that you could easily spot changes.
> c) on a failed resolution, if the .adept file contains repositories that the user does not have, adept generates a warning with the missing repositories.
I was thinking that adept would pull and update itself to the latest version specified in the resolve.adept file.
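To sketch how I picture the file's contents, as a Scala data model (the field names are made up; the real format is not decided):

    // What a resolve.adept file would carry, per the a)/b)/c) points above:
    // the repositories that resolved the dependencies, their exact commits,
    // and a lock flag that pins the project to exactly those commits.
    case class PinnedRepository(name: String, uri: String, commit: String)

    case class ResolveFile(
      repositories: Seq[PinnedRepository],
      locked: Boolean // if true, resolve only against these repos at these commits
    )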
>
>> Yes, I guess if you do not want to use certain repositories with this approach you would have to delete them from the resolve.adept file (and the local adept if you have it there as well).
>> Perhaps it could be solved in a simpler manner with an SBT setting that lets you prune out the repositories you do not want to use?
>
>
>
>>> I like the idea of having all versions for a module in one file. This solves the problem of where to put the metadata when using multiple version attributes (see the other thread on Pruning modules).
>> Yes, Evan had a great idea there! :)
>>>
>>> I like the download logging with the count, amount to download, speed, and progress. I expect there will be annoying interactions with sbt logging, but I think that is a detail for later.
>> Yep. I also want to add an ETA. I think it is very important to have a good user experience (even for a command-line tool), and that means it is important to be responsive.
>> My hope is to make the downloader much better.
>> Currently it downloads as fast as possible from all locations. If a file has multiple locations, it chooses the one that finishes first.
>> My dream is to have a downloader that downloads multiple parts of a file and then figures out what is fastest (downloading everything from the same host or from another). There will have to be a mechanism for handling corrupt data as well. Again, I was thinking that torrents might help us do this, but the issue is that we would need support for them. I guess this goes back to how we get artifacts/dependencies.
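To illustrate what I meant there by "finishes first", here is a rough Future-based sketch (not the actual downloader code):

    import java.net.URL
    import scala.concurrent.{ExecutionContext, Future}
    import ExecutionContext.Implicits.global

    // Start a download from every known location and keep whichever
    // completes first; the slower ones are simply ignored.
    def downloadFirst(locations: Seq[URL]): Future[Array[Byte]] =
      Future.firstCompletedOf(locations.map { url =>
        Future {
          val in = url.openStream()
          try Iterator.continually(in.read()).takeWhile(_ != -1).map(_.toByte).toArray
          finally in.close()
        }
      })

    // Naive ETA for the progress logging: bytes left divided by throughput.
    def etaSeconds(bytesRemaining: Long, bytesPerSecond: Double): Double =
      bytesRemaining / bytesPerSecond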
>>>
>>> I've mentioned this before, but I think that as much as possible, the expected behavior should be specified in a separate, English document once that behavior is decided on. This is the easiest way for a contributor/user to know what _should_ be happening and helps identify what is just not implemented yet. Certainly POCs don't need to be fully specified, but I think making this part of the process early is important. Once behaviors are reasonably well agreed on, I'd say specify them. I can be the one to extract this information and maintain a specification (used somewhat loosely), but only if everyone else thinks this is a good idea and cooperates ;).
>> Yes, I agree that this is very, very important. I think it is important that adept has a culture where every feature change means updating the documentation.
>> For me, it would be great if you took this on now!
>> At least we should have one person coordinating this to make sure the docs are as crisp, useful, and up-to-date as they can be. I want to emphasize "crisp": it is important that we do not have more documentation than needed either. If there is too much documentation, it is harder to find what you need and the maintenance burden is heavier.
>
> For me, this is separate from user documentation. This is a specification, with ideally enough detail/precision that you could write a clean implementation from it. (In practice, that wouldn't happen without someone actually trying to do it.)
Aha, I see what you mean then. I think specifications are a lot of work to maintain and can end up buggy or detached from the actual implementation(s). If we were writing a protocol, I would agree that this is vital.
My goal with adept, to be blunt, is simply to make sure that the JVM ecosystem has a dependency management system that does not give me grey hairs and does not make me waste my time waiting for it to finish...
I am not sure that spending time on a specification that enables you to write a new implementation is the way to go.
How about we concentrate all our efforts on making a clean, well-documented implementation that makes it possible to extract whatever you want and build on top of it?
>
> -Mark
>
>>> -Mark
>>
>