> So, the question is - what say developers -- what's the balance point
> between:
> - too often, making collaborating on try pushes hard
> - too infrequent, introducing increasing push times
I wouldn't have such a big issue with Try resets if we didn't lose
information in the process. I believe every time there's been a Try
reset, I've lost data from a recent (<1 week) Try push and have had to
re-run that job, incurring extra cost to Mozilla and wasting my time. I
also periodically find myself wanting to answer questions like "what
percentage of tree closures are due to pushes that didn't go to Try
first?" Data loss stinks.
I'd say the goal should be "no data loss." I have an idea that will
enable us to achieve this.
Let's expose every newly-reset instance of the Try repo as a separate
URL. We would still push to ssh://hg.mozilla.org/try, but the URLs
printed and the URLs used by automation would be URLs to repos that
would never go away. e.g.
https://hg.mozilla.org/tries/try1/rev/840f122d1286 ("try1" being the
important bit in there). When we reset Try, you'd hand out URLs to
"try2." You could reset the writable Try repo as frequently as you
desired and aside from a slightly different repo URL being given out,
nobody should notice.
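To make the idea concrete, here is a minimal sketch of the proposed URL scheme. It assumes a simple integer "reset generation" counter that bumps on every Try reset; the function name and the `tries/tryN` path layout are illustrative assumptions based on the example above, not an existing hg.mozilla.org API.

```python
# Hypothetical sketch: pushes still go to one writable repo, but the
# URL handed back to users and automation embeds the current reset
# generation ("try1", "try2", ...), and that URL never goes away.

WRITE_URL = "ssh://hg.mozilla.org/try"  # unchanged across resets


def permanent_url(reset_generation: int, rev: str) -> str:
    """Return the immutable read URL for a changeset pushed during
    the given reset generation of the Try repo."""
    return "https://hg.mozilla.org/tries/try%d/rev/%s" % (
        reset_generation,
        rev,
    )


# After a reset, only the generation counter bumps; URLs issued under
# earlier generations keep resolving against the retained snapshots.
print(permanent_url(1, "840f122d1286"))
# → https://hg.mozilla.org/tries/try1/rev/840f122d1286
```

The key property is that resetting the writable repo only increments the counter; nothing previously handed out is invalidated.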
The main drawbacks of this approach that I can think of are all in
automation: parts of automation are very repo/URL centric and having
effectively dynamic URLs might break assumptions. But making automation
work against arbitrary URLs is a good thing: it makes automation more
flexible and lets people experiment with alternate repo hosting, landing
tools, landing-integrated code review tools, etc. without requiring
special involvement from RelEng. "Everything is a web service and is
self-service," etc.
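As a sketch of what "working against arbitrary URLs" could look like: if automation keys each job on an explicit (repo URL, revision) pair instead of a hardcoded repo name, a Try reset, or any alternate hosting experiment, needs no special-casing. The type and function names here are illustrative assumptions, not a real Mozilla scheduling API.

```python
# Hypothetical sketch: automation parameterized by repo URL rather
# than assuming a fixed, well-known Try repository.

from dataclasses import dataclass


@dataclass(frozen=True)
class BuildRequest:
    repo_url: str  # any repo URL, not just the canonical Try one
    revision: str  # changeset to build/test


def schedule(req: BuildRequest) -> str:
    # A real scheduler would enqueue the job against its queue;
    # here we just render a job identifier from the request.
    return "%s@%s" % (req.repo_url, req.revision)


req = BuildRequest("https://hg.mozilla.org/tries/try2", "840f122d1286")
print(schedule(req))
# → https://hg.mozilla.org/tries/try2@840f122d1286
```

Because the URL is just data flowing through the system, swapping in "try2" after a reset, or a wholly different host, is a no-op for the scheduler.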