It would be nice if a commit could clean up the current trunk; there's a
trivial typo. See ticket #5569:
http://code.djangoproject.com/ticket/5569
Michael
Fixed in [6404]. Sorry for the inconvenience.
Yours
Russ Magee %-)
> Sorry for the inconvenience.
Oh come on ... go ahead and let's have a bit more of these buglets, so that
the rest of us feel less bad for proposing wrong patches ;-)
BTW, wouldn't it be nice (TM) if new commits first went to a side branch and
were only merged to trunk after the buildbot has given the green light? I have
no idea if this is feasible ;-)
Oh... ok then. I've just committed [6405] which randomly renames 1 in
10 variables in the Django source tree. Have fun :-)
(for the humour impaired - only kidding)
> BTW, wouldn't it be nice (TM) if new commits first went to a side branch and
> were only merged to trunk after the buildbot has given the green light? I have
> no idea if this is feasible ;-)
There is a class of bugs that will never get caught by this approach -
e.g., changes to the django-admin command line handling. However, it
would be useful for everything that doesn't fall into this edge case.
I have a vague recollection that this can be done in SVN as a
pre-commit trigger - i.e., commits are received and applied, but the
commit isn't actually made effective until the tests all pass. It is
probably worth opening a ticket so that this idea doesn't get
forgotten.
Yours,
Russ Magee %-)
> It is probably worth opening a ticket so that this idea doesn't get
> forgotten.
#5571
G'night!
I don't think holding things in limbo for a really long time using
pre-commit is a good plan. You're holding up the subversion commit
process mid-stream.
Bear in mind that using virtual machines, even on very powerful
hardware, you're looking at the better part of an hour or more to run
all the tests for, say, the six main database backends and then testing
a couple of backends on Python 2.3 as well (oh, and mysql_old should
really be tested with both an old and a recent MySQLdb to avoid
accidental breakage). The Oracle tests take quite a while, in
particular, because setting up the databases for the tests there seems a
bit slower (it might well be because that's the only one I run
virtualised).
We do more than one commit an hour during busy periods, so the tests
are going to back up. And it will mean that you don't get to see
somebody else's recent changes for an hour or so.
In short, I don't think there's been a huge rash of breaking the tree
because of bad commits. It happens maybe once every few months and is
usually trivial to fix. If somebody gets hurt by it, they can check out
an older revision temporarily. Periodically checking the buildbot output
might be useful for when we don't have time to run every combination of
backend and Python version locally.
Let's please be careful about being too preventative at the cost of
productivity here. A big way bugs are found is by people using the
software. The test suite is useful for very basic sanity checking, but
real bugs are found by people, so let's get it into their hands.
Malcolm
Perhaps the buildbot could send build failures to django-updates; I've
even seen setups that list the commits between the last good build and
the first failed one.
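As a rough illustration of that report (revision numbers and author names below are made up, not real Django commits), the blame list is just the commits after the last green build, up to and including the first red one:

```python
# Sketch of the "commits between the last good build and the first
# failed one" idea. Revisions and authors are invented placeholders.

def suspect_commits(commits, last_good_rev, first_failed_rev):
    """Return the (revision, author) pairs that could have broken the
    build: strictly after the last good revision, up to and including
    the first failed one."""
    return [(rev, author) for rev, author in commits
            if last_good_rev < rev <= first_failed_rev]

log = [(6404, "alice"), (6405, "bob"), (6406, "carol"), (6407, "dave")]
# If r6404 was the last green build and r6406 the first red one,
# r6405 and r6406 are the suspects.
suspects = suspect_commits(log, 6404, 6406)
```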
Bill
I'm also a little concerned that the noise level would be high for
those messages - that ultimately people would ignore them because of
frequency.
Maybe a separate list just for the buildbot output? I'm open to suggestions...
-joe
The current state of the tree is a little misleading; post-sprint, we
have had a few test failures, but most of the time the test suite
should be in a passing state.
> Maybe a separate list just for the buildbot output? I'm open to suggestions...
Sounds like a good idea to me. A 'build errors only' list would be
relatively low traffic, and could be easily ignored at individual
discretion.
Another approach would be to only send the failure message to the
committer that stimulated the build. i.e., you break it, you get a
nastygram in your inbox :-)
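For what it's worth, both variants map onto buildbot's mail notification, assuming the 0.7-era API (this is a sketch from memory of the docs of the time, not a tested config; the addresses are placeholders):

```python
# Fragment of a hypothetical buildbot master.cfg (0.7-era API).
from buildbot.status.mail import MailNotifier

# "You break it, you get a nastygram": mail the committers on the
# blamelist when a build goes red.
c['status'].append(MailNotifier(
    fromaddr="buildbot@example.com",
    mode="problem",              # only mail when a build newly fails
    sendToInterestedUsers=True,  # i.e. the responsible committers
))

# Or the separate, low-traffic build-errors list:
c['status'].append(MailNotifier(
    fromaddr="buildbot@example.com",
    mode="failing",
    sendToInterestedUsers=False,
    extraRecipients=["buildbot-errors@example.com"],
))
```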
On a related note -
- Have we got any documentation pointing to the existence of the
buildbot? I was thinking that it should be mentioned somewhere on the
'how to contribute' page.
- The build last night ([6407]) should have passed, but the Postgres
build seems to have crapped out due to taking too long. Is the test
max duration a setting that is being too conservative here, or has
something else gone wrong?
Yours,
Russ Magee %-)
Whilst well-intentioned, this tends not to work for community-oriented
projects. The person who committed the change may not necessarily be
available to fix it, so there's a lot of benefit to sending it to a list
where the group will see it and *somebody* will end up fixing it. Also,
there are a lot of ways for completely random failures to happen. We've
already seen this with config tweaks and timeouts (URL tests could fail
due to poor network performance, for example, because we test
validate_exists on the URLField).
We're all helping each other here. Let's keep it that way and try not
to get held hostage too much by the green bar in buildbot. It's a guide,
not a decision maker.
Regards,
Malcolm
No documentation as yet, and we're still in the process of getting
settled. We don't have the builds sorted out beyond the initial "get
'em running for the sprint", and they're currently relying on root
access to do everything they do. We need to get them running a tad
more independently, and then we'll be more rarin' to go.
A bit earlier, we had some troubles with databases being in an
inconsistent state when the builds were running, but when I last
looked at the bot, things appeared to have been resolved.
-joe