
What do we want to do with the Valgrind builds?


Phil Ringnalda

Jan 1, 2011, 11:50:38 PM
We have Valgrind builds running on mozilla-central, once a day, whenever
there happen to be 5 slaves free after the once a day trigger. In
theory, I think, we have them on both Linux and Linux64, though
apparently there haven't been 5 free Linux64 slaves since forever, or
maybe those are turned off.

They are likely to be red a fair amount: we don't have it running on
Try; it's not currently something that everyone does themselves before
pushing; we've already had one transient failure in bug 621054
(apparently in a non-WebGL path, so it went away when that was enabled);
and bug 620828 wants to make both our existing leaks and future leaks
turn it red.

If I r+ bug 617431, then they'll be visible on tbpl, and then some
random push at 8am on a slow day, or more likely at 10pm or 1am after a
busy day, will have a red build against it from something that it did
not cause, and nobody will know what to do with it, and if they don't
hide it in tinderbox by the time I get up then I will.

Do we want Valgrind badly enough to do what we need to do to have it
running on Try and on every push, rather than as this idle-time
nightly?

The only way I can see to make the current once-a-day thing acceptable
would be to hide it. But then the only way I've been noticing that it
ran at all, seeing firebot say "Firefox: Build 'valgrind-linux'
added to tinderbox. (Status: Burning)," will also go away, and nobody
will ever notice anything it does.

L. David Baron

Jan 2, 2011, 12:31:49 AM
to dev-tree-...@lists.mozilla.org
On Saturday 2011-01-01 20:50 -0800, Phil Ringnalda wrote:
> Do we want Valgrind badly enough to do what we need to do to have it
> running on Try and on every push, rather than as this idle-time
> nightly?

I think there's value in having tests that run once a day. If we
look at the results, it can be quite useful, even though it doesn't
run for every push. Running more often is obviously better, but
there may be more value in using our limited resources to run more
tests less often rather than run fewer tests more often.

However, I think we need a way to present the results of tests that
run once a day so that we see them, but that they aren't "blamed" on
one push and they don't cause immediate tree closure when they fail.
Tinderbox / TBPL clearly doesn't do that. I don't know what the
right way is, though if the tests have clear failure/success
conditions, then email to dev.tree-management might be. I'd love to
hear better ideas, though.

-David

--
L. David Baron http://dbaron.org/
Mozilla Corporation http://www.mozilla.com/

Axel Hecht

Jan 2, 2011, 2:46:23 PM

FWIW, if the schedulers are set up to do so, those builds would be
associated with all changes since the last build, and if the build
status exported that, it'd show up.

The other question is whether there are implications for the tree
policies when a test is run only occasionally. Like, it's tough to back
out a change that's almost 24 hours old.

Axel

Nicholas Nethercote

Jan 3, 2011, 7:23:11 PM
to Julian Seward
On Jan 1, 8:50 pm, Phil Ringnalda <philringna...@gmail.com> wrote:
>
> They are likely to be red a fair amount: we don't have it running on
> Try; it's not currently something that everyone does themselves before
> pushing; we've already had one transient failure in bug 621054
> (apparently in a non-WebGL path, so it went away when that was enabled);
> and bug 620828 wants to make both our existing leaks and future leaks
> turn it red.

Until we have experience, that's just speculation about how often
it'll be red.

Besides, if it's not automated, we'll just end up in the current
situation where someone
(e.g. jseward) will run it manually every so often and report the
problem, and someone will have to fix it. Because Valgrind's usually
right, and the things it complains about (memory leaks, bad memory
accesses, uses of undefined values, etc.) almost always warrant fixing
quickly.

That avoids the awkwardness of an obviously red/orange tree by hiding
the problem. That doesn't sound good to me. It also leads to less
useful regression ranges than automated daily testing does.

How slow is the Valgrind test run? From the way it's scheduled it
sounds like it takes a long time. But I thought the idea was for it
to run a tiny fraction of the usual test suite, precisely because it
does cause a big slowdown.

Nick

Mark Banner

Jan 4, 2011, 8:30:19 AM
On 02/01/2011 05:31, L. David Baron wrote:
> However, I think we need a way to present the results of tests that
> run once a day so that we see them, but that they aren't "blamed" on
> one push and they don't cause immediate tree closure when they fail.
> Tinderbox / TBPL clearly doesn't do that. I don't know what the
> right way is, though if the tests have clear failure/success
> conditions, then email to dev.tree-management might be. I'd love to
> hear better ideas, though.

From a tinderbox perspective, I think you either send the results to a
different tree, or classify the builders (somehow) as non-tree-closing.

From a TBPL perspective, I think it could be extended to integrate
these builders into a separate column/location for one-off builds. This
would also make better sense for nightly builds.

Standard8

Chris AtLee

Jan 5, 2011, 10:35:35 AM
to dev-tree-...@lists.mozilla.org
On 03/01/11 07:23 PM, Nicholas Nethercote wrote:
> How slow is the Valgrind test run? From the way it's scheduled it
> sounds like it takes a long time. But I thought the idea was for it
> to run a tiny fraction of the usual test suite, precisely because it
> does cause a big slowdown.

Total time is anywhere from 20 minutes to 2 hours. The bulk of the time
is actually doing the special valgrind build, not running the tests, so
if we already have an object dir handy, the recompile is usually quick.

Mike Shaver

Jan 5, 2011, 10:57:26 AM
to Chris AtLee, dev-tree-...@lists.mozilla.org
On Wed, Jan 5, 2011 at 7:35 AM, Chris AtLee <cat...@mozilla.com> wrote:
> On 03/01/11 07:23 PM, Nicholas Nethercote wrote:
>>
>> How slow is the Valgrind test run?  From the way it's scheduled it
>> sounds like it takes a long time.  But I thought the idea was for it
>> to run a tiny fraction of the usual test suite, precisely because it
>> does cause a big slowdown.
>
> Total time is anywhere from 20 minutes to 2 hours.  The bulk of the time is
> actually doing the special valgrind build, not running the tests, so if we
> already have an object dir handy, the recompile is usually quick.

Maybe we should look at making valgrind the default config -- IIRC it
doesn't have a lot of overhead, and we can include the valgrind
headers easily.

Mike
