
Removing tinderbox-builds from archive.mozilla.org


nth...@mozilla.com

May 9, 2018, 2:49:37 AM
We have approximately 400 TB of old files in the two directories
firefox/tinderbox-builds and mobile/tinderbox-builds on
archive.mozilla.org [1,2]. I'm suggesting it's time to remove them.

The files come from pushes into the gecko repositories when the builds &
tests were run on Buildbot. The subdirectories for mozilla-inbound and
mozilla-central run from September 2015 up to August 2017, although the
later dates may only contain test logs. Others are for
discontinued branches like b2g-inbound, graphics, and fx-team. We used a
30 day retention policy until Sep 2015, when the backend storage for
archive.m.o was migrated to AWS S3 (at that time S3 did not have the
lifecycle management feature).

As the build jobs were gradually moved to Taskcluster the artifacts
moved into the Taskcluster index, which has a retention period of 1
year. For some platforms there is now a gap between the old buildbot
files and the taskcluster ones; e.g. linux64 on buildbot goes up to
January 2017, while taskcluster starts in May 2017.

I'd like to remove the two tinderbox-builds directories. Please respond
if you know of any reasons we should not proceed.

[1] https://archive.mozilla.org/pub/firefox/tinderbox-builds/
[2] https://archive.mozilla.org/pub/mobile/tinderbox-builds/

Xidorn Quan

May 9, 2018, 5:00:29 AM
to dev-pl...@lists.mozilla.org
Would removing those files affect the ability of mozregression to locate pushes of old regressions?

- Xidorn
> _______________________________________________
> dev-platform mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

William Lachance

May 9, 2018, 11:16:40 AM
On 2018-05-09 4:59 AM, Xidorn Quan wrote:
> Would removing those files affect the ability of mozregression to locate pushes of old regressions?

Good question. I checked and it seems that the answer is no (yay).

For nightly builds in mozregression, we fetch stuff out of:

https://archive.mozilla.org/pub/firefox/nightly/

For inbound builds, we've been using taskcluster for a while.

Relevant code is in:
https://github.com/mozilla/mozregression/blob/44f216c091072b7d6c1c2136041e7c04b0e1c89c/mozregression/fetch_configs.py

Will
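(For illustration: the date-based bisection mozregression performs over nightlies can be sketched roughly like this. This is a simplified sketch, not mozregression's actual code; the `is_regressed` predicate is hypothetical and stands in for downloading a nightly from archive.mozilla.org and testing it by hand.)

```python
from datetime import date

def bisect_nightlies(good, bad, is_regressed):
    """Narrow a regression range over nightly build dates.

    good/bad are datetime.date objects known to be before/after the
    regression; is_regressed(d) reports whether the nightly from
    date d exhibits the bug.
    """
    while (bad - good).days > 1:
        mid = good + (bad - good) // 2
        if is_regressed(mid):
            bad = mid    # regression is on or before mid
        else:
            good = mid   # regression is after mid
    return bad           # first known-bad nightly date

# Example: pretend the regression landed in the 2017-03-10 nightly.
landed = date(2017, 3, 10)
first_bad = bisect_nightlies(date(2017, 1, 1), date(2017, 6, 1),
                             lambda d: d >= landed)
```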

Botond Ballo

May 9, 2018, 11:49:14 AM
to William Lachance, dev-platform
On Wed, May 9, 2018 at 11:16 AM, William Lachance <wlac...@mozilla.com> wrote:
> On 2018-05-09 4:59 AM, Xidorn Quan wrote:
>>
>> Would removing those files affect the ability of mozregression to locate
>> pushes of old regressions?
>
>
> Good question. I checked and it seems that the answer is no (yay).
>
> For nightly builds in mozregression, we fetch stuff out of:
>
> https://archive.mozilla.org/pub/firefox/nightly/
>
> For inbound builds, we've been using taskcluster for a while.

What about for bugs where the regression range is older, and falls
into the time period where we no longer have taskcluster builds, but
have these buildbot builds?

Cheers,
Botond

William Lachance

May 9, 2018, 12:39:38 PM
On 2018-05-09 11:48 AM, Botond Ballo wrote:
>> Good question. I checked and it seems that the answer is no (yay).
>>
>> For nightly builds in mozregression, we fetch stuff out of:
>>
>> https://archive.mozilla.org/pub/firefox/nightly/
>>
>> For inbound builds, we've been using taskcluster for a while.
> What about for bugs where the regression range is older, and falls
> into the time period where we no longer have taskcluster builds, but
> have these buildbot builds?

mozregression won't be able to bisect into inbound branches then, but I
believe we've always been expiring build artifacts created from
integration branches after a few months in any case.

My impression was that people use mozregression primarily for tracking
down relatively recent regressions. Please correct me if I'm wrong.

Will

L. David Baron

May 9, 2018, 1:12:05 PM
to William Lachance, dev-pl...@lists.mozilla.org
It's useful for tracking down regressions no matter how old the
regression is; I pretty regularly see mozregression finding useful
data on bugs that regressed multiple years ago.

-David

--
𝄞 L. David Baron http://dbaron.org/ 𝄂
𝄢 Mozilla https://www.mozilla.org/ 𝄂
Before I built a wall I'd ask to know
What I was walling in or walling out,
And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)

Adam Roach

May 9, 2018, 1:56:21 PM
to L. David Baron, William Lachance, dev-pl...@lists.mozilla.org
On 5/9/18 12:11 PM, L. David Baron wrote:
> It's useful for tracking down regressions no matter how old the
> regression is; I pretty regularly see mozregression finding useful
> data on bugs that regressed multiple years ago.

I want to agree with David -- I recall one incident in particular where
I used mozregression to track a problem down to a three-year-old change
that was only exposed when we flipped the big "everyone gets e10s now"
switch. I would have been pretty lost figuring out the root cause
without older builds.

/a

Ted Mielczarek

May 9, 2018, 2:01:57 PM
to dev-pl...@lists.mozilla.org
On Wed, May 9, 2018, at 1:11 PM, L. David Baron wrote:
> > mozregression won't be able to bisect into inbound branches then, but I
> > believe we've always been expiring build artifacts created from integration
> > branches after a few months in any case.
> >
> > My impression was that people use mozregression primarily for tracking down
> > relatively recent regressions. Please correct me if I'm wrong.
>
> It's useful for tracking down regressions no matter how old the
> regression is; I pretty regularly see mozregression finding useful
> data on bugs that regressed multiple years ago.

To be clear here--we still have an archive of nightly builds dating back to 2004, so you should be able to bisect to a single day using that. We haven't ever had a great policy for retaining individual CI builds like these tinderbox-builds. They're definitely useful, and storage is not that expensive, but given the number of build configurations we produce nowadays and the volume of changes being pushed we can't archive everything forever.

-Ted

William Lachance

May 9, 2018, 2:09:14 PM
On 2018-05-09 2:01 PM, Ted Mielczarek wrote:
>> It's useful for tracking down regressions no matter how old the
>> regression is; I pretty regularly see mozregression finding useful
>> data on bugs that regressed multiple years ago.
> To be clear here--we still have an archive of nightly builds dating back to 2004, so you should be able to bisect to a single day using that. We haven't ever had a great policy for retaining individual CI builds like these tinderbox-builds. They're definitely useful, and storage is not that expensive, but given the number of build configurations we produce nowadays and the volume of changes being pushed we can't archive everything forever.

jrmuizel pointed me to this bug; I assume this kind of scenario is not
that uncommon, where a manual source bisection (which presumably takes
a bunch of scarce developer time) is required because the integration
builds have expired:

https://bugzilla.mozilla.org/show_bug.cgi?id=1444055

Based on the feedback I'm seeing here, I think it might be worth
extending the expiry dates a little further back for some common build
configurations used by mozregression. Obviously no one cares about a
linux32 asan build from 2013, but I feel like there might be a better
sweet spot than where we're at.

Will

Andrew Halberstadt

May 9, 2018, 2:58:33 PM
to William Lachance, dev-pl...@lists.mozilla.org
Going back to the original question, it looks like mozregression doesn't
use the builds that Nick wants to remove anyway. So regardless of our
retention policies, it looks like removing these builds would have no
impact on mozregression's effectiveness. Is that an accurate statement?

-Andrew

On Wed, May 9, 2018 at 2:10 PM William Lachance <wlac...@mozilla.com>
wrote:

William Lachance

May 9, 2018, 3:09:01 PM
On 2018-05-09 2:58 PM, Andrew Halberstadt wrote:
> Going back to the original question, it looks like mozregression doesn't
> use the builds that Nick wants to remove anyway. So regardless of our
> retention policies, it looks like removing these builds would have no
> impact on mozregression's effectiveness. Is that an accurate statement?

Correct, see my original reply for details.

https://groups.google.com/d/msg/mozilla.dev.platform/Ed3-nE3l32o/0tQgGPv6CQAJ

Will

Gregory Szorc

May 11, 2018, 7:07:38 PM
to Ted Mielczarek, dev-platform
On Wed, May 9, 2018 at 11:01 AM, Ted Mielczarek <t...@mielczarek.org> wrote:

> On Wed, May 9, 2018, at 1:11 PM, L. David Baron wrote:
> > > mozregression won't be able to bisect into inbound branches then, but I
> > > believe we've always been expiring build artifacts created from
> integration
> > > branches after a few months in any case.
> > >
> > > My impression was that people use mozregression primarily for tracking
> down
> > > relatively recent regressions. Please correct me if I'm wrong.
> >
> > It's useful for tracking down regressions no matter how old the
> > regression is; I pretty regularly see mozregression finding useful
> > data on bugs that regressed multiple years ago.
>
> To be clear here--we still have an archive of nightly builds dating back
> to 2004, so you should be able to bisect to a single day using that. We
> haven't ever had a great policy for retaining individual CI builds like
> these tinderbox-builds. They're definitely useful, and storage is not that
> expensive, but given the number of build configurations we produce nowadays
> and the volume of changes being pushed we can't archive everything forever.


It's worth noting that once builds are deterministic, a build system is
effectively a highly advanced caching mechanism. It follows that cache
eviction is therefore a tolerable problem: if the entry isn't in the cache,
you just build again! Artifact retention and expiration boils down to a
trade-off between the cost of storage and the convenience of accessing
something immediately (as opposed to waiting several dozen minutes to
populate the cache).
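(The "build system as a cache" framing can be sketched as follows. This is a hypothetical sketch, not Mozilla infrastructure: the key is a digest of all build inputs, so with a deterministic build a cache miss is tolerable because identical inputs can always be rebuilt into identical outputs.)

```python
import hashlib

def cache_key(inputs):
    """Digest of all build inputs (sources, toolchain, config); with a
    deterministic build, identical inputs imply identical outputs."""
    h = hashlib.sha256()
    for name in sorted(inputs):
        h.update(name.encode())
        h.update(inputs[name])
    return h.hexdigest()

artifact_cache = {}

def get_build(inputs, build):
    """Return the artifact, rebuilding on a cache miss. Eviction is
    tolerable: if the entry is gone, we just build again."""
    key = cache_key(inputs)
    if key not in artifact_cache:
        artifact_cache[key] = build(inputs)
    return artifact_cache[key]
```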

The good news is that Linux Firefox builds have been effectively
deterministic (modulo PGO and some minor build details like the build time)
for several months now (thanks, glandium!). And moving to Clang on all
platforms will make it easier to achieve deterministic builds on other
platforms. The bad news is we still have many areas of CI that are not
hermetic and attempts to retrigger Firefox build tasks in the future have a
very high probability of failing for numerous reasons (e.g. some dependent
task of the build hits a 3rd party server that is no longer available or
has deleted a file). In other words, our CI results may not be reproducible
in the future. So if we delete an artifact, even though the build is
deterministic, we may not have all the inputs to reconstruct that result.

Making CI hermetic and reproducible far in the future is a hard problem.
There are esoteric failure scenarios like "what if we need to fetch content
from a server in 2030 but TLS 1.2 has been disabled due to a critical
vulnerability and code in the hermetic build task doesn't support TLS 1.3."
In order to realistically achieve reproducible builds in the future, we
need to store *all* inputs somewhere reliable where they will always be
available. Version control is one possibility. A content-indexed service
like tooltool is another. (At Google, they check in the source code for
Clang, glibc, binutils, Linux, etc into version control so all they need is
a version revision and a bootstrap compiler (which I also suspect they
check into the monorepo) to rebuild the world from source.)

What I'm trying to say is we're making strides towards making builds
deterministic and reproducible far in the future. So hopefully in a few
years we won't need to be concerned about deleting old data because our
answer will be "we can easily reproduce it at any time."

Boris Zbarsky

May 11, 2018, 10:48:04 PM
On 5/11/18 7:06 PM, Gregory Szorc wrote:
> Artifact retention and expiration boils down to a
> trade-off between the cost of storage and the convenience of accessing
> something immediately (as opposed to waiting several dozen minutes to
> populate the cache).

Just to be clear, when doing a bisect, one _can_ just deal with local
builds. But the point is that then it takes tens of minutes per build
as you point out. So a bisect task that might otherwise take 10-15
minutes total (1 minute per downloaded build) ends up taking hours...

-Boris

Gregory Szorc

May 11, 2018, 11:28:58 PM
to Boris Zbarsky, dev-platform
And this is where bisect-in-the-cloud would come in. You could trigger
dozens of builds on Taskcluster and they would all come back in the time it
takes a single build to run. Or you codify the thing you are testing for,
submit it to bisect-in-the-cloud, and it does everything for you and
reports on the results. It still might take dozens of minutes to a few
hours. But at least you wouldn't be burdened with the overhead of doing
everything manually.

Boris Zbarsky

May 11, 2018, 11:54:30 PM
On 5/11/18 11:28 PM, Gregory Szorc wrote:
> You could trigger dozens of builds on Taskcluster and they would all come back in the time it
> takes a single build to run.

This is doable, and basically corresponds to doing N-ary search for N>2.
With some tooling support so you don't have to pick the revisions by
hand, could work.
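(The N-ary search idea could look roughly like this. A sketch of the concept under the assumption that up to n builds can be triggered and tested in parallel per round; the `is_regressed` predicate is hypothetical.)

```python
def nary_bisect(good, bad, is_regressed, n=8):
    """Find the first bad revision index in (good, bad], testing up to
    n evenly spaced revisions per round. If the n builds run in
    parallel, each round costs roughly one build's wall-clock time."""
    while bad - good > 1:
        span = bad - good
        # Pick up to n interior probe points, evenly spaced.
        probes = sorted({good + span * i // (n + 1)
                         for i in range(1, n + 1)} - {good})
        for rev in probes:
            if is_regressed(rev):
                bad = rev   # first regressed probe bounds us above
                break
            good = rev      # clean probe moves the lower bound up
    return bad

# Example: 10000 revisions, regression at 6789.
assert nary_bisect(0, 10000, lambda r: r >= 6789) == 6789
```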

> Or you codify the thing you are testing for,
> submit it to bisect-in-the-cloud, and it does everything for you and
> reports on the results. It still might take dozens of minutes to a few
> hours. But at least you wouldn't be burdened with the overhead of doing
> everything manually.

Yeah, the typical thing that needs testing during a bisect is some
random website, that's a bit hard to fully automate... I agree that for
cases we can turn into an automated test we should be able to do way
better. The latency is not that big a deal usually if it's truly
zero-attention.

-Boris

Eric Rescorla

May 12, 2018, 8:20:50 PM
to Gregory Szorc, dev-platform, Ted Mielczarek
On Fri, May 11, 2018 at 4:06 PM, Gregory Szorc <g...@mozilla.com> wrote:

> On Wed, May 9, 2018 at 11:01 AM, Ted Mielczarek <t...@mielczarek.org>
> wrote:
>
> > On Wed, May 9, 2018, at 1:11 PM, L. David Baron wrote:
> > > > mozregression won't be able to bisect into inbound branches then,
> but I
> > > > believe we've always been expiring build artifacts created from
> > integration
> > > > branches after a few months in any case.
> > > >
> > > > My impression was that people use mozregression primarily for
> tracking
> > down
> > > > relatively recent regressions. Please correct me if I'm wrong.
> > >
> > > It's useful for tracking down regressions no matter how old the
> > > regression is; I pretty regularly see mozregression finding useful
> > > data on bugs that regressed multiple years ago.
> >
> > To be clear here--we still have an archive of nightly builds dating back
> > to 2004, so you should be able to bisect to a single day using that. We
> > haven't ever had a great policy for retaining individual CI builds like
> > these tinderbox-builds. They're definitely useful, and storage is not
> that
> > expensive, but given the number of build configurations we produce
> nowadays
> > and the volume of changes being pushed we can't archive everything
> forever.
>
>
> It's worth noting that once builds are deterministic, a build system is
> effectively a highly advanced caching mechanism. It follows that cache
> eviction is therefore a tolerable problem: if the entry isn't in the cache,
> you just build again! Artifact retention and expiration boils down to a
> trade-off between the cost of storage and the convenience of accessing
> something immediately (as opposed to waiting several dozen minutes to
> populate the cache).
>
This might end up being true, but it seems a bit optimistic to me. I've
worked with lots of systems much simpler than our builds that were in
theory reproducible, but then found when I went back to reproduce the
results that things weren't so simple. You allude to one case above:
it's one thing to have reproducible builds from days ago and quite
another from years ago.

Given the incredibly low cost of storage (the street price of Glacier
is $0.004/GB/month) [0], I'd be pretty hesitant to delete data which we
thought we might want to use again just because we figured we'd
reproduce it.

-Ekr

[0] https://aws.amazon.com/glacier/
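(As a rough back-of-the-envelope check against the ~400 TB figure from the start of the thread: the Glacier rate is the one quoted above; the S3 standard rate is my assumption of ballpark 2018 pricing, not a figure from the thread.)

```python
GB_PER_TB = 1024
size_gb = 400 * GB_PER_TB            # ~400 TB of tinderbox-builds

glacier_per_gb = 0.004               # $/GB/month, quoted above [0]
s3_standard_per_gb = 0.023           # $/GB/month, assumed ballpark

glacier_monthly = size_gb * glacier_per_gb
s3_monthly = size_gb * s3_standard_per_gb

print(f"Glacier: ~${glacier_monthly:,.0f}/month")   # ~$1,638/month
print(f"S3 std:  ~${s3_monthly:,.0f}/month")        # ~$9,421/month
```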

Jean-Yves Avenard

May 13, 2018, 11:01:05 AM
to dev-pl...@lists.mozilla.org, bzba...@mit.edu
Hi


On 12/05/2018 04:47, Boris Zbarsky wrote:
> Just to be clear, when doing a bisect, one _can_ just deal with local
> builds.  But the point is that then it takes tens of minutes per build
> as you point out.  So a bisect task that might otherwise take 10-15
> minutes total (1 minute per downloaded build) ends up taking hours...

I've found it pretty difficult to build old versions once they're more
than a couple of months old: different versions of rustc, dev tools no
longer supported (particularly on Windows, with its requirement to
always use the latest version of Visual Studio).

So downloading the build is in practice the only way to bisect things.

Randell Jesup

May 15, 2018, 1:53:20 PM
Also (as others have pointed out) going too far back (often not that
far) may run you into tool differences that break re-building old revs.
Hopefully you don't get variable behavior, just a failure-to-build at
some point. I'm not sure how much Rust has made this worse.

--
Randell Jesup, Mozilla Corp
remove "news" for personal email

Tom Ritter

May 17, 2018, 10:22:40 AM
to Mozilla
I agree with ekr in general, but I would also be curious to discover
what failures we would experience in practice and how we could
overcome them.

I think many of the issues experienced with local builds are
preventable by doing a TC-like build; just build in a docker container
(for Linux/Mac) and auto-build any toolchains needed. (Which would be
part of bisect in the cloud automatically.) I've been doing this
locally lately and it is not a friendly process right now though.

Of course on Windows it's an entirely different story. But one more
reason to pursue clang-cl builds on Linux ;)

-tom

Mike Kaply

May 17, 2018, 10:33:16 AM
to Tom Ritter, Mozilla
Can we move the builds temporarily, see if it affects workflows over a
few months, and if not, then remove them?

Mike

On Thu, May 17, 2018 at 9:22 AM, Tom Ritter <t...@mozilla.com> wrote:

> I agree with ekr in general, but I would also be curious to discover
> what failures we would experience in practice and how we could
> overcome them.
>
> I think many of the issues experienced with local builds are
> preventable by doing a TC-like build; just build in a docker container
> (for Linux/Mac) and auto-build any toolchains needed. (Which would be
> part of bisect in the cloud automatically.) I've been doing this
> locally lately and it is not a friendly process right now though.
>
> Of course on Windows it's an entirely different story. But one more
> reason to pursue clang-cl builds on Linux ;)
>
> -tom
>
>
> On Tue, May 15, 2018 at 12:53 PM, Randell Jesup <rjesu...@jesup.org>
> wrote:

Haik Aftandilian

May 17, 2018, 2:06:34 PM
to Jean-Yves Avenard, Boris Zbarsky, dev-platform
On Sun, May 13, 2018 at 8:00 AM, Jean-Yves Avenard <jyav...@mozilla.com>
wrote:

> Hi
> On 12/05/2018 04:47, Boris Zbarsky wrote:
>
>> Just to be clear, when doing a bisect, one _can_ just deal with local
>> builds. But the point is that then it takes tens of minutes per build as
>> you point out. So a bisect task that might otherwise take 10-15 minutes
>> total (1 minute per downloaded build) ends up taking hours...
>>
>
> I've found it pretty difficult to build old versions once past a couple of
> months. Different version of rustc, dev tools not yet supported
> (particularly on Windows with requirements to always use the last version
> of Visual Studio
>

​Just a note about using an older version of rustc: the "rustup override"
command can be used to downgrade the rust toolchain for a given repo while
not affecting the version that is used on the rest of the system. That can
be handy when trying to build older trees.

Haik

Chris AtLee

May 18, 2018, 1:13:53 PM
to dev-platform
The discussion about what to do about these particular buildbot builds has
naturally shifted into a discussion about what kind of retention policy is
appropriate for CI builds.

I believe that right now we keep all CI build artifacts for 1 year. Nightly
and release builds are kept forever. There's certainly an advantage to
keeping the CI builds, as they assist in bisecting regressions. However,
they become less useful over time.

IMO, it's not reasonable to keep CI builds around forever, so the question
is then how long to keep them? 1 year doesn't quite cover a full ESR cycle,
would 18 months be sufficient for most cases?

Alternatively, we could investigate having different expiration policies
for different types of artifacts. My assumption is that the Firefox binaries
for the opt builds are the most useful over the long term, and that other
build configurations and artifacts are less useful. How accurate is that
assumption?

Archiving these artifacts into Glacier would cut the cost of storing them
significantly, but also make them much harder to access. It can take 3-5
hours to retrieve objects from Glacier, and we would need to implement some
API or process to request access to archived objects.


On Thu, 17 May 2018 at 10:33, Mike Kaply <mka...@mozilla.com> wrote:

> Can we move the builds temporarily and see if it affects workflows over a
> few months and if not, then remove them?
>
> Mike
>
> On Thu, May 17, 2018 at 9:22 AM, Tom Ritter <t...@mozilla.com> wrote:
>
> > I agree with ekr in general, but I would also be curious to discover
> > what failures we would experience in practice and how we could
> > overcome them.
> >
> > I think many of the issues experienced with local builds are
> > preventable by doing a TC-like build; just build in a docker container
> > (for Linux/Mac) and auto-build any toolchains needed. (Which would be
> > part of bisect in the cloud automatically.) I've been doing this
> > locally lately and it is not a friendly process right now though.
> >
> > Of course on Windows it's an entirely different story. But one more
> > reason to pursue clang-cl builds on Linux ;)
> >
> > -tom
> >
> >
> > On Tue, May 15, 2018 at 12:53 PM, Randell Jesup <rjesu...@jesup.org>
> > wrote:

Karl Tomlinson

May 20, 2018, 7:38:09 PM
On Fri, 18 May 2018 13:13:04 -0400, Chris AtLee wrote:

> IMO, it's not reasonable to keep CI builds around forever, so the question
> is then how long to keep them? 1 year doesn't quite cover a full ESR cycle,
> would 18 months be sufficient for most cases?
>
> Alternatively, we could investigate having different expiration policies
> for different type of artifacts. My assumption is that the Firefox binaries
> for the opt builds are the most useful over the long term, and that other
> build configurations and artifacts are less useful. How accurate is that
> assumption?

Having a subset of builds around for longer would be more useful
to me than having all builds available for a shorter period.

The nightly builds often include large numbers of changesets,
sometimes collected over several days, and so it becomes hard to
identify which code change modified a particular behavior.

I always use opt builds for regression testing, and so your
assumption is consistent with my experience.

I assume there are more pgo builds than nightly builds, but fewer
than all opt builds. If so, then having a long expiration policy
on pgo builds could be a helpful way to reduce storage costs but
maintain the most valuable builds.

Chris AtLee

May 28, 2018, 3:53:46 PM
to moz...@karlt.net, dev-platform
Here's a bit of a strawman proposal... What if we keep the
{mozilla-central,mozilla-inbound,autoland}-{linux,linux64,macosx64,win32,win64}{,-pgo}/
directories in tinderbox-builds for now, and delete all the others. Does
that cover the majority of the use cases for wanting to access these old
builds?

I'm guessing the historical builds for old esr branches aren't useful now.
Nor are the mozilla-aurora, mozilla-beta, mozilla-release, or b2g-inbound
builds.
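(For concreteness, the brace pattern in the proposal above expands to 30 directory names; generating them with a small sketch:)

```python
from itertools import product

branches = ["mozilla-central", "mozilla-inbound", "autoland"]
platforms = ["linux", "linux64", "macosx64", "win32", "win64"]
suffixes = ["", "-pgo"]

# Cross product mirrors shell brace expansion of
# {branch}-{platform}{,-pgo}/ in tinderbox-builds.
keep = [f"{b}-{p}{s}" for b, p, s in product(branches, platforms, suffixes)]
print(len(keep))          # 30 directories
print(keep[0], keep[-1])  # mozilla-central-linux autoland-win64-pgo
```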

L. David Baron

May 29, 2018, 2:21:43 PM
to dev-pl...@lists.mozilla.org, Chris AtLee, moz...@karlt.net
On Monday 2018-05-28 15:52 -0400, Chris AtLee wrote:
> Here's a bit of a strawman proposal...What if we keep the
> {mozilla-central,mozilla-inbound,autoland}-{linux,linux64,macosx64,win32,win64}{,-pgo}/
> directories in tinderbox-builds for now, and delete all the others. Does
> that cover the majority of the use cases for wanting to access these old
> builds?
>
> I'm guessing the historical builds for old esr branches aren't useful now.
> Nor are the mozilla-aurora, mozilla-beta, mozilla-release, or b2g-inbound
> builds.

This seems reasonable to me, with the one caveat that I think
b2g-inbound belongs in the other bucket. It was essentially used as
another peer to mozilla-inbound and autoland, and while many of the
changes landed there were b2g-only, many of them weren't, and may
have caused regressions that affect products that we still maintain.

Chris AtLee

May 31, 2018, 9:34:26 PM
to dev-platform
On Tue, 29 May 2018 at 14:21, L. David Baron <dba...@dbaron.org> wrote:
>
> On Monday 2018-05-28 15:52 -0400, Chris AtLee wrote:
> > Here's a bit of a strawman proposal...What if we keep the
> > {mozilla-central,mozilla-inbound,autoland}-{linux,linux64,macosx64,win32,win64}{,-pgo}/
> > directories in tinderbox-builds for now, and delete all the others. Does
> > that cover the majority of the use cases for wanting to access these old
> > builds?
> >
> > I'm guessing the historical builds for old esr branches aren't useful now.
> > Nor are the mozilla-aurora, mozilla-beta, mozilla-release, or b2g-inbound
> > builds.
>
> This seems reasonable to me, with the one caveat that I think
> b2g-inbound belongs in the other bucket. It was essentially used as
> another peer to mozilla-inbound and autoland, and while many of the
> changes landed there were b2g-only, many of them weren't, and may
> have caused regressions that affect products that we still maintain.

Ok, we can do that.

For mobile, I haven't heard anybody express a desire to keep around
old CI builds in
https://archive.mozilla.org/pub/mobile/tinderbox-builds/, so I'm
planning to have those deleted in July.