Canary on Linux


parse

Jul 12, 2011, 11:53:18 PM
to Chromium-discuss
Hi,

I use Chrome as my primary browser, and I am interested in getting the
canary build for Ubuntu Linux.

Is this possible? Even if I have to build it myself (preferably with
a bash script), that's fine.

Is there any reason there isn't a canary build already? Is it viewed
as not worth the build/upload requirements?

Caleb Eggensperger

Jul 13, 2011, 1:05:02 PM
to f.bag...@gmail.com, Chromium-discuss
I think one of the reasons it's not a high priority at the moment is that there is already a popular Ubuntu PPA of daily, auto-tested builds (which is basically what the canary is):
https://launchpad.net/~chromium-daily/+archive/ppa
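
(A minimal sketch of using that PPA, assuming the usual ppa:chromium-daily/ppa short name and a chromium-browser package; the Launchpad page above is authoritative:)

    # Add the daily-build PPA and install its package (names assumed).
    sudo add-apt-repository ppa:chromium-daily/ppa
    sudo apt-get update
    sudo apt-get install chromium-browser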



Pavel Ivanov

Jul 13, 2011, 1:19:40 PM
to cale...@gmail.com, f.bag...@gmail.com, Chromium-discuss
> there is already a popular PPA for Ubuntu of daily, auto-tested builds
> (which is basically what the canary is):
> https://launchpad.net/~chromium-daily/+archive/ppa

That is Chromium, not Chrome, so it's not exactly what Canary is. Do
you know what the current problems are in creating a Canary for Linux?


Pavel

PhistucK

Jul 13, 2011, 4:01:44 PM
to paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com, Chromium-discuss
I think there is an issue at crbug.com for that.
I think the only thing that prevents it is that the code does not support it.

If you know C++ and you are willing to help, ask on chromium-dev where you should start; they will probably give you some useful pointers and all the help you will need.
Though since this is a Google Chrome "feature" and not a Chromium feature, it might not be possible for you to take part in the implementation.
Anyway, ask around and the answer will be revealed.

☆PhistucK

Pavel Ivanov

Jul 13, 2011, 4:15:56 PM
to PhistucK, f.bag...@gmail.com, cale...@gmail.com, Chromium-discuss
The issue is http://code.google.com/p/chromium/issues/detail?id=38598, and
there's no info there on what's currently blocking it.

I'd like to help, but indeed I doubt it's possible to do that outside
of Google. In any case, the question about the blockers should probably go
to the owner of that issue or somebody on its CC list.


Pavel

Anthony LaForge

Jul 13, 2011, 4:16:49 PM
to phis...@gmail.com, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com, Chromium-discuss
Thanks for your interest in testing out the latest and greatest builds of Chrome.  There are logistical challenges that make creating/deploying Linux bundles prohibitively expensive to do on a daily basis, which effectively makes a Linux Canary fairly unlikely for the foreseeable future.  I'd encourage the community to use the PPA builds; those are fundamentally the same as what we would build, save for a couple of plugins (PDF, Flash, FFmpeg).

Kind Regards,

Anthony Laforge
Technical Program Manager
Mountain View, CA

lkcl

Aug 8, 2012, 2:02:32 PM
to chromium...@chromium.org, phis...@gmail.com, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com


On Wednesday, July 13, 2011 9:16:49 PM UTC+1, Anthony LaForge wrote:
Thanks for your interest in testing out the latest and greatest builds of Chrome.  There are logistical challenges that make creating/deploying Linux bundles prohibitively expensive to do on a daily basis, which effectively makes a Linux Canary fairly unlikely for the foreseeable future.  I'd encourage the community to use the PPA builds; those are fundamentally the same as what we would build, save for a couple of plugins (PDF, Flash, FFmpeg).


 ok.  so, what you're saying is that google doesn't do any gnu/linux builds.  what that means is that the code doesn't get built, and as a result, it gets broken.  i've just tried: nacl is entirely missing.  i'd done a 1.3gbyte download (it took 16 hours), which expands out to 9gb and includes a large number of DLLs and EXEs as well as containing 3gbyte of duplicated "third party" source code.  running "make" in the tools directory started pulling in *another* 1gbyte from a git repository to make nacl: that's just too much.

that and another build error means that i am not going to assist google by testing this build out, because google cannot be bothered to keep it up-to-date and makes the resources far too burdensome.  10 gigabytes, for goodness sake!  that's absolutely insane!  webkit, i know, is about a 1gbyte svn repository: that's acceptable.  but 10x that amount? not a chance.

what i *actually* wanted to do was to start developing early for WebRTC.  i have two computers.  one's running debian amd64 gnu/linux, the other MacOSX.  i'm going to have to try running chrome under Wine (!) to get 2 computers up and running so that i can do even a single simple LAN test between two machines.

come on guys, you can do better than this.

l.

lkcl

Aug 8, 2012, 2:06:58 PM
to chromium...@chromium.org, phis...@gmail.com, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com

> what i *actually* wanted to do was to start developing early for WebRTC.  i have two computers.  one's running debian amd64 gnu/linux, the other MacOSX.  i'm going to have to try running chrome under Wine (!) to get 2 computers up and running so that i can do even a single simple LAN test between two machines.


fixme:process:SetProcessShutdownParameters (00000280, 00000001): partial stub.
fixme:winhttp:WinHttpDetectAutoProxyConfigUrl discovery via DHCP not supported

nope.  GoogleUpdate.exe (or ChromeSetup.exe) is trying - completely unnecessarily - to find a proxy via DHCP.  there's absolutely no need to search for a proxy, because there isn't one.  so why the heck is this application failing just because one can't be found?

come on guys - you can do better than this.

Torne (Richard Coles)

Aug 9, 2012, 5:23:21 AM
to luke.l...@gmail.com, chromium...@chromium.org, phis...@gmail.com, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com
On 8 August 2012 19:02, lkcl <luke.l...@gmail.com> wrote:
> ok. so, what you're saying is that google doesn't do any gnu/linux builds.
> what that means is that the code doesn't get built, and as a result, it gets
> broken.

We build and test Chromium and Chrome for Linux on every change. We
just don't *release* a Chrome Canary channel for Linux for logistical
reasons. The Chrome Dev channel gets updated approximately weekly on
all platforms (more often than that if there are critical issues). The
Linux port is subject to the exact same review, test and QA procedure
as the Windows and Mac ports, and is broken no more often than the
other OSes.

> i've just tried: nacl is entirely missing. i'd done a 1.3gbyte
> download (it took 16 hours), which expands out to 9gb and includes a large
> number of DLLs and EXEs as well as containing 3gbyte of duplicated "third
> party" source code. running "make" in the tools directory started pulling
> in *another* 1gbyte from a git repository to make nacl: that's just too
> much.

Yes, the Chromium source is very very large, and this can be
inconvenient. Sorry, but there's not really anything we can do about
this.

> that and another build error means that i am not going to assist google by
> testing this build out because google cannot be bothered to keep it
> up-to-date, and makes the resources far too burdensome. 10 gigabytes for
> goodness sake! that's absolutely insane! webkit i know is a about a
> 1gbyte svn repository: that's acceptable. but 10x that amount? not a
> chance.
>
> what i *actually* wanted to do was to start developing early for WebRTC. i
> have two computers. one's running debian amd64 gnu/linux, the other MacOSX.
> i'm going to have to try running chrome under Wine (!) to get 2 computers up
> and running so that i can do even a single simple LAN test between two
> machines.

Why not just install Chrome Dev? (see
http://www.chromium.org/getting-involved/dev-channel for a link). It's
based on branch 1229 currently, which was created from trunk two days
ago, so compared to the Windows/Mac canaries it is only two days
behind. This is unlikely to make any difference to your ability to use
WebRTC.
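
(A minimal sketch of installing the Dev channel on Debian/Ubuntu, assuming the google-chrome-unstable package and its direct-download URL; the dev-channel page linked above has the official links:)

    # Download and install the Dev channel .deb (URL and package name assumed).
    wget https://dl.google.com/linux/direct/google-chrome-unstable_current_amd64.deb
    sudo dpkg -i google-chrome-unstable_current_amd64.deb
    sudo apt-get -f install   # pull in any missing dependencies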

>
> come on guys, you can do better than this.
>
> l.
>



--
Torne (Richard Coles)
to...@google.com

PhistucK

Aug 9, 2012, 5:38:00 AM
to Torne (Richard Coles), luke.l...@gmail.com, chromium...@chromium.org, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com
See my comments inline.

☆PhistucK



Furthermore, since the dev channel goes through a certain amount of QA, WebRTC will likely be more stable there than in a canary build.
A few weeks ago, for example, there were a few canary builds in which WebRTC was not functioning. I believe that malfunction did not make it into any dev release.

lkcl luke

Aug 9, 2012, 6:50:25 AM
to Torne (Richard Coles), chromium...@chromium.org, phis...@gmail.com, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com
On 8/9/12, Torne (Richard Coles) <to...@google.com> wrote:
> On 8 August 2012 19:02, lkcl <luke.l...@gmail.com> wrote:
>> ok. so, what you're saying is that google doesn't do any gnu/linux
>> builds.
>> what that means is that the code doesn't get built, and as a result, it
>> gets
>> broken.
>
> We build and test Chromium and Chrome for Linux on every change.

oh good! that's different - and good to hear. it doesn't quiiite
explain why i ran into build difficulties (bugs reported on the issue
tracker) but it's good to know that there's a comprehensive and
thorough test process.

> Yes, the Chromium source is very very large, and this can be
> inconvenient. Sorry, but there's not really anything we can do about
> this.

well... there is :)  you could use bitbake, which is designed to
perform cross-compiles - for example using mingw to do win32/64
builds, or even doing cross-compiles "natively" by running the
native gcc compiler, autoconf and other tools within a qemu
headless environment (that's just an absolutely awesome and amazing
trick). it's designed to download all - and importantly *only* - the
tools that are required, from source code.

it's also designed to allow you to apply "patches" in many different
forms to source packages, which in turn can be obtained from many
different sources, which means that you don't have to force people
to download multiple copies of the source code.

example: i already have a copy of webkit on my system. if you used
bitbake, i could drop in an "override" file which said "instead of
getting webkit from svn.webkit.org, please get it from
file:/opt/src/webkit/" and bitbake would then apply google's patches
to that, rather than... you see what i mean? yes, it would be my
responsibility to keep that already-checked-out copy of webkit
up-to-date, but that would be something i'd be prepared to do.

i'm keenly aware that you need to accelerate development somewhat of
various different packages, putting in bugfixes and adding in new
features, but there are ways to do that and there are ways *not* to do
that. look at what happened with the freeswitch team, for an absolute
nightmare scenario of how not to fork a project (freeswitch is a fork
of asterisk).

they made the severe, severe mistake of pulling in *every* dependency,
patching them because they had bugs and missing features at the time,
and effectively they ended up forking virtually every single dependent
library. including openssl (patched), libspeex (patched), and even
libapr (apache runtime)! i mean, don't get me wrong, it's fantastic
that the freeswitch team created an entirely new type of server type
(like the mpm_worker you see in apache2 but more advanced) but the
fact that they haven't submitted the patches upstream speaks volumes.

now, three years down the line they're in deep, deep shit, because of
course they have now become the de-facto maintainer of something like
*fifteen* forked libraries *in addition* to trying to be the
maintainers and developers of freeswitch, and have made themselves
responsible for all the security flaws and patches in all those
libraries! and of course, they don't have time to do that.

so yes - basically, like google, they "rolled their own" build system
and, like google's, it's nowhere near as advanced as things like
bitbake (or portage).

the other thing is that bitbake does builds, then cleans up after
itself automatically. so although e.g. compiling webkit may take
1.5gbytes to create the debug object files, once it's done the build
it *deletes* all the "work", leaving just the end result.

this "trick" allows me to build - even cross-compile - entire OSes in
a 10gb partition.

basically what i'm saying is that when i learned that google had
built their own "build chain" tools, i was amazed, puzzled and then
disappointed when i saw how far behind they are compared to the
state-of-the-art that's been around for such a long time now that it's
both very very mature as well as deeply comprehensive.

so, by developing its own tools and ignoring the state-of-the-art,
google is now reaping the results: an unsatisfactory build system
that prevents and prohibits free software developers from aiding and
assisting google in accelerating the development of products.

*shrug* :) you see what i'm getting at?

>> that and another build error means that i am not going to assist google
>> by
>> testing this build out because google cannot be bothered to keep it
>> up-to-date, and makes the resources far too burdensome. 10 gigabytes for
>> goodness sake! that's absolutely insane! webkit i know is a about a
>> 1gbyte svn repository: that's acceptable. but 10x that amount? not a
>> chance.
>>
>> what i *actually* wanted to do was to start developing early for WebRTC.
>> i
>> have two computers. one's running debian amd64 gnu/linux, the other
>> MacOSX.
>> i'm going to have to try running chrome under Wine (!) to get 2 computers
>> up
>> and running so that i can do even a single simple LAN test between two
>> machines.
>
> Why not just install Chrome Dev? (see
> http://www.chromium.org/getting-involved/dev-channel for a link). It's
> based on branch 1229 currently, which was created from trunk two days
> ago, so compared to the Windows/Mac canaries it is only two days
> behind. This is unlikely to make any difference to your ability to use
> WebRTC.

... i just found that out yesterday :)

so yes i can proceed, and actually started testing the sipml5.org
demo - which worked (hooray!). ok, it worked for a given value of
"work". what actually happened was that i found that the demo had
selected the 1st mic input, which is, on the macosx *and* on my amd64
debian gnu/linux system, a "LINE IN". so i have to use an AC97 USB
Audio device.

but.... the sipml5.org demo doesn't allow mic selection, nor does
chrome itself display any "default" mic settings at all (i looked -
couldn't find any).  so, yesterday i raised this issue along with a
potential solution (similar to the adobe flash "settings" system).

so that discussion is separate, and is all underway and so on, so
that's not the problem in and of itself.  the problem as far as this
thread is concerned?  i can't help you to code it up, because the
resources required to do so are too onerous.  i can't even help you
to do preliminary testing of patches prior to committing, using my
(not very unique, fortunately) environment.

in this example, let's say that google does a patch, tests it,
releases it, and then it gets out into the wild via the public
dev-releases.  we are *entirely* dependent on an extremely long
- 72 hours or greater - round-trip on development and testing
cycles!  and from what you're saying, testing for gnu/linux, my
primary system, would be even longer!  that's.... insane!

so, even though it is really essential that the "mic selection"
system be fixed (allowing users to do an experimental preview of inputs
prior to flipping over to a new input) - because if it's not added
there will be hundreds of thousands of people all across the world
over the next decade or so going "i can't get my mic working on
WebRTC, does anyone have an online place i can test, i need help, i
need help" - it's down to google to get this fixed: i can't help you
code it up, and i personally have to wait for google to fix it.

*shrug* :)

l.

Torne (Richard Coles)

Aug 9, 2012, 7:24:26 AM
to lkcl luke, chromium...@chromium.org, phis...@gmail.com, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com
On 9 August 2012 11:50, lkcl luke <luke.l...@gmail.com> wrote:
> On 8/9/12, Torne (Richard Coles) <to...@google.com> wrote:
>> On 8 August 2012 19:02, lkcl <luke.l...@gmail.com> wrote:
>>> ok. so, what you're saying is that google doesn't do any gnu/linux
>>> builds.
>>> what that means is that the code doesn't get built, and as a result, it
>>> gets
>>> broken.
>>
>> We build and test Chromium and Chrome for Linux on every change.
>
> oh good! that's different - and good to hear. it doesn't quiiite
> explain why i ran into build difficulties (bugs reported on the issue
> tracker) but it's good to know that there's a comprehensive and
> thorough test process.
>
>> Yes, the Chromium source is very very large, and this can be
>> inconvenient. Sorry, but there's not really anything we can do about
>> this.
>
> well... there is :) you could use bitbake, which is designed to
> perform cross-compiles such as using mingw so you could use it to do
> win32/64 builds (even doing cross-compiles "natively" by running the
> native gcc compiler, autoconf and other tools etc. within a qemu
> headless environment. that's just an absolutely awesome and amazing
> trick). it's designed to download all - and importantly *only* - the
> tools that are required, from source code.

Cross-compilation is not an option at all; we use the native compilers
for the different platforms on purpose. I also don't see how it's
relevant to what you were complaining about. The OS-specific source
repositories are only checked out if you're on that OS; the Linux
checkout doesn't (by default) include any of the repos that are only
needed to build for Windows or Mac, and the Linux build doesn't use
any win/mac binaries.

> it's also designed to allow you to do "patches" in many many different
> forms to source packages which again can be obtained from many many
> difference sources, which means that you don't have to force people to
> have to download multiple copies of the source code.
>
> example: i already have a copy of webkit on my system. if you used
> bitbake, i could drop in an "override" file which said "instead of
> getting webkit from svn.webkit.org, please get it from
> file:/opt/src/webkit/" and bitbake would then apply google's patches
> to that, rather than... you see what i mean? yes, it would be my
> responsibility to keep that already-checked-out copy of webkit
> up-to-date, but that would be something i'd be prepared to do.

We depend on *exact* versions of all of our dependencies, as listed in
src/DEPS. Unless you already have the exact same svn revision of
webkit checked out, we can't use it. So, this seems like it wouldn't
be worth the effort in general. If you can come up with a way to have
gclient reuse data you may already have, though, then feel free to
send a patch to depot_tools. This is more likely to work with git
checkouts than svn checkouts (e.g. hardlinking packs from an existing
git repo would work nicely and is supported by git), since svn
checkouts only contain a specific version that is unlikely to match
DEPS.
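
(As an illustration of the git mechanism mentioned here, a clone can borrow objects from an existing local repository so that only missing history is fetched; the paths and mirror URL below are assumptions:)

    # Reuse objects from an existing local WebKit clone (paths hypothetical).
    git clone --reference /opt/src/webkit \
        git://git.webkit.org/WebKit.git src/third_party/WebKit
    # --reference records the other repo in objects/info/alternates, so
    # objects already present locally are not downloaded again.  Run
    # 'git repack -a' afterwards if the referenced repo might be deleted.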

> i'm keenly aware that you need to accelerate development somewhat of
> various different packages, putting in bugfixes and adding in new
> features, but there are ways to do that and there are ways *not* to do
> that. look at what happened with the freeswitch team, for an absolute
> nightmare scenario of how not to fork a project (freeswitch is a fork
> of asterisk).

We don't fork most of our dependencies at all; instead we "garden"
them, which means roughly that we refer to a specific version of the
upstream copy, and then someone is in charge of rolling that forwards
incrementally and testing that this doesn't cause regressions. When
problems arise, we just don't roll those changes in until they're
fixed upstream.

A few of the deps contain patches, but this is the exception, not the
rule; we avoid it where possible. WebKit is an example of something
that we do not fork and simply check out specific versions from
upstream.

> they made the severe, severe mistake of pulling in *every* dependency,
> patching them because they had bugs and missing features at the time,
> and effectively they ended up forking virtually every single dependent
> library. including openssl (patched), libspeex (patched), and even
> libapr (apache runtime)! i mean, don't get me wrong, it's fantastic
> that the freeswitch team created an entirely new type of server type
> (like the mpm_worker you see in apache2 but more advanced) but the
> fact that they haven't submitted the patches upstream speaks volumes.
>
> now, three years down the line they're in deep, deep shit, because of
> course they have now become the de-facto maintainer of something like
> *fifteen* forked libraries *in addition* to trying to be the
> maintainers and developers of freeswitch, and have made themselves
> responsible for all the security flaws and patches in all those
> libraries! and of course, they don't have time to do that.

Yes, this is why we avoid forking things where possible (especially
things that are still developed upstream).

> so yes - basically, like google, they "rolled their own" build system
> and, like google's, it's nowhere near as advanced as things like
> bitbake (or portage).
>
> the other thing is that bitbake does builds, then cleans up after
> itself automatically. so although e.g. compiling webkit may take
> 1.5gbytes to create the debug object files, once it's done the build
> it *deletes* all the "work", leaving just the end result.

Virtually everyone who builds Chromium does so incrementally (i.e.
syncing to subsequent versions and building again), so cleaning up
intermediate files is not an option. I am familiar with bitbake, and
what it's doing is entirely different from how Chromium works; we are
*not* building an "OS" comprised of lots of separate binaries; we
build one giant binary with a link invocation that pulls in the
archived objects directly. In theory you could delete the individual
.o files that go into each target's static library, but the static
libraries themselves (which are, indeed, massive because they contain
full debug info) can't be deleted or else you can't build the main
binary any more, and if you did delete the .o files then every time
any file in that target was touched you'd have to recompile the whole
thing.

> this "trick" allows me to build - even cross-compile - entire OSes in
> a 10gb partition.
>
> basically what i'm saying is that i was, when i learned that google
> had built their own "build chain" tools i was amazed, puzzled and then
> disappointed when i saw how behind that they are when compared to the
> state-of-the-art that's been around for such a long time now that it's
> both very very mature as well as deeply comprehensive.
>
> so what google has been developing and ignoring the state-of-the-art
> tools is what google is now reaping the results from: an
> unsatisfactory build system that prevents and prohibits free software
> developers from aiding and assisting google to accelerate the
> development of products.
>
> *shrug* :) you see what i'm getting at?

I think you are entirely misunderstanding our build system. We are not
doing anything comparable to what bitbake does. We build a single main
binary in one go from a source tree that *happens* to be composed of
sources from multiple different projects. There is just one Makefile
(or equivalent for other build systems) that knows how to build the
entire thing, which is how we ensure that dependencies are correct on
rebuilds.

We're not inventing our own "build chain" tools at all really: we are
using make, the same as everyone else :) There is no meta-build tool
equivalent here; there is nothing that does the same job as bitbake.

We have our own makefile generator (gyp), the reasons for which are
numerous (other makefile generators typically support fewer platforms,
or are unwieldy to use for something so large, for a start), but all
it does is generate makefiles (or xcode projects, or msvc projects, or
whatever). It only runs once after you sync the source; it's not
actually a build-time tool.

If you want to be able to contribute directly to the Chromium code,
then build Chromium. I can assure you that in general, if you follow
the instructions it does build, unless you happen to be unlucky enough
to sync a revision that's between it being broken and it being
reverted (there is a way to avoid this by syncing to the last known
good release, which was at least green on the buildbots). If you have
problems, it's likely that your environment is not configured
correctly. If there's some part of the setup that's not adequately
explained, we'd like to know so we can fix the instructions, but many,
many linux users manage to sync and build the Chromium trunk
successfully (including virtually all of the Chromium committers, many
of whom develop primarily on Linux and build it numerous times per
day).
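
(For concreteness, a minimal sketch of the checkout-and-build flow of that era; the trunk URL and make invocation are assumptions, and the official build instructions are authoritative:)

    # Get depot_tools on PATH, then check out and build (details assumed).
    export PATH="$PATH:$HOME/depot_tools"
    gclient config http://src.chromium.org/svn/trunk/src
    gclient sync              # fetches src/ plus everything listed in src/DEPS
    cd src
    make -j8 chrome BUILDTYPE=Release   # Makefiles already generated by gyp during sync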

Having a Linux canary wouldn't help you actually contribute code; it
would slightly reduce the lag between us committing changes and you
seeing them, but you still need a source checkout of trunk and the
ability to build it if you're actually going to write patches or test
uncommitted patches.

> so, even though it is really essential that the "mic selection"
> system be fixed (allowing users to do experimental preview of inputs
> prior to flipping over to a new input), because if it's not added
> there will be hundreds of thousands of people all across the world
> over the next decade or so going "i can't get my mic working on
> WebRTC, does anyone have an online place i can test, i need help, i
> need help", it's down to google to get this fixed: i can't help you
> code it up, and i personally have to wait for google to fix it.

You *can* help us code it. We accept lots of patches from external
contributors, and many Chromium committers are not Google employees.
If you can't get your environment set up, come ask us for help on IRC,
or explain the problems here on the list. I'm on #chromium and
#chromium-support basically all day (UK time) and if you can concisely
explain the problems I will do my best to help you resolve them.

lkcl luke

Aug 9, 2012, 8:38:46 AM
to Torne (Richard Coles), chromium...@chromium.org, phis...@gmail.com, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com
> Cross-compilation is not an option at all; we use the native compilers
> for the different platforms on purpose.

yes.  i was just mentioning it as an option.  bitbake helps
developers who may not have the exact (native) setup by using qemu to
make it *look* like it's a native compile. this is incredibly
powerful.

i'm aware for example that in 2005 when amd64 wasn't as common as it
is now i had enormous difficulties doing 32-bit ARM cross-compiles on
a 64-bit AMD64 system, because running ./configure *locally* of course
detected completely the wrong stuff.

now of course, 8 years later, that problem's entirely gone, thanks to
the qemu trick. i *can* safely compile 32-bit ARM systems on a 64-bit
AMD64 system because the entire compilation is done in a qemu-arm
environment.

likewise, i could even do that the other way round! it would be
truly utterly dreadfully slow, but perfectly possible: i could compile
a 64-bit AMD64 binary using a 32-bit ARM host, confident that it would
actually work, because it could even be actually fired up and even the
"make tests" or even some interactive testing carried out in the
qemu-amd64 environment.

you see what i mean? and this is all handled *automatically*! no
need to force people to use ubuntu, or even a specific _version_ of
ubuntu. with millions of source code files on my system, there's no
longer room on a 250gb hard drive to put up a chroot ubuntu
environment in order to carry out the build.


> I also don't see how it's
> relevant to what you were complaining about. The OS-specific source
> repositories are only checked out if you're on that OS; the Linux
> checkout doesn't (by default) include any of the repos that are only
> needed to build for Windows or Mac, and the Linux build doesn't use
> any win/mac binaries.

... so - coming back to the issue at hand: why are there 2.7gbytes of
"3rd party" sources in the chrome svn repository, much of which
includes DLLs and EXEs for the w32 platform? that's 2.7 gigabytes of
space that i *could* use for building... but can't. and i don't even
know if i can safely delete it all (it might be needed, i just don't
know), but even if i *did* delete it, the next "svn update" would
bring it all back!
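
(a quick, hedged way to see where the space in a checkout actually goes:)

    # List the largest entries under third_party in an existing checkout.
    du -sh src/third_party/* | sort -rh | head -20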

>> it's also designed to allow you to do "patches" in many many different
>> forms to source packages which again can be obtained from many many
>> difference sources, which means that you don't have to force people to
>> have to download multiple copies of the source code.
>>
>> example: i already have a copy of webkit on my system. if you used
>> bitbake, i could drop in an "override" file which said "instead of
>> getting webkit from svn.webkit.org, please get it from
>> file:/opt/src/webkit/" and bitbake would then apply google's patches
>> to that, rather than... you see what i mean? yes, it would be my
>> responsibility to keep that already-checked-out copy of webkit
>> up-to-date, but that would be something i'd be prepared to do.
>
> We depend on *exact* versions of all of our dependencies, as listed in
> src/DEPS. Unless you already have the exact same svn revision of
> webkit checked out, we can't use it.

yes. what bitbake does is, you have to specify the exact svn (or
git, or bzr, or hg) revision required. for safety you also add an MD5
and a SHA-1 checksum. so, having obtained all that from the
repository, it then caches that as a tarball, which is then unpacked
and cached.

> So, this seems like it wouldn't
> be worth the effort in general.

apologies: i didn't mention the bit about the revision number
capability of bitbake, in order to keep what was already becoming
quite a lengthy post down somewhat.

> If you can come up with a way to have
> gclient reuse data you may already have, though, then feel free to
> send a patch to depot_tools.

naah. not a chance. it's just not worth my time or effort to do so.
that's not a criticism: it's just being realistic. i have some
specific tasks and goals to follow: if i spend all my time patching
google's tools, i won't have time to achieve those goals.

however, if you're offering a financial incentive in the form of a
paid-for contract, money which i could then use to accelerate and
complete my goals, then yes i'm interested.

> This is more likely to work with git
> checkouts than svn checkouts (e.g. hardlinking packs from an existing
> git repo would work nicely and is supported by git), since svn
> checkouts only contain a specific version that is unlikely to match
> DEPS.

bitbake solves all that already, by obtaining tarballs
[and caching them, and checksumming them].

... i did say that they've had over a decade to refine bitbake! :)

>> i'm keenly aware that you need to accelerate development somewhat of
>> various different packages, putting in bugfixes and adding in new
>> features, but there are ways to do that and there are ways *not* to do
>> that. look at what happened with the freeswitch team, for an absolute
>> nightmare scenario of how not to fork a project (freeswitch is a fork
>> of asterisk).
>
> We don't fork most of our dependencies at all; instead we "garden"
> them, which means roughly that we refer to a specific version of the
> upstream copy, and then someone is in charge of rolling that forwards
> incrementally and testing that this doesn't cause regressions. When
> problems arise, we just don't roll those changes in until they're
> fixed upstream.

... mmm :) so what are those sources doing in the chrome svn *at
all*? repo (primarily used for android) has it somewhat right - it
goes off and downloads individual packages off the internet. why
isn't chrome developed in the same/similar way?

> A few of the deps contain patches, but this is the exception, not the
> rule; we avoid it where possible. WebKit is an example of something
> that we do not fork and simply check out specific versions from
> upstream.

yeah. at over a gigabyte just for the git repo that's a veery
sensible move! still doesn't explain why webkit is 40% of the "third
party" source code that's in the chrome svn tree though.

>> now, three years down the line they're in deep, deep shit, because of
>> course they have now become the de-facto maintainer of something like
>> *fifteen* forked libraries *in addition* to trying to be the
>> maintainers and developers of freeswitch, and have made themselves
>> responsible for all the security flaws and patches in all those
>> libraries! and of course, they don't have time to do that.
>
> Yes, this is why we avoid forking things where possible (especially
> things that are still developed upstream).

whew :)

>> so yes - basically, like google, they "rolled their own" build system
>> and, like google's, it's nowhere near as advanced as things like
>> bitbake (or portage).
>>
>> the other thing is that bitbake does builds, then cleans up after
>> itself automatically. so although e.g. compiling webkit may take
>> 1.5gbytes to create the debug object files, once it's done the build
>> it *deletes* all the "work", leaving just the end result.
>
> Virtually everyone who builds Chromium does so incrementally (i.e.
> syncing to subsequent versions and building again), so cleaning up
> intermediate files is not an option. I am familiar with bitbake, and
> what it's doing is entirely different from how Chromium works;

yes... because you're comparing a build tool with the output results
*of* a build tool :) sorry, had to point that out....

> we are
> *not* building an "OS" comprised of lots of separate binaries,

just because you're not [potentially / generally] building an "OS"
doesn't mean that it's not a more appropriate tool. even if you used
bitbake to compile one and only one application, that application
would pull in dependencies, those dependencies would pull in
dependencies and so on.

this is what bitbake is good at: working backwards to work out the
dependencies, until finally the "one dependency" - which for most
people will be "make me a complete OS" - is satisfied.

substitute "make me a complete chrome application" in there and you
find it's a good tool for the job.


> we
> build one giant binary with a link invocation that pulls in the
> archived objects directly. In theory you could delete the individual
> .o files that go into each target's static library, but the static
> libraries themselves (which are, indeed, massive because they contain
> full debug info) can't be deleted or else you can't build the main
> binary any more, and if you did delete the .o files then every time
> any file in that target was touched you'd have to recompile the whole
> thing..

yes. so there are libraries (dependencies) that need to be built;
these also get "cached" by bitbake for you, so that they need not be
built again. the only time that they would need to be rebuilt would
be if one of the files (say, in a dependent library) changed.

you're familiar with the process (but please allow me to describe it
for the benefit of others) - under these circumstances, you would do a
commit; the commit would change the git/svn revision number; this
would require a change to the .bb file for that dependency, adding in
that specific git/svn revision; this would be the "trigger" to bitbake
to junk the cache for that dependent library, and do a recompile.
again: once that completed, the compiled (static or dynamic) library
would be made available to its dependencies.

anyone familiar with repo should recognise this: it's the same thing
as the XML files.

but yes, *within* a package, you're absolutely right: it's an
all-or-nothing deal. actually what i've frequently found when using
bitbake for development is that on a per-package basis (especially the
one i'm working on) i have to switch *off* the "delete the .o cache
after completion" simply because otherwise it takes too long.

in that way, i can substitute time for disk space... but *on a
per-package basis*.

so, coming back to a concrete example: i wouldn't mind downloading
nacl's source code (1gbyte estimated?) *if* after that dependency was
completely built, the binaries were installed.

however, after downloading chrome (1.3gb), unpacking it (9gb), and
even then going in and removing some of the DLLs and .EXEs in
src/third_party, i *still* only had 800mb free - not enough to run
"make" in the nacl subdirectory - not even enough to get the source
code, let alone build it.


>> this "trick" allows me to build - even cross-compile - entire OSes in
>> a 10gb partition.
>>
>> basically what i'm saying is that i was, when i learned that google
>> had built their own "build chain" tools i was amazed, puzzled and then
>> disappointed when i saw how behind that they are when compared to the
>> state-of-the-art that's been around for such a long time now that it's
>> both very very mature as well as deeply comprehensive.
>>
>> so what google has been developing and ignoring the state-of-the-art
>> tools is what google is now reaping the results from: an
>> unsatisfactory build system that prevents and prohibits free software
>> developers from aiding and assisting google to accelerate the
>> development of products.
>>
>> *shrug* :) you see what i'm getting at?
>
> I think you are entirely misunderstanding our build system. We are not
> doing anything comparable to what bitbake does.

yes - precisely.

> We build a single main
> binary in one go from a source tree that *happens* to be composed of
> sources from multiple different projects.

... where those sources are dropped into the chrome svn repository,
such that i have absolutely *no choice* but to hit "make". if you
used bitbake, i could at least build *some* of those "multiple
different projects", then go in and do on an individual basis "rm -fr
work/".

in this way, i would be able to keep the build output down to
manageable levels.


> There is just one Makefile
> (or equivalent for other build systems) that knows how to build the
> entire thing, which is how we ensure that dependencies are correct on
> rebuilds.

like buildroot.

> We're not inventing our own "build chain" tools at all really: we are
> using make, the same as everyone else :)

:) yes. i believe it was... ach... what's his name. frizzy hair.
einstein! was it einstein? anyway, you should recognise the quote:
"things should be simple, but not so simple that they don't work".

> We have our own makefile generator (gyp), the reasons for which are
> numerous (other makefile generators typically support less platforms,
> or are unwieldy to use for something so large, for a start),

*sigh* yeah i've run the gauntlet of cross-compiling autoconf'd
packages under mingw32... running under wine (!). don't laugh or
start coughing loudly, maybe i should have warned you to put the
coffee down first, sorry - but... christ... it _was_ truly insane.
each autoconf "test" took literally 10-30 seconds to run. i was
compiling python for mingw32 under wine (don't ask...) - completing
each ./configure run - just the ./configure run - took about two
hours. in the end i gave up, #ifdef'd vast sections of configure.in
and used a hard-coded config.h file.

so, i do sympathise here. it does however leave you entirely
divorced from a vast amount of cumulative expertise, and it will
burden you with the task of porting chrome to each and every platform.
with ARM beginning to take off, providing the headache of armel,
armhf, the abortion-that-is-the-raspberry-pi *and* then also the
Marvell PXA "ARM clones", and also ARM64 coming up, this is something
of a "heads up": without it, it's going to be headaches all round if
you're not careful :)

> [.... read and appreciated....]

> You *can* help us code it. We accept lots of patches from external
> contributors, and many Chromium committers are not Google employees.
> If you can't get your environment set up, come ask us for help on IRC,
> or explain the problems here on the list. I'm on #chromium and
> #chromium-support basically all day (UK time) and if you can concisely
> explain the problems I will do my best to help you resolve them.

thank you richard. i've raised a couple of bugreports already,
but... yeah, there's enough of WebRTC up-and-running for me to be able
to move on to completing the goal without needing to resort to source
code builds.

so, i'm really grateful for the opportunity to share some insights
with you, here - but i have to move on, actually get something done :)

thanks for your time.

l.

Torne (Richard Coles)

Aug 9, 2012, 9:03:40 AM
to lkcl luke, chromium...@chromium.org, phis...@gmail.com, paiv...@gmail.com, f.bag...@gmail.com, cale...@gmail.com
> you see what i mean?  and this is all handled *automatically*!  no
> need to force people to use ubuntu, or even a specific _version_ of
> ubuntu.

You don't need to use ubuntu. Chromium builds just fine on a wide
variety of linux distros. We happen to use ubuntu, so we have
convenience scripts to apt-get the required packages for building from
the ubuntu archives, but you don't need to use those.
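
(A minimal sketch, assuming the convenience script referred to is build/install-build-deps.sh in the source tree:)

    # On Ubuntu, install the packages needed to build (script name assumed).
    cd src
    ./build/install-build-deps.sh
    # On other distros, install the equivalent packages with your own
    # package manager instead.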

>
>> I also don't see how it's
>> relevant to what you were complaining about. The OS-specific source
>> repositories are only checked out if you're on that OS; the Linux
>> checkout doesn't (by default) include any of the repos that are only
>> needed to build for Windows or Mac, and the Linux build doesn't use
>> any win/mac binaries.
>
> ... so - coming back to the issue at hand: why are there 2.7gbytes of
> "3rd party" sources in the chrome svn repository, much of which
> includes DLLs and EXEs for the w32 platform? that's 2.7 gigabytes of
> space that i *could* use for building... but can't. and i don't even
> know if i can safely delete it all (it might be needed, i just don't
> know), but even if i *did* delete it, the next "svn update" would
> bring it all back!

The third_party sources are required for building, because they are
compiled into the binary. I'm not sure what dlls/exes you are talking
about; there are a few in the tree, but they only add up to 22MB;
they're there because they're so small and few that it's not worth
putting them in separate repositories that can be excluded.

>>> it's also designed to allow you to do "patches" in many many different
>>> forms to source packages which again can be obtained from many many
>>> difference sources, which means that you don't have to force people to
>>> have to download multiple copies of the source code.
>>>
>>> example: i already have a copy of webkit on my system. if you used
>>> bitbake, i could drop in an "override" file which said "instead of
>>> getting webkit from svn.webkit.org, please get it from
>>> file:/opt/src/webkit/" and bitbake would then apply google's patches
>>> to that, rather than... you see what i mean? yes, it would be my
>>> responsibility to keep that already-checked-out copy of webkit
>>> up-to-date, but that would be something i'd be prepared to do.
>>
>> We depend on *exact* versions of all of our dependencies, as listed in
>> src/DEPS. Unless you already have the exact same svn revision of
>> webkit checked out, we can't use it.
>
> yes. what bitbake does is, you have to specify the exact svn (or
> git, or bzr, or hg) revision required. for safety you also add an MD5
> and a SHA-1 checksum. so, having obtained all that from the
> repository, it then caches that as a tarball, which is then unpacked
> and cached.

That doesn't change the fact that there is, generally, no point in caching
this, because they change constantly. Our version of webkit changes
multiple times per day; saving tarballs would be a huge *waste* of
disk space, not a saving.

>> So, this seems like it wouldn't
>> be worth the effort in general.
>
> apologies: i didn't mention the bit about the revision number
> capability of bitbake, in order to keep what was already becoming
> quite a lengthy post down somewhat.
>
>> If you can come up with a way to have
>> gclient reuse data you may already have, though, then feel free to
>> send a patch to depot_tools.
>
> naah. not a chance. it's just not worth my time or effort to do so.
> that's not a criticism: it's just being realistic. i have some
> specific tasks and goals to follow: if i spend all my time patching
> google's tools, i won't have time to achieve those goals.
>
> however, if you're offering a financial incentive in the form of a
> paid-for contract, money which i could then use to accelerate and
> complete my goals, then yes i'm interested.
>
>> This is more likely to work with git
>> checkouts than svn checkouts (e.g. hardlinking packs from an existing
>> git repo would work nicely and is supported by git), since svn
>> checkouts only contain a specific version that is unlikely to match
>> DEPS.
>
> bitbake solves all that, already, through the use of obtaining
> tarballs [and cacheing them, and checksumming them].

Tarballs would be far, far less efficient since the revisions change constantly.

> ... i did say that they've had over a decade to refine bitbake! :)
>
>>> i'm keenly aware that you need to accelerate development somewhat of
>>> various different packages, putting in bugfixes and adding in new
>>> features, but there are ways to do that and there are ways *not* to do
>>> that. look at what happened with the freeswitch team, for an absolute
>>> nightmare scenario of how not to fork a project (freeswitch is a fork
>>> of asterisk).
>>
>> We don't fork most of our dependencies at all; instead we "garden"
>> them, which means roughly that we refer to a specific version of the
>> upstream copy, and then someone is in charge of rolling that forwards
>> incrementally and testing that this doesn't cause regressions. When
>> problems arise, we just don't roll those changes in until they're
>> fixed upstream.
>
> ... mmm :) so what are those sources doing in the chrome svn *at
> all*? repo (primarily used for android) has it somewhat right - it
> goes off and downloads individual packages off the internet. why
> isn't chrome developed in the same/similar way?

Very few of them *are* in the chrome svn. gclient checks out ~70
different trees from different places to assemble the full checkout;
you can see the URLs in src/DEPS. We have local mirrors of some
repositories on our svn server because upstream's server is
slow/unreliable/etc, and we have local copies of the things that we do
have forks of, but most things are checked out from upstream directly
(e.g. webkit). Some very small things are just stored directly in the
main tree to avoid the overhead of having yet another repository, but
those are generally things that hardly ever change.

>> A few of the deps contain patches, but this is the exception, not the
>> rule; we avoid it where possible. WebKit is an example of something
>> that we do not fork and simply check out specific versions from
>> upstream.
>
> yeah. at over a gigabyte just for the git repo that's a veery
> sensible move! still doesn't explain why webkit is 40% of the "third
> party" source code that's in the chrome svn tree though.

You need WebKit to build chromium, so gclient checks out the correct
version of WebKit. WebKit happens to be very large, so accounts for a
lot of the size of your checkout. There's no mystery :)

> substitute "make me a complete chrome application" in there and you
> find it's a good tool for the job.

It really, really isn't. You do not understand how our build works.

>> we
>> build one giant binary with a link invocation that pulls in the
>> archived objects directly. In theory you could delete the individual
>> .o files that go into each target's static library, but the static
>> libraries themselves (which are, indeed, massive because they contain
>> full debug info) can't be deleted or else you can't build the main
>> binary any more, and if you did delete the .o files then every time
>> any file in that target was touched you'd have to recompile the whole
>> thing..
>
> yes. so there are libraries (dependencies) that need to be built;
> these also get "cached" by bitbake for you, so that they need not be
> built again. the only time that they would need to be rebuilt would
> be if one of the files (say, in a dependent library) changed.

Yes, but many of them change every day. We roll to a new version of
webkit every few hours, and webkit is by far the largest third party
component of chromium, so the saving from optimising this for things
that change rarely would be minuscule by comparison to the time it
takes to rebuild webkit and chromium itself.

Our current build system *already* doesn't rebuild things unless
needed, so this isn't an improvement :)

> you're familiar with the process (but please allow me to describe it
> for the benefit of others) - under these circumstances, you would do a
> commit; the commit would change the git/svn revision number; this
> would require a change to the .bb file for that dependency, adding in
> that specific git/svn revision; this would be the "trigger" to bitbake
> to junk the cache for that dependent library, and do a recompile.
> again: once that completed, the compiled (static or dynamic) library
> would be made available to its dependencies.

Yes. This is exactly how our build already works, except ours is
*more* efficient because we don't start each target from scratch every
time. Someone checks in a change to DEPS, gclient syncs new code, the
makefiles get regenerated, and the sources that have actually changed
get rebuilt. We don't rebuild the whole of webkit every time, we only
build the specific files in webkit that have changed.

> like buildroot.

No, not at all like buildroot. Buildroot is a meta-build system as
well: it happens to use make for its top level build, but it still
invokes each package's own build system (i.e. it forks and runs
completely separate configure/make/make install processes for each
package). We do not do this. We have our own build files for *every
single thing* and they are all generated together as one giant
makefile that specifies how to compile every individual file across
all dependencies. We do not run configure/make for our libraries. They
are all built as if all the sources were part of a single project to
begin with. This makes much stronger guarantees that builds are
correct by considering all dependencies between projects at build time
in the way make normally does, instead of only the ones that the human
designing the meta-build system thought of.

We already have a perfectly functional ARM port; this was done ages
ago. The build system was not a significant issue in making this work;
gyp handles it pretty much fine. The difficult parts are things that
are innately platform specific (e.g. making V8 generate native code
for the new architecture).

>> [.... read and appreciated....]
>
>> You *can* help us code it. We accept lots of patches from external
>> contributors, and many Chromium committers are not Google employees.
>> If you can't get your environment set up, come ask us for help on IRC,
>> or explain the problems here on the list. I'm on #chromium and
>> #chromium-support basically all day (UK time) and if you can concisely
>> explain the problems I will do my best to help you resolve them.
>
> thank you richard. i've raised a couple of bugreports already,
> but... yeah, there's enough of WebRTC up-and-running for me to be able
> to move on to completing the goal without needing to resort to source
> code builds.

Raising bugs for things that are virtually guaranteed to be
misconfigurations in your build environment is not particularly
useful. :)

> so, i'm really grateful for the opportunity to share some insights
> with you, here - but i have to move on, actually get something done :)
>
> thanks for your time.
>
> l.

I'm afraid your "insights" are based on a misunderstanding of how our
build system works, but I'm glad you have found a way to do what you
need to do.