yes. i was just mentioning it as an option. bitbake helps
developers who don't have the exact (native) setup by using qemu to
make a cross-compile *look* like a native compile. this is
incredibly powerful.
for example, back in 2005, when amd64 wasn't as common as it is
now, i had enormous difficulty doing 32-bit ARM cross-compiles on a
64-bit AMD64 system, because running ./configure *locally* of course
detected completely the wrong things.
now of course, 8 years later, that problem's entirely gone, thanks to
the qemu trick. i *can* safely compile 32-bit ARM systems on a 64-bit
AMD64 system because the entire compilation is done in a qemu-arm
environment.
likewise, i could even do that the other way round! it would be
truly, utterly, dreadfully slow, but perfectly possible: i could
compile a 64-bit AMD64 binary on a 32-bit ARM host, confident that it
would actually work, because the result could actually be fired up,
and the "make tests" or even some interactive testing carried out, in
the qemu-amd64 environment.
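to give a rough idea of how that looks from the user's side, here
is a sketch of the relevant line of an openembedded-style local.conf
(the machine name is just one of the stock qemu example machines,
nothing chrome-specific; the comments are my understanding of the
mechanism, not anything authoritative):

    MACHINE = "qemuarm"   # build 32-bit ARM output on an amd64 host
    # compilation itself is done with a cross-toolchain that bitbake
    # builds first; any step that genuinely has to *execute* target
    # code (package post-installs, for example) runs under qemu
    # user-mode emulation instead of on the host.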
you see what i mean? and this is all handled *automatically*! no
need to force people to use ubuntu, or even a specific _version_ of
ubuntu. with millions of source-code files on my system, there's
simply no room left on a 250gb hard drive to set up a chroot'd ubuntu
environment just to carry out the build.
> I also don't see how it's
> relevant to what you were complaining about. The OS-specific source
> repositories are only checked out if you're on that OS; the Linux
> checkout doesn't (by default) include any of the repos that are only
> needed to build for Windows or Mac, and the Linux build doesn't use
> any win/mac binaries.
... so - coming back to the issue at hand: why are there 2.7 gbytes
of "3rd party" sources in the chrome svn repository, much of which
consists of DLLs and EXEs for the win32 platform? that's 2.7 gbytes
of space that i *could* use for building... but can't. i don't even
know whether i can safely delete it all (it might be needed, i just
don't know), and even if i *did* delete it, the next "svn update"
would bring it all back!
>> it's also designed to allow you to do "patches" in many many different
>> forms to source packages which again can be obtained from many many
>> difference sources, which means that you don't have to force people to
>> have to download multiple copies of the source code.
>>
>> example: i already have a copy of webkit on my system. if you used
>> bitbake, i could drop in an "override" file which said "instead of
>> getting webkit from svn.webkit.org, please get it from
>> file:/opt/src/webkit/" and bitbake would then apply google's patches
>> to that, rather than... you see what i mean? yes, it would be my
>> responsibility to keep that already-checked-out copy of webkit
>> up-to-date, but that would be something i'd be prepared to do.
>
> We depend on *exact* versions of all of our dependencies, as listed in
> src/DEPS. Unless you already have the exact same svn revision of
> webkit checked out, we can't use it.
yes. what bitbake does is require you to specify the exact svn (or
git, or bzr, or hg) revision; for safety you also add md5 and sha-256
checksums for tarball fetches. having obtained the source from the
repository, it caches it locally as a tarball, which is then unpacked
into the work area for the build.
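roughly, the relevant lines of a .bb recipe look like this (the
url, revision and checksums here are placeholders i've invented for
illustration, not chromium's or webkit's real values):

    SRC_URI = "svn://svn.webkit.org/repository/webkit;module=trunk;protocol=http"
    SRCREV  = "140000"     # pin the fetch to one exact svn revision
    # plain tarball fetches are pinned by checksum instead:
    #   SRC_URI = "http://example.org/somelib-1.2.tar.gz"
    #   SRC_URI[md5sum]    = "<md5 of the tarball>"
    #   SRC_URI[sha256sum] = "<sha-256 of the tarball>"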
> So, this seems like it wouldn't
> be worth the effort in general.
apologies: i didn't mention bitbake's revision-pinning capability
earlier, in order to keep what was already becoming quite a lengthy
post down somewhat.
> If you can come up with a way to have
> gclient reuse data you may already have, though, then feel free to
> send a patch to depot_tools.
naah. not a chance. it's just not worth my time or effort to do so.
that's not a criticism: it's just being realistic. i have some
specific tasks and goals to follow: if i spend all my time patching
google's tools, i won't have time to achieve those goals.
however, if you're offering a financial incentive in the form of a
paid-for contract, money which i could then use to accelerate and
complete my goals, then yes i'm interested.
> This is more likely to work with git
> checkouts than svn checkouts (e.g. hardlinking packs from an existing
> git repo would work nicely and is supported by git), since svn
> checkouts only contain a specific version that is unlikely to match
> DEPS.
bitbake already solves all that, by fetching sources as tarballs
[and caching them, and checksumming them].
... i did say that they've had over a decade to refine bitbake! :)
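for anyone following along, reusing an already-downloaded copy is a
couple of lines in local.conf (or a per-recipe override file); the
paths here are mine, purely illustrative:

    DL_DIR = "/opt/src/downloads"   # where fetched tarballs are cached
    # look in a local mirror first, before ever touching the network:
    PREMIRRORS_prepend = "\
        svn://.*/.*  file:///opt/src/mirror/ \n \
        git://.*/.*  file:///opt/src/mirror/ \n"
    # or, in a per-recipe override, point the fetch straight at an
    # existing checkout instead of svn.webkit.org:
    #   SRC_URI = "file:///opt/src/webkit/"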
>> i'm keenly aware that you need to accelerate development somewhat of
>> various different packages, putting in bugfixes and adding in new
>> features, but there are ways to do that and there are ways *not* to do
>> that. look at what happened with the freeswitch team, for an absolute
>> nightmare scenario of how not to fork a project (freeswitch is a fork
>> of asterisk).
>
> We don't fork most of our dependencies at all; instead we "garden"
> them, which means roughly that we refer to a specific version of the
> upstream copy, and then someone is in charge of rolling that forwards
> incrementally and testing that this doesn't cause regressions. When
> problems arise, we just don't roll those changes in until they're
> fixed upstream.
... mmm :) so what are those sources doing in the chrome svn *at
all*? repo (primarily used for android) has it somewhat right - it
goes off and downloads individual packages off the internet. why
isn't chrome developed in the same/similar way?
> A few of the deps contain patches, but this is the exception, not the
> rule; we avoid it where possible. WebKit is an example of something
> that we do not fork and simply check out specific versions from
> upstream.
yeah. at over a gigabyte just for the git repo, that's a very
sensible move! it still doesn't explain why webkit is 40% of the
"third party" source code in the chrome svn tree, though.
>> now, three years down the line they're in deep, deep shit, because of
>> course they have now become the de-facto maintainer of something like
>> *fifteen* forked libraries *in addition* to trying to be the
>> maintainers and developers of freeswitch, and have made themselves
>> responsible for all the security flaws and patches in all those
>> libraries! and of course, they don't have time to do that.
>
> Yes, this is why we avoid forking things where possible (especially
> things that are still developed upstream).
whew :)
>> so yes - basically, like google, they "rolled their own" build system
>> and, like google's, it's nowhere near as advanced as things like
>> bitbake (or portage).
>>
>> the other thing is that bitbake does builds, then cleans up after
>> itself automatically. so although e.g. compiling webkit may take
>> 1.5gbytes to create the debug object files, once it's done the build
>> it *deletes* all the "work", leaving just the end result.
>
> Virtually everyone who builds Chromium does so incrementally (i.e.
> syncing to subsequent versions and building again), so cleaning up
> intermediate files is not an option. I am familiar with bitbake, and
> what it's doing is entirely different from how Chromium works;
yes... because you're comparing a build tool with the output *of* a
build tool :) sorry, had to point that out....
> we are
> *not* building an "OS" comprised of lots of separate binaries,
just because you're not [potentially / generally] building an "OS"
doesn't mean that it's not a more appropriate tool. even if you used
bitbake to compile one and only one application, that application
would pull in dependencies, those dependencies would pull in
dependencies and so on.
this is what bitbake is good at: working backwards to work out the
dependencies, until finally the "one dependency" - which for most
people will be "make me a complete OS" - is satisfied.
substitute "make me a complete chrome application" in there and you
find it's a good tool for the job.
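as a sketch - with the recipe and dependency names invented purely
for illustration, not taken from chrome's actual dependency list:

    # hypothetical top-level recipe, e.g. chromium_25.bb
    DESCRIPTION = "chromium browser"
    DEPENDS = "webkit-gtk libpng jpeg zlib openssl"
    # bitbake starts from this single target, walks DEPENDS
    # recursively (each entry is itself a recipe with its own
    # DEPENDS), builds the leaves first and works back up until the
    # one requested thing exists.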
> we
> build one giant binary with a link invocation that pulls in the
> archived objects directly. In theory you could delete the individual
> .o files that go into each target's static library, but the static
> libraries themselves (which are, indeed, massive because they contain
> full debug info) can't be deleted or else you can't build the main
> binary any more, and if you did delete the .o files then every time
> any file in that target was touched you'd have to recompile the whole
> thing..
yes. so there are libraries (dependencies) that need to be built;
these also get "cached" by bitbake for you, so that they need not be
built again. the only time that they would need to be rebuilt would
be if one of the files (say, in a dependent library) changed.
you're familiar with the process (but please allow me to describe
it for the benefit of others): under these circumstances you would do
a commit; the commit would change the git/svn revision; that in turn
would require a change to the .bb file for that dependency, pinning
the new revision; and that change is the "trigger" for bitbake to
junk the cache for that library and recompile it. once that
completes, the rebuilt (static or dynamic) library is made available
to the packages that depend on it.
anyone familiar with repo should recognise this: it's the same idea
as repo's manifest XML files, which pin each project to a revision.
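in bitbake terms the moving parts are roughly these (all values
invented for illustration):

    # local.conf: built packages are kept in a shared-state cache so
    # they need not be rebuilt
    SSTATE_DIR = "/opt/build/sstate-cache"
    # the dependency's .bb file: bumping the pinned revision is the
    # trigger that junks that one package's cached result and
    # rebuilds it
    SRCREV = "140001"    # previously "140000"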
but yes, *within* a package, you're absolutely right: it's an
all-or-nothing deal. what i've frequently found when using bitbake
for development is that, for the package i'm actually working on, i
have to switch *off* the "delete the .o cache after completion"
behaviour, simply because otherwise rebuilds take too long. in that
way i can trade disk space for time... but *on a per-package basis*.
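the relevant knobs for that, roughly (the excluded recipe name is
just whichever package you happen to be hacking on):

    # local.conf sketch: delete each package's work directory as soon
    # as it has finished building and its output has been stashed...
    INHERIT += "rm_work"
    # ...except the package under active development, which keeps its
    # object files so that incremental rebuilds stay fast
    RM_WORK_EXCLUDE = "webkit-gtk"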
so, coming back to a concrete example: i wouldn't mind downloading
nacl's source code (1 gbyte, estimated?) *if*, once that dependency
was completely built and its binaries installed, the intermediate
work could be thrown away.
however, after downloading chrome (1.3gb), unpacking it (9gb), and
even then going in and removing some of the DLLs and .EXEs in
src/third_party, i *still* only had 800mb free - not enough to run
"make" in the nacl subdirectory; not even enough to fetch nacl's
source code, let alone build it.
>> this "trick" allows me to build - even cross-compile - entire OSes in
>> a 10gb partition.
>>
>> basically what i'm saying is that i was, when i learned that google
>> had built their own "build chain" tools i was amazed, puzzled and then
>> disappointed when i saw how behind that they are when compared to the
>> state-of-the-art that's been around for such a long time now that it's
>> both very very mature as well as deeply comprehensive.
>>
>> so what google has been developing and ignoring the state-of-the-art
>> tools is what google is now reaping the results from: an
>> unsatisfactory build system that prevents and prohibits free software
>> developers from aiding and assisting google to accelerate the
>> development of products.
>>
>> *shrug* :) you see what i'm getting at?
>
> I think you are entirely misunderstanding our build system. We are not
> doing anything comparable to what bitbake does.
yes - precisely.
> We build a single main
> binary in one go from a source tree that *happens* to be composed of
> sources from multiple different projects.
... where those sources are dropped into the chrome svn repository,
such that i have absolutely *no choice* but to hit "make". if you
used bitbake, i could at least build *some* of those "multiple
different projects", then go in and do on an individual basis "rm -fr
work/".
in this way, i would be able to keep the build output down to
manageable levels.
> There is just one Makefile
> (or equivalent for other build systems) that knows how to build the
> entire thing, which is how we ensure that dependencies are correct on
> rebuilds.
like buildroot.
> We're not inventing our own "build chain" tools at all really: we are
> using make, the same as everyone else :)
:) yes. i believe it was... ach... what's his name. frizzy hair.
einstein! was it einstein? anyway, you should recognise the quote
(usually attributed to him): "everything should be made as simple as
possible, but not simpler".
> We have our own makefile generator (gyp), the reasons for which are
> numerous (other makefile generators typically support less platforms,
> or are unwieldy to use for something so large, for a start),
*sigh* yeah i've run the gauntlet of cross-compiling autoconf'd
packages under mingw32... running under wine (!). don't laugh or
start coughing loudly, maybe i should have warned you to put the
coffee down first, sorry - but... christ... it _was_ truly insane.
each autoconf "test" took literally 10-30 seconds to run. i was
compiling python for mingw32 under wine (don't ask...) - completing
each ./configure run - just the ./configure run - took about two
hours. in the end i gave up, #ifdef'd vast sections of
configure.in
and used a hard-coded config.h file.
so, i do sympathise here. it does however leave you entirely
divorced from a vast amount of cumulative expertise, and it will
burden you with the task of porting chrome to each and every platform.
with ARM beginning to take off, bringing the headaches of armel,
armhf, the abortion-that-is-the-raspberry-pi, the Marvell PXA "ARM
clones", *and* ARM64 coming up as well, take this as something of a
"heads-up": without one, it's going to be headaches all round if
you're not careful :)
> [.... read and appreciated....]
> You *can* help us code it. We accept lots of patches from external
> contributors, and many Chromium committers are not Google employees.
> If you can't get your environment set up, come ask us for help on IRC,
> or explain the problems here on the list. I'm on #chromium and
> #chromium-support basically all day (UK time) and if you can concisely
> explain the problems I will do my best to help you resolve them.
thank you richard. i've raised a couple of bug reports already,
but... yeah, there's enough of WebRTC up and running for me to be
able to move on to completing the goal without needing to resort to
source-code builds.
so, i'm really grateful for the opportunity to share some insights
with you, here - but i have to move on, actually get something done :)
thanks for your time.
l.