
The sad state of C++ unit testing in Mozilla


Kyle Huey

Aug 12, 2011, 8:41:36 AM
to mozilla.de...@lists.mozilla.org
Writing tests in C++ for Mozilla code is extremely painful, when it's even
possible. We have code that exposes objects through XPCOM
<http://mxr.mozilla.org/mozilla-central/source/xpcom/io/nsScriptableBase64Encoder.cpp> or
to script <http://mxr.mozilla.org/mozilla-central/source/startupcache/nsIStartupCache.idl#49> for
the sole purpose of writing tests. You also have to deal with things
like the horrible external string API.

I've been kicking around the idea of a C++ test harness that lets tests use
things internal to libxul for a while. The basic idea would be to link
libxul twice, once as a dynamic shared object and once as a static library,
and then later in the build link the static version of libxul against the
tests to form 'libxulwithtests.so' (or whatever). There are some details to
handle for tests that require clean xpcom environments, but nothing terribly
hard.

Ted has suggested using GoogleTest <http://code.google.com/p/googletest/> as
the underlying framework, and that seems reasonable to me. GoogleTest is
used by Chromium and LLVM, among other large projects. It'd also let us use
GoogleMock <http://code.google.com/p/googlemock/>, which will likely come in
very handy. We ought to be able to teach pyxpidl to write out mock xpcom
objects that tests can use.
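
(For illustration, a minimal sketch of what such a test could look like with
GoogleTest; TEST and EXPECT_EQ are real gtest macros, while
Base64EncodeInternal is a hypothetical stand-in for whatever libxul-internal
function a test would exercise directly instead of going through an XPCOM
wrapper.)

#include "gtest/gtest.h"
#include <string>

// Hypothetical declaration standing in for an internal libxul function that a
// compiled test could call directly once tests link against libxul's objects.
std::string Base64EncodeInternal(const std::string& input);

TEST(Base64, EncodesAsciiInput)
{
  EXPECT_EQ("Zm9v", Base64EncodeInternal("foo"));
  EXPECT_EQ("", Base64EncodeInternal(""));
}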

If nobody has any objections I'll likely give this a go sometime soon and
convert some of our existing compiled tests to use it.

Questions/comments/concerns?

- Kyle

Jeff Muizelaar

Aug 12, 2011, 9:02:34 AM
to Kyle Huey, mozilla.de...@lists.mozilla.org

On 2011-08-12, at 8:41 AM, Kyle Huey wrote:
>
> If nobody has any objections I'll likely give this a go sometime soon and
> convert some of our existing compiled tests to use it.
>
> Questions/comments/concerns?

I think this is great. While we do quite well with integration testing, I get the impression that things
that aren't easy to test with our existing frameworks tend not to be tested (the code in mfbt/, etc.).
This would hopefully make some of those things easier to test.

-Jeff

Neil

Aug 12, 2011, 9:45:32 AM
Kyle Huey wrote:

>The basic idea would be to link libxul twice, once as a dynamic shared object and once as a static library
>

Can't we use fakelibs to simplify this?

--
Warning: May contain traces of nuts.

Kyle Huey

Aug 12, 2011, 9:57:09 AM
to Neil, dev-pl...@lists.mozilla.org
On Fri, Aug 12, 2011 at 9:45 AM, Neil <ne...@parkwaycc.co.uk> wrote:

> Kyle Huey wrote:
>
>> The basic idea would be to link libxul twice, once as a dynamic shared
>> object and once as a static library
>
> Can't we use fakelibs to simplify this?


Yes. In practice the "linking as a static library" will translate to "write
down a file with the list of all the object files that go into libxul".

I just didn't want to get too technical with build system internals here :-)

- Kyle

Chris Jones

Aug 12, 2011, 2:57:37 PM
to Kyle Huey
On 08/12/2011 05:41 AM, Kyle Huey wrote:
> I've been kicking around the idea of a C++ test harness that lets tests use
> things internal to libxul for a while. The basic idea would be to link
> libxul twice, once as a dynamic shared object and once as a static library,
> and then later in the build link the static version of libxul against the
> tests to form 'libxulwithtests.so' (or whatever). There are some details to
> handle with tests that require clean xpcom environments but nothing terribly
> hard.

In case it helps, the IPDL unit tests do something very much like this:
when --enable-ipdl-tests is set, the C++ tests are built into libxul, and
there's a separate ipdlunittest binary that knows how to call into the tests'
"Main()" functions, as a kind of meta-binary. For example,
|ipdlunittest TestSanity| invokes TestSanity::Main(), approximately.
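
(A rough sketch of that meta-binary idea, purely for illustration and not the
actual IPDL harness code; the test names here are made up.)

#include <cstdio>
#include <cstring>

namespace TestSanity { int Main() { std::puts("TestSanity passed"); return 0; } }
namespace TestLatency { int Main() { std::puts("TestLatency passed"); return 0; } }

int main(int argc, char** argv)
{
  if (argc < 2) {
    std::fprintf(stderr, "usage: ipdlunittest <TestName>\n");
    return 1;
  }
  // Dispatch by name to the requested test's Main(), e.g. |ipdlunittest TestSanity|.
  if (!std::strcmp(argv[1], "TestSanity")) return TestSanity::Main();
  if (!std::strcmp(argv[1], "TestLatency")) return TestLatency::Main();
  std::fprintf(stderr, "unknown test: %s\n", argv[1]);
  return 1;
}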

I'm not familiar with the Google stuff, but it probably does something like
that, only better. The IPDL harness was an afternoon hack.

Cheers,
Chris

Joshua Cranmer

Aug 12, 2011, 11:19:37 PM
On 8/12/2011 7:41 AM, Kyle Huey wrote:
> Writing tests in C++ for Mozilla code is extremely painful, when its even
> possible. We have code that exposes objects through XPCOM
> <http://mxr.mozilla.org/mozilla-central/source/xpcom/io/nsScriptableBase64Encoder.cpp>or
> to script<http://mxr.mozilla.org/mozilla-central/source/startupcache/nsIStartupCache.idl#49>for
> the sole purpose of writing tests. You also have to deal with things
> like the horrible external string API.
As a side note, what were the reasons for introducing a totally new
external string API? One of the reasons was ABI compatibility (the
external string API loads only extern "C" functions from libxul), although we
now have C++-exported functions in libxul.so; when I asked in #developers
I remember getting another reason, but it escapes me right now...

> Ted has suggested using GoogleTest<http://code.google.com/p/googletest/> as
> the underlying framework, and that seems reasonable to me. GoogleTest is
> used by Chromium and LLVM among other large projects. It'd also let
> us use GoogleMock
> <http://code.google.com/p/googlemock/>which will likely come in very handy.
> We ought to be able to teach pyxpidl to write out mock xpcom objects that
> tests can use.

We already have a header somewhere for some basic testing. If it's not
too much to add, an ability to "expect" an assertion would be helpful.

In terms of mock objects, I doubt the framework would work nicely with
all of our xpidl objects (ACStrings and jsvals can be problematic). It
should be possible to use our stub library to do enough to replicate the
mock library, but I'm not looking forward to writing that. In any case, being
able to do mock xpcom stuff from xpcshell tests would also be useful, I
think.

Robert O'Callahan

Aug 13, 2011, 8:59:15 AM
to Jeff Muizelaar, Kyle Huey, mozilla.de...@lists.mozilla.org
This sounds great to me too.

Rob
--
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]

Benjamin Smedberg

Aug 13, 2011, 9:25:23 AM
to Joshua Cranmer, dev-pl...@lists.mozilla.org
On 8/12/2011 11:19 PM, Joshua Cranmer wrote:
> On 8/12/2011 7:41 AM, Kyle Huey wrote:
>> Writing tests in C++ for Mozilla code is extremely painful, when its
>> even
>> possible. We have code that exposes objects through XPCOM
>> <http://mxr.mozilla.org/mozilla-central/source/xpcom/io/nsScriptableBase64Encoder.cpp>or
>>
>> to
>> script<http://mxr.mozilla.org/mozilla-central/source/startupcache/nsIStartupCache.idl#49>for
>> the sole purpose of writing tests. You also have to deal with things
>> like the horrible external string API.
> As a side note, what are the reasons for the introduction of a totally
> new external string API? One of the reasons was ABI compatibility (the
> external string loads only extern-C functions from libxul), although
> we now have C++-exported functions in libxul.so; I remember when I
> asked in #developers, I got another reason but it escapes my mind
> right now...
What C++ functions are being exported from libxul?

The purpose of the external string API was twofold: 1) to provide an
ABI-stable base (which is no longer important), and 2) to avoid exporting the
internal API, which reduced the number of relocations on startup and
improved startup time by 2-5%.

I think the startup time improvement is still important to keep.

--BDS

Benoit Jacob

Jul 4, 2012, 12:53:03 AM
to Benjamin Smedberg, Joshua Cranmer, dev-pl...@lists.mozilla.org
What's the status of this effort?

This would be very useful to me as I have things to test in the WebGL
implementation that are very inconvenient and inefficient to test
exhaustively by JavaScript. Specifically, drawElements index
validation and texture conversions. The former has been at the center
of some of our worst WebGL security bugs.

So far, the code to be tested is in a few self-contained .cpp and .h
files, so I am resorting (in a not-yet-uploaded patch) to adding these,
including the .cpp files, to EXPORTS and #including them directly in
my unit tests.

This is not ideal and won't scale when I have non-tiny-self-contained
things to test, so I look forward to Kyle's idea being implemented.
The only thing that scares me here is doubling the libxul linkage
pain.

Benoit


2011/8/13 Benjamin Smedberg <benj...@smedbergs.us>:
> On 8/12/2011 11:19 PM, Joshua Cranmer wrote:
>>
>> On 8/12/2011 7:41 AM, Kyle Huey wrote:
>>>
>>> Writing tests in C++ for Mozilla code is extremely painful, when its even
>>> possible. We have code that exposes objects through XPCOM
>>>
>>> <http://mxr.mozilla.org/mozilla-central/source/xpcom/io/nsScriptableBase64Encoder.cpp>or
>>> to
>>> script<http://mxr.mozilla.org/mozilla-central/source/startupcache/nsIStartupCache.idl#49>for
>>> the sole purpose of writing tests. You also have to deal with things
>>> like the horrible external string API.
>>
>> As a side note, what are the reasons for the introduction of a totally new
>> external string API? One of the reasons was ABI compatibility (the external
>> string loads only extern-C functions from libxul), although we now have
>> C++-exported functions in libxul.so; I remember when I asked in #developers,
>> I got another reason but it escapes my mind right now...
>
> What C++ functions are being exported from libxul?
>
> The purpose of the external string API was twofold: 1) to provide an
> ABI-stable base (which is no longer important) 2) to avoid exporting the
> internal API, which reduces the number of relocations on startup and
> improved startup time by 2-5%.
>
> I think the startup time improvement is still important to keep.
>
> --BDS
>
>
> _______________________________________________
> dev-platform mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

Kyle Huey

Jul 5, 2012, 12:30:28 PM
to Benoit Jacob, dev-pl...@lists.mozilla.org
On Tue, Jul 3, 2012 at 9:53 PM, Benoit Jacob <jacob.b...@gmail.com> wrote:

> What's the status of this effort?
>

I never found time to work on it. Brian Smith wrote patches for some of
this in bug 767231. Those patches are awaiting feedback from me.

ekr has also been working on some related stuff for WebRTC, but I don't
know how far along he's gotten.

- Kyle

Brian Smith

Jul 10, 2012, 7:58:11 PM
to Benoit Jacob, Joshua Cranmer, Benjamin Smedberg, dev-pl...@lists.mozilla.org, Eric Rescorla
Benoit Jacob wrote:
> This would be very useful to me as I have things to test in the WebGL
> implementation that are very inconvenient and inefficient to test
> exhaustively by JavaScript. Specifically, drawElements index
> validation and texture conversions. The former has been at the center
> of some of our worst WebGL security bugs.
>
> So far, the code to be tested is in a few self-contained .cpp and .h
> files. So I am resorting (in not-yet-uploaded patch) to adding these,
> including the .cpp files, to EXPORTS and #including them directly in
> my unit tests.
>
> This is not ideal and won't scale when I have non-tiny-self-contained
> things to test, so I look forward to Kyle's idea being implemented.
> The only thing that scares me here is doubling the libxul linkage
> pain.

With both of those approaches (#including and double-linking), we're not testing the bits that we ship, but rather something else. That makes me very nervous.

I have resorted (in *my* not-yet-updated patches) to exporting a wrapper function for each function I want to test. The wrapper does conversions like char* <-> nsCString to avoid INTERNAL/EXTERNAL API problems. The wrappers also help (AFAICT) to avoid adding too many relocations to libxul. This lets my tests test exactly the bits that will ship, AND it avoids the need to split out functions to be tested into their own files that would be suitable for #including. This seems like it will work well for me so I am not going to implement the double-linking scheme.

namespace mozilla {

class Bar
{
  ...
};

nsresult functionToTest(const nsCString & s, Bar * x);

namespace testonly {

MOZ_API(nsresult) functionToTest(const char * s, Bar * x)
{
  return ::mozilla::functionToTest(PromiseFlatCString(s), x);
}

} // namespace testonly

} // namespace mozilla

I also spent a considerable amount of time refactoring code to get it into a testable state, to minimize the number of wrappers I have to write and to minimize the number of components I need to instantiate per test. This works well for what I need to test but it may not work well for you.

In bug 767231, besides getting gtest to build as part of mozilla-central, I also wrote some helpers:

EXPECT_NSRESULT_SUCCEEDED(rv)/ASSERT_NSRESULT_SUCCEEDED(rv):
These check that NS_SUCCEEDED(rv), and if not, they try to print the most useful error message they can.

ASSERT_NSRESULT_FAILED(expectedError, rv)/EXPECT_NSRESULT_FAILED(expectedError, rv):
These check that you get the error you expect, and if not, print the error you actually got in a useful way.

In particular, the idea is that these functions should eventually print out the symbolic name of an error (NS_ERROR_XXX) instead of the numerical value. But, I haven't completed the implementation of that yet.
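
(Purely to illustrate that idea, and not the actual bug 767231 code, here is a
sketch of how such a helper could sit on top of gtest; GetNsresultName is a
hypothetical helper that only knows a couple of codes in this sketch.)

#include "gtest/gtest.h"
#include "nsError.h"

// Hypothetical helper mapping a few nsresult codes to their symbolic names.
static const char* GetNsresultName(nsresult rv)
{
  switch (rv) {
    case NS_OK:                  return "NS_OK";
    case NS_ERROR_FAILURE:       return "NS_ERROR_FAILURE";
    case NS_ERROR_OUT_OF_MEMORY: return "NS_ERROR_OUT_OF_MEMORY";
    default:                     return "(nsresult not known to this sketch)";
  }
}

#define EXPECT_NSRESULT_SUCCEEDED(rv) \
  EXPECT_TRUE(NS_SUCCEEDED(rv)) << "expected success, got " << GetNsresultName(rv)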

Besides bug 767231, which is about integrating GTest into TestHarness.h, I have also been working on some utilities to make writing native-code tests less tedious:

Bug 767242: ScopedPrefChange allows a pref to be set for the duration of a test.
Bug 767241 and Bug 767240: Make it easier to create smart pointer types for C-based APIs.

This work isn't finished but I would appreciate any feedback on it that I can get while I'm finishing it up.

Cheers,
Brian

Ted Mielczarek

Jul 10, 2012, 8:20:53 PM
to Brian Smith, dev-pl...@lists.mozilla.org
On Tue, Jul 10, 2012 at 7:58 PM, Brian Smith <bsm...@mozilla.com> wrote:
> With both of those approaches, (#including and double-linking), we're not testing the bits that we ship, but rather something else. That makes me very nervous.
>
> I have resorted (in *my* not-yet-updated patches) to exporting a wrapper function for each function I want to test. The wrapper does conversions like char* <-> nsCString to avoid INTERNAL/EXTERNAL API problems. The wrappers also help (AFAICT) to avoid adding too many relocations to libxul. This lets my tests test exactly the bits that will ship, AND it avoids the need to split out functions to be tested into their own files that would be suitable for #including. This seems like it will work well for me so I am not going to implement the double-linking scheme.

We discussed this again briefly in #developers today, and one idea
that was floated was simply linking all the test code into libxul, and
adding some way to call it. We could perhaps even add a -runtests
command line parameter that ran all the C++ tests that were linked in.

-Ted

Brian Smith

Jul 10, 2012, 10:13:53 PM
to Ted Mielczarek, dev-pl...@lists.mozilla.org
I do not recommend putting any more testing code into libxul than necessary as then we potentially have to worry about security bugs, static initializers, code bloat, etc., in the testing code. I don't know exactly how GTest works under the covers but I suspect static initializers are used at least part of the time. Generally, we want test code to be as easy to read and write as possible, at the expense of "proper" error handling and other considerations that are more important in libxul than in test code.

On the other hand, exposing test-specific APIs like I have done also opens up some security surface since those APIs are (obviously) exported from the DLLs and thus easily callable.

Cheers,
Brian

Ehsan Akhgari

Jul 11, 2012, 1:05:50 AM
to dev-pl...@lists.mozilla.org
On 12-07-10 7:58 PM, Brian Smith wrote:
> In bug 767231, besides getting gtest to build as part of mozilla-central, I also wrote some helpers:
>
> EXPECT_NSRESULT_SUCCEEDED(rv)/ASSERT_NSRESULT_SUCCEEDED(rv):
> These check that NS_SUCCEEDED(rv), and if not, they try to print the most useful error message they can.
>
> ASSERT_NSRESULT_FAILED(expectedError, rv)/EXPECT_NSRESULT_FAILED(expectedError, rv):
> These check that you get the error you expect, and if not, print the error you actually got in a useful way.

Hmm, are these checks defined only when DEBUG is defined? Because in
that case, you won't be testing the exact bits we ship. And otherwise,
the test code will be built into the binaries that we ship...

Ehsan

Neil

Jul 11, 2012, 5:25:59 AM
Brian Smith wrote:

> MOZ_API(nsresult) functionToTest(const char * s, Bar * x)
> {
> return ::mozilla::functionToTest(PromiseFlatCString(s), x);
> }
>
I really hope that doesn't compile.

To convert a const char* that won't be null to an XPCOM string, use
nsDependentCString(s).

To convert a const char* that might be null to an XPCOM string, use
nsCString(s).
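
(A tiny sketch of the two conversions, for illustration; takesString is a
hypothetical callee and this assumes the internal string API.)

#include "nsString.h"

void takesString(const nsACString& s); // hypothetical function taking an XPCOM string

void convertExamples(const char* neverNull, const char* maybeNull)
{
  // Wraps the existing buffer without copying; the pointer must not be null.
  takesString(nsDependentCString(neverNull));
  // Copies into an owned string; this is the form to use when the pointer
  // might be null.
  takesString(nsCString(maybeNull));
}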

Nicholas Nethercote

Jul 11, 2012, 5:37:42 AM
to Neil, dev-pl...@lists.mozilla.org
On Wed, Jul 11, 2012 at 7:25 PM, Neil <ne...@parkwaycc.co.uk> wrote:
>
>> MOZ_API(nsresult) functionToTest(const char * s, Bar * x)
>> {
>> return ::mozilla::functionToTest(PromiseFlatCString(s), x);
>> }
>>
> I really hope that doesn't compile.
>
> To convert a const char* that won't be null to an XPCOM string, use
> nsDependentCString(s).
>
> To convert a const char* that might be null to an XPCOM string, use
> nsCString(s).

PromiseFlatCString is just a typedef of nsCString.

Nick

Neil

Jul 11, 2012, 6:14:44 AM
PromiseFlatCString is only supposed to accept a const nsACString&;
however, as you point out, the definition of the external API version is
simpler because it's unable to do the optimisations that the internal
API version does.

Ted Mielczarek

Jul 11, 2012, 7:16:17 AM
to Brian Smith, dev-pl...@lists.mozilla.org
On Tue, Jul 10, 2012 at 10:13 PM, Brian Smith <bsm...@mozilla.com> wrote:
> I do not recommend putting any more testing code into libxul than necessary as then we potentially have to worry about security bugs, static initializers, code bloat, etc., in the testing code. I don't know exactly how GTest works under the covers but I suspect static initializers are used at least part of the time. Generally, we want test code to be as easy to read and write as possible, at the expense of "proper" error handling and other considerations that are more important in libxul than in test code.
>
> On the other hand, exposing test-specific APIs like I have done also opens up some security surface since those APIs are (obviously) exported from the DLLs and thus easily callable.

I don't personally think worrying about the security surface of
exported APIs from libxul is worthwhile. Anything looking to exploit
those would already have to have access to execute code on the user's
machine. As long as we are not adding web-visible functionality I
don't see the harm.

The static initializer issue is interesting, that would be worth looking into.

In general I'm not sure how well your approach will scale. You're
talking about writing wrappers for each function that you want to
test, but people are going to want to test complex bits of Gecko that
use a lot of XPCOM and tons of internal classes. Figuring out how to
wrap all that in externally-visible wrappers seems really hard.

-Ted

Benoit Jacob

Jul 11, 2012, 8:59:22 AM
to Ted Mielczarek, Brian Smith, dev-pl...@lists.mozilla.org
2012/7/10 Ted Mielczarek <t...@mielczarek.org>:
> On Tue, Jul 10, 2012 at 7:58 PM, Brian Smith <bsm...@mozilla.com> wrote:
>> With both of those approaches, (#including and double-linking), we're not testing the bits that we ship, but rather something else. That makes me very nervous.
>>
>> I have resorted (in *my* not-yet-updated patches) to exporting a wrapper function for each function I want to test. The wrapper does conversions like char* <-> nsCString to avoid INTERNAL/EXTERNAL API problems. The wrappers also help (AFAICT) to avoid adding too many relocations to libxul. This lets my tests test exactly the bits that will ship, AND it avoids the need to split out functions to be tested into their own files that would be suitable for #including. This seems like it will work well for me so I am not going to implement the double-linking scheme.
>
> We discussed this again briefly in #developers today, and one idea
> that was floated was simply linking all the test code into libxul, and
> adding some way to call it. We could perhaps even add a -runtests
> command line parameter that ran all the C++ tests that were linked in.

The problem with that is that we would then have to worry about test
code size bloating libxul.

This is particularly worrying for tests covering c++ templates, which
is an area where compiled tests are particularly necessary. Indeed,
testing c++ templates exhaustively can require making many
instantiations.

For example, in an Android opt build, mfbt/tests/TestCheckedInt has
229k of actual code as measured by nm (i.e. not counting any goop):

obj-mobile-opt/mfbt/tests$ nm -S --radix=d TestCheckedInt | cut -d ' ' -f 2 | grep [0-9] | awk '{ SUM += $1} END { print SUM/1024 }'
229.299

So if we built tests into libxul, that couldn't be the configuration
that we actually ship, which seems to bring us back to the two-libxuls
idea from Kyle's original email.

If, on the other hand, we go with Brian's idea of exporting some functions
from libxul to allow tests in separate executables, the problem is: how
expensive is it to have many exported functions in libxul? I don't
have experience with that, but the reason for -fvisibility=hidden is
to allow faster loading of binaries and we would defeat that if we had
to export many wrapper functions. Some numbers here, about how many
functions we can export before it starts to matter, would be very
useful.

Benoit

Benoit Jacob

Jul 11, 2012, 9:28:45 AM
to Ted Mielczarek, Brian Smith, dev-pl...@lists.mozilla.org
2012/7/11 Benoit Jacob <jacob.b...@gmail.com>:
> 2012/7/10 Ted Mielczarek <t...@mielczarek.org>:
>> On Tue, Jul 10, 2012 at 7:58 PM, Brian Smith <bsm...@mozilla.com> wrote:
>>> With both of those approaches, (#including and double-linking), we're not testing the bits that we ship, but rather something else. That makes me very nervous.
>>>
>>> I have resorted (in *my* not-yet-updated patches) to exporting a wrapper function for each function I want to test. The wrapper does conversions like char* <-> nsCString to avoid INTERNAL/EXTERNAL API problems. The wrappers also help (AFAICT) to avoid adding too many relocations to libxul. This lets my tests test exactly the bits that will ship, AND it avoids the need to split out functions to be tested into their own files that would be suitable for #including. This seems like it will work well for me so I am not going to implement the double-linking scheme.
>>
>> We discussed this again briefly in #developers today, and one idea
>> that was floated was simply linking all the test code into libxul, and
>> adding some way to call it. We could perhaps even add a -runtests
>> command line parameter that ran all the C++ tests that were linked in.
>
> The problem with that is that we would then have to worry about test
> code size bloating libxul.
>
> This is particularly worrying for tests covering c++ templates, which
> is an area where compiled tests are particularly necessary. Indeed,
> testing c++ templates exhaustively can require making many
> instantiations.
>
> For example, in a Android opt build, mfbt/tests/TestCheckedInt has
> 229k of actual code as measured by nm (i.e. not counting any goop)
>
> obj-mobile-opt/mfbt/tests$ nm -S --radix=d TestCheckedInt | cut -d ' '
> -f 2 | grep [0-9] | awk '{ SUM += $1} END { print SUM/1024 }'
> 229.299

Some more data points.

The code size of all compiled tests can be obtained by this command:

(for d in `find . -type d -name 'test*'`; do (for p in `ls $d`; do
echo "`nm -S --radix=d $d/$p 2>&1| grep -v Warning | cut -d ' ' -f 2 |
grep [0-9] | awk '{ SUM += $1} END { print SUM }'` $d/$p"; done) |
grep ^[0-9] | grep -v \\.o | grep -v '^0 ' | sort -n; done) | tee log

What I get in my Android-opt objdir is that the profiler's libunwind
has by far the biggest test code size, over 120 M,

67302519 ./tools/profiler/libunwind/src/tests/Lperf-simple
67306357 ./tools/profiler/libunwind/src/tests/Gperf-simple

Omitting tools/profiler, the remaining tests total 1.4 MiB of code
which is already big enough to be a concern about libxul bloat. Again,
this is pure code size, not counting symbols or anything. It's a lower
bound on bloat.

The 20 biggest compiled test files are:

$ cat log | sort -n | tail -n 20
15064 ./storage/test/test_unlock_notify
15324 ./storage/test/test_AsXXX_helpers
15556 ./storage/test/test_asyncStatementExecution_transaction
15645 ./storage/test/test_true_async
15820 ./netwerk/test/TestProtocols
16306 ./storage/test/test_binding_params
16648 ./xpcom/typelib/xpt/tests/SimpleTypeLib
17191 ./netwerk/test/TestPageLoad
18742 ./storage/test/test_StatementCache
19177 ./netwerk/test/TestCookie
22920 ./xpcom/tests/TestHashtables
24828 ./mozglue/tests/TestZip
31127 ./xpcom/tests/TestTArray
40452 ./editor/txmgr/tests/TestTXMgr
55016 ./toolkit/crashreporter/test/libtestcrasher.so
136835 ./dom/plugins/test/testplugin/libnptest.so
159610 ./intl/unicharutil/tests/UnicharSelfTest
234802 ./mfbt/tests/TestCheckedInt
67302519 ./tools/profiler/libunwind/src/tests/Lperf-simple
67306357 ./tools/profiler/libunwind/src/tests/Gperf-simple

Benoit

Ted Mielczarek

Jul 11, 2012, 9:44:33 AM
to Benoit Jacob, dev-pl...@lists.mozilla.org
On Wed, Jul 11, 2012 at 8:59 AM, Benoit Jacob <jacob.b...@gmail.com> wrote:
> The problem with that is that we would then have to worry about test
> code size bloating libxul.
>
> This is particularly worrying for tests covering c++ templates, which
> is an area where compiled tests are particularly necessary. Indeed,
> testing c++ templates exhaustively can require making many
> instantiations.

That's definitely a worry.

The other option I floated in #developers is making debug builds
disable the hidden visibility stuff we currently do, so that C++ tests
can link to whatever they want inside libxul. The downside is that we
obviously can't ship in this configuration, so we would only be able
to test tinderbox builds.

-Ted

Benjamin Smedberg

Jul 11, 2012, 9:47:19 AM
to Ted Mielczarek, Benoit Jacob, dev-pl...@lists.mozilla.org
And it wouldn't work on Windows.

--BDS

Mike Hommey

Jul 11, 2012, 9:48:07 AM
to Benoit Jacob, Brian Smith, dev-pl...@lists.mozilla.org, Ted Mielczarek
The libunwind tests obviously don't need to access libxul. They can
easily be linked with libunwind alone. I'm pretty sure some of the other
tests can also be linked with a minimal amount of stuff, provided,
maybe, some mock implementation of some interfaces.

Mike

Brian Smith

Jul 11, 2012, 3:16:14 PM
to Ehsan Akhgari, dev-pl...@lists.mozilla.org
First, the GTest integration work I did creates a static library that is linked into standalone C++ unit test executables, not into libxul. The idea is that you write GTest tests and the integration work I did will cause us to build a standalone test executable that looks like a "normal" TestHarness.h-based test. (The main consequence of this is that you only have access to the exported external API from your test.)

GTest has assertions of the form ASSERT_* and EXPECT_* that are like our JavaScript is(), isnot(), ok(). These macros are to be used ONLY in test code. They are always defined, regardless of DEBUG. (EXPECT_* records a failure and continues; ASSERT_* records a failure and stops execution of the current test.) The naming convention ASSERT_* is a little unfortunate because it can be confused with debug-build-only assertion mechanisms, but that is the GTest convention.
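
(A quick illustration of that distinction, using standard gtest behavior:)

#include "gtest/gtest.h"

TEST(Example, ExpectVersusAssert)
{
  EXPECT_EQ(1, 2);   // records a failure, but the test keeps running
  ASSERT_EQ(1, 1);   // would stop this test function here if it failed
  EXPECT_TRUE(true); // still reached, because the ASSERT above passed
}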

Cheers,
Brian

Brian Smith

Jul 11, 2012, 3:24:34 PM
to Ted Mielczarek, dev-pl...@lists.mozilla.org
Ted Mielczarek wrote:
> I don't personally think worrying about the security surface of
> exported APIs from libxul is worthwhile. Anything looking to exploit
> those would already have to have access to execute code on the user's
> machine.

I mostly agree. My point is that any testing code that goes into libxul generally would need to live up to higher standards than test code outside of libxul.

> In general I'm not sure how well your approach will scale. You're
> talking about writing wrappers for each function that you want to
> test, but people are going to want to test complex bits of Gecko that
> use a lot of XPCOM and tons of internal classes. Figuring out how to
> wrap all that in externally-visible wrappers seems really hard.

EKR also mentioned that he thinks my approach isn't scalable. My brute-force solution isn't hugely different, in terms of effort, from exposing the functionality via XPCOM just for testing purposes.

I guess it really depends on what you're doing. For what I'm doing it isn't too tedious to do things the way I'm doing them. Maybe I've just been lucky so far; others may need a more flexible solution.

Note that the wrappers don't have to be as simple as the one I offered in my example. Often a test needs quite a bit of parameterized boilerplate to test slightly different conditions of the same general thing, and these wrapper functions may (or may not) be a good way of organizing that; it does add small amounts of test-specific code to libxul, but it should not be too significant.
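
(For illustration, a sketch of such a wrapper, building on the earlier
functionToTest example; the strictMode flag and the per-variant setup are
hypothetical.)

#include "nsString.h"
#include "nsError.h"

namespace mozilla {

class Bar;
nsresult functionToTest(const nsCString& s, Bar* x); // internal function under test

namespace testonly {

// One exported entry point that can drive several variations of the same code path.
nsresult functionToTestVariant(const char* input, bool strictMode)
{
  nsCString s(input);
  if (strictMode) {
    s.Append(";strict"); // hypothetical per-variant setup
  }
  return ::mozilla::functionToTest(s, nullptr);
}

} // namespace testonly
} // namespace mozilla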

Cheers,
Brian

Brian Smith

Jul 11, 2012, 3:28:12 PM
to Mike Hommey, Benoit Jacob, dev-pl...@lists.mozilla.org, Ted Mielczarek
Mike Hommey wrote:
> The libunwind tests obviously don't need to access libxul. They can
> easily be linked with libunwind alone. I'm pretty sure some of the
> other tests can also be linked with a minimal amount of stuff,
> provided, maybe, some mock implementation of some interfaces.

The current tests already work without being linked into libxul so there's no reason to link them in now.

The takeaway from Benoit's email is that if/when we have a convenient way to write native code unit tests, that test code is likely to become too large (over time) for us to link it all into libxul.

Cheers,
Brian

Joshua Cranmer

Jul 11, 2012, 3:28:23 PM
On 7/11/2012 3:24 PM, Brian Smith wrote:
> Note that the wrappers don't have to be as simple as the one I offered
> in my example. Often a test needs quite a bit a parameterized
> boilerplate to test slightly different conditions of the same general
> thing, and these wrapper functions may (or may not) be a good way of
> organizing that; it does add small amounts of test-specific code to
> libxul, but it should not be too significant.

There are some things you can't test effectively with separate
executables. For example, empty nsTArrays are known to cause crashes if
you pass them across library boundaries.


Robert O'Callahan

Jul 12, 2012, 2:24:50 AM
to Brian Smith, Benoit Jacob, Joshua Cranmer, Benjamin Smedberg, dev-pl...@lists.mozilla.org, Eric Rescorla
On Wed, Jul 11, 2012 at 11:58 AM, Brian Smith <bsm...@mozilla.com> wrote:

> With both of those approaches, (#including and double-linking), we're not
> testing the bits that we ship, but rather something else. That makes me
> very nervous.
>

It doesn't make me nervous.

Sure, it's possible that relinking libxul with test code might occasionally
conceal a bug. But testing is always a cost/benefit exercise. If relinking
libxul makes it significantly easier to write tests, people will write more
tests, and we'll catch more bugs, in exchange for maybe missing a very
small number. Sounds like a win to me.

Rob
--
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?" [Matthew 5:43-47]