Cross-browser automated tests


Sylvain Pasche

Mar 2, 2009, 6:09:10 PM
Hi,

This is an announcement of a project that may be of interest. The idea
is to run test cases automatically on several versions of the main
browsers. Test results should help authors, spec authors, and vendors see
which features are implemented where, and vendors could integrate these
tests into their regression testing frameworks.

For now the framework is able to run a subset of the WebKit and Mozilla
automated tests. For the Mozilla tests in particular, it runs the
Mochitests that don't use privileged APIs, and it runs Reftests by taking
screenshots.
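The screenshot approach above boils down to rendering a test page and its reference page and comparing the resulting pixels. A minimal sketch of that comparison step, with invented names (this is not the actual browsertests.org code) and screenshots modeled as flat RGBA pixel arrays:

```javascript
// Hypothetical sketch: compare two screenshots (test page vs. reference
// page) given as flat RGBA byte arrays. Names are illustrative only.
function compareScreenshots(testPixels, refPixels) {
  if (testPixels.length !== refPixels.length) {
    // Different dimensions can never match.
    return { pass: false, differingPixels: -1 };
  }
  let differing = 0;
  for (let i = 0; i < testPixels.length; i += 4) {
    // Compare the R, G, B, and A channels of each pixel.
    if (testPixels[i] !== refPixels[i] ||
        testPixels[i + 1] !== refPixels[i + 1] ||
        testPixels[i + 2] !== refPixels[i + 2] ||
        testPixels[i + 3] !== refPixels[i + 3]) {
      differing++;
    }
  }
  // An equality reftest passes when the renderings are pixel-identical
  // (a "!=" reftest would invert this condition).
  return { pass: differing === 0, differingPixels: differing };
}

// Example: two identical 2-pixel "screenshots".
const a = new Uint8Array([255, 0, 0, 255, 0, 255, 0, 255]);
const b = new Uint8Array([255, 0, 0, 255, 0, 255, 0, 255]);
console.log(compareScreenshots(a, b).pass); // true
```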

Results are visible at http://www.browsertests.org. Note that some
unimplemented features and other limitations cause tests that should
pass to fail, and vice versa. Source code and some documentation
are available at http://code.google.com/p/browsertests/.

One of the possible future improvements is to develop scripts/an
extension in order to run the WebKit LayoutTests on Mozilla. That means
implementing the layoutTestController global object that the tests use.
Some work has already been done in that direction by the Stanford Web
Security Research team (http://crypto.stanford.edu/websec/cross-testing/).
Such an extension could then be integrated into Mozilla's automated
testing harness.
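To make the idea concrete: dumpAsText(), waitUntilDone(), and notifyDone() are among the core layoutTestController methods WebKit LayoutTests call, so an extension would need to expose at least those. A minimal sketch of such a stub, with the completion callback standing in for whatever the real harness does to collect results:

```javascript
// Sketch of a layoutTestController stub an extension could inject.
// The onDone callback is a placeholder for the harness's result
// collection; the method names match WebKit's layoutTestController.
function makeLayoutTestController(onDone) {
  return {
    dumpAsText_: false,
    waiting_: false,
    // Tests call this to request a text dump instead of a pixel dump.
    dumpAsText() { this.dumpAsText_ = true; },
    // Asynchronous tests call this so the harness waits for
    // notifyDone() instead of finishing at the load event.
    waitUntilDone() { this.waiting_ = true; },
    // Asynchronous tests call this when they are finished.
    notifyDone() {
      this.waiting_ = false;
      onDone(this.dumpAsText_);
    },
  };
}

// Usage: the harness would expose this as window.layoutTestController.
let finished = false;
const layoutTestController = makeLayoutTestController(() => { finished = true; });
layoutTestController.dumpAsText();
layoutTestController.waitUntilDone();
layoutTestController.notifyDone();
console.log(finished); // true
```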

If you have questions/suggestions/ideas, they're welcome. There's also a
discussion group at http://groups.google.com/group/browsertests.


Sylvain

Jonas Sicking

Mar 2, 2009, 7:37:04 PM
Hi Sylvain,

This is awesome. Having a standardized test suite that all browsers can,
and do, submit tests to will be a great way to reduce bugs and increase
compatibility.

We definitely need to figure out a way to disable sets of tests that use
features that aren't supported by all browsers though, in order to keep
browser vendors (Mozilla included) from ignoring failing tests on the
assumption that it's probably just another missing feature.

One thing that I've heard of other test suites doing is separating the
tests from the information about if the test is expected to pass or
fail. This way you can have one set of tests for all browsers, and then
a separate 'pass/fail' chart per browser.
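That separation could look something like the following sketch: one shared test list plus a per-browser expectations chart. The data shape and names here are invented for illustration, not taken from any existing suite.

```javascript
// Hypothetical: one shared set of tests, with per-browser expectations
// kept in a separate chart. Everything not listed defaults to PASS.
const tests = ['dom/getBoundingClientRect.html', 'css/counters.html'];

const expectations = {
  firefox: { 'css/counters.html': 'FAIL' }, // known bug, expected to fail
  webkit:  {},                              // everything expected to pass
};

// Look up what a given browser expects for a test.
function expectedResult(browser, test) {
  return (expectations[browser] || {})[test] || 'PASS';
}

console.log(expectedResult('firefox', 'css/counters.html')); // "FAIL"
console.log(expectedResult('webkit', 'css/counters.html'));  // "PASS"
```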

Another approach would be to have a central repository of tests that
browsers can submit to, and copy out of. This way every browser runs a
fork of the suite that contains the tests they try to pass, while there
is a central repository that tracks overall status.

/ Jonas

Sylvain Pasche

Mar 3, 2009, 7:13:36 AM
Jonas Sicking wrote:
> Hi Sylvain,
>
> This is awesome. Having a standardized test suite that all browsers can,
> and do, submit tests to will be a great way to reduce bugs and increase
> compatibility.

Thanks for your feedback.

> We definitely need to figure out a way to disable sets of tests that use
> features that aren't supported by all browsers though, in order to keep
> browser vendors (Mozilla included) from ignoring failing tests on the
> assumption that it's probably just another missing feature.

So you mean flagging tests for a given browser as not implemented, so
that it's possible to distinguish between unimplemented, failing, and
passing?

> One thing that I've heard of other test suites doing is separating the
> tests from the information about if the test is expected to pass or
> fail. This way you can have one set of tests for all browsers, and then
> a separate 'pass/fail' chart per browser.

Yes, that would be a useful feature to have. There could be two result
statuses for each test: the "conformance" result and the regression result.
For instance, the Mochitest todo() assertions mark tests as
failing in the current browsertests.org results. The idea would be to
store that information outside of the test, and have a way to identify a
given assertion and mark it as todo().

> Another approach would be to have a central repository of tests that
> browsers can submit to, and copy out of. This way every browser runs a
> fork of the suite that contains the tests they try to pass, while there
> is a central repository that tracks overall status.

That would be nice. Ideally the central repository would also keep the
tests and the pass/fail information separate, so that it's easier to
manage the forks (if there need to be any).

Note that if test suites adopt test case formats that can be automated,
they could be integrated automatically into the central repository. That
could be the case for the future HTML5 test suite, for instance.


Sylvain

Jonas Sicking

Mar 5, 2009, 6:09:54 PM
to Sylvain Pasche
Sylvain Pasche wrote:
> Jonas Sicking wrote:
>> Hi Sylvain,
>>
>> This is awesome. Having a standardized test suite that all browsers
>> can, and do, submit tests to will be a great way to reduce bugs and
>> increase compatibility.
>
> Thanks for your feedback.
>
>> We definitely need to figure out a way to disable sets of tests that
>> use features that aren't supported by all browsers though, in order to
>> keep browser vendors (Mozilla included) from ignoring failing tests on
>> the assumption that it's probably just another missing feature.
>
> So you mean flagging tests for a given browser as not implemented, so
> that it's possible to distinguish between unimplemented, failing, and
> passing?

Or flag it as a "known bug". It's very important that we don't mix "this
is something we know we don't do correctly yet" with "this is a recent
regression that we didn't use to have".
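That distinction falls out naturally once results are compared against per-browser expectations. A sketch (with invented status names) of combining the observed result with the expectation, so a known bug is never reported the same way as a fresh regression:

```javascript
// Hypothetical classification: combine the per-browser expectation
// ('PASS' or 'FAIL') with the observed result of a test run.
function classify(expected, observed) {
  if (observed === 'PASS') {
    // An expected failure that now passes may mean the bug was fixed
    // and the expectation should be updated.
    return expected === 'PASS' ? 'pass' : 'unexpected-pass';
  }
  // observed === 'FAIL'
  return expected === 'FAIL' ? 'known-bug' : 'REGRESSION';
}

console.log(classify('PASS', 'FAIL')); // "REGRESSION" - needs attention now
console.log(classify('FAIL', 'FAIL')); // "known-bug"  - tracked, not alarming
```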

>> One thing that I've heard of other test suites doing is separating the
>> tests from the information about if the test is expected to pass or
>> fail. This way you can have one set of tests for all browsers, and
>> then a separate 'pass/fail' chart per browser.
>
> Yes, that would be a useful feature to have. There could be two result
> statuses for each test: the "conformance" result and the regression result.
> For instance, the Mochitest todo() assertions mark tests as
> failing in the current browsertests.org results. The idea would be to
> store that information outside of the test, and have a way to identify a
> given assertion and mark it as todo().

Yup. We'd probably have to move the is() vs. todo() distinction outside
of the test so that one browser can flag a test as 'expected to pass'
while another flags it as 'expected to fail'.

/ Jonas
