Test Automation


b...@bclary.com

May 6, 2006, 12:29:54 AM
As some of you may already know, I have been working on automating test
execution for Firefox for some time but have not made the project or
work public before now.

Although I have solutions for automating the execution of the JavaScript
Test Library as well as the W3C DOM Test Suite, the approaches I have
taken have been driven by the specific task at hand and aren't
necessarily the right approaches to use when this becomes part of the
Mozilla code base. In fact, the scripts are a collection of hacks and
should be considered (if at all) as starting points to be redesigned and
refined.

This thread, and the follow-up threads, are intended to introduce what I
have been using to the community and obtain consensus on the next steps
to take to make this available to everyone.

The goal is to get the discussion started, determine the basic questions
that must be answered, get answers quickly, file bugs and get code
checked into the tree as soon as possible so that everyone can use and
contribute to automated testing.

Ownership of this project is open to those with the skills, time and
motivation to step up and drive it. If no one else steps up, I'll
continue to muddle along with your help.

Please use this thread for general discussion and the follow-up threads
for the specific topics. I will also be announcing this on my blog in
order to help drive more people to the discussion.

Follow-up threads:

* Execution Environment
* Directory Paths | CVS Locations
* Nightly Build Installation
* Building
* Profile Creation and Initialization
* Test Invocation
* Result Reporting

Bob Clary
b...@bclary.com
http://bclary.com/

b...@bclary.com

May 6, 2006, 12:37:51 AM
Execution Environment

I have been using a bash command shell for running tests on Windows
XP/Windows Server 2003 (using Cygwin), Red Hat Enterprise Linux, and Mac
OS X.

I feel that this environment is rich enough to allow bash scripts, perl
scripts, python scripts, etc. as well as C/C++ programs to drive tests.

Unless someone strenuously objects, I consider this decision a done deal.
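
As a concrete illustration, the kind of cross-platform branching such bash
scripts rely on can be sketched like this (the function and variable names
are hypothetical, not taken from my scripts):

```shell
#!/bin/bash
# Normalize the platform name so one driver script can branch per OS.
# Hypothetical sketch; not taken from the actual test scripts.
detect_os() {
  case "$(uname -s)" in
    CYGWIN*|MINGW*) echo "win32" ;;
    Linux)          echo "linux" ;;
    Darwin)         echo "mac" ;;
    *)              echo "unknown" ;;
  esac
}

OS=$(detect_os)
echo "running tests on $OS"
```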

/bc


Bob Clary

May 6, 2006, 1:15:34 AM
Directory Paths | CVS Locations

The obvious location for the test automation scripts/programs is under
<http://lxr.mozilla.org/seamonkey/source/tools/>. Unfortunately, the
approach I have been using hasn't been designed with this choice in mind.

It is not clear to me what the best organization would be and I am
looking for guidance from those of you with experience in creating such
hierarchies in the cvs tree.

One thing I am certain we should have is a standard "bin"-like
directory where shared scripts can be placed for reuse: for example,
perl modules that handle running external programs, reporting exit
codes, terminating long-running programs, etc. There is similar code in
the tinderbox scripts, but it isn't really designed to be reused
outside of that environment. I recently had to recreate such code for
the JavaScript Test Library's jsDriver.pl, and I am sure other test
suites could use the same functionality.

As a starting point for the discussion, you can look at the directory
structure I have been using at <http://test.bclary.com/>.

bin/
I have been using this as a dumping ground for common files and
scripts, but with proper planning this could be a repository of
scripts which all test suites could reuse.

components/
I have been using this directory to contain files which are to
be copied into the build's installation components directory.
It is organized into subdirectories by operating system.

This is currently an all-or-nothing proposition. Again, this
could|should be improved.

data/
I have been driving test execution using environment variables
which specify parameters such as the product, installation
directory, etc. This directory contains files listing environment
variable assignments, one per line.

I am not entirely happy with this use of environment variables
and feel there must be a better way. With that said, this is
a very easy way to set up test environments which are easily
reused.
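
Loading such a file takes only a couple of lines of bash. This sketch
(the data file name and contents are made up) shows one way, exporting
every assignment so child processes see them:

```shell
#!/bin/bash
# Write a throwaway data file of VAR=value assignments, one per line,
# then load it into the environment. File name and values are made up.
cat > /tmp/firefox-opt.data <<'EOF'
TEST_PRODUCT=firefox
TEST_DIR=/tmp/test/firefox-opt
EOF

set -a                   # auto-export every assignment that follows
. /tmp/firefox-opt.data
set +a

echo "product=$TEST_PRODUCT dir=$TEST_DIR"
```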

plugins/
As with the components directory, this directory contains files
which are to be copied into the build's installation directory.
It is organized into subdirectories by operating system.

This is currently an all-or-nothing proposition. Again, this
could|should be improved.

prefs/
This directory contains user.js pref files which are selected
via the environment variable TEST_USERPREFS and then copied into
the profile before the test is executed.

profiles/
Although it is not shown on the test.bclary.com site, I also
use a profiles directory for use as templates for specific
profiles since it is sometimes (think Thunderbird) necessary
to have pre-created profiles available.

results/
This directory is a mirror image of the tests/ directory and
is used to collect log files from tests.

talkback/
This directory contains Talkback.ini files which are used to set
a default email address for talkback reports and a url which
appears in the internal talkback incident reports.
It is organized into subdirectories by operating system.

tests/
This directory contains the tests nominally organized by
organization. On test.bclary.com, this currently contains
mozilla.org/js/ and w3.org/2001/DOM-Test-Suite/.

Tests would not necessarily have to physically live in this
directory. For example, neither the JavaScript Test Library nor
the DOM Test Suite lives here.

xpi/
This directory contains extensions which are to be installed
into the build. Currently the test environment uses global
installations and will install all extensions in the all/
subdirectory as well as any extension found in the operating
system specific subdirectory.

This is currently an all-or-nothing proposition. Again, this
could|should be improved.

/bc

Bob Clary

May 6, 2006, 1:38:14 AM
Nightly Build Installation

I use <http://test.bclary.com/bin/install-build.sh> to download and
install builds of Firefox and Thunderbird to /tmp/test/.

usage:

install-build.sh -p product -u url -d directory -f filename

product firefox|thunderbird
url url to download the build from
directory subdirectory of /tmp/test/ where the build is installed
filename filename under which to store the downloaded build

This script will download the build from the specified url to the file
/tmp/test/$filename, optionally unpacking it and installing the build to
/tmp/test/$directory/. The filename is necessary since for release
installations, the actual file name is not available from the download url.

This script will handle Firefox and Thunderbird on Windows (zip,
installer), Linux (zip, bz2, installer) and Mac OS X (dmg).

I use subdirectories of /tmp/test/ since I am paranoid about
accidentally doing an rm -fR /. ;-)

/bc

Bob Clary

May 6, 2006, 1:57:45 AM
Building

The obvious choice for building Firefox and Thunderbird for testing
purposes is to use Tinderbox; however, I feel this is not the right choice.

* Tinderbox is a mission critical application and is not easy to install
or administer. I initially tried to set up Tinderbox and gave up. Making
a tester install Tinderbox in order to run the automated tests is a no
go in my opinion.

* Modifying Tinderbox is not trivial and is not something to be
considered lightly.

* Tinderbox doesn't currently build Debug builds which are critical for
some testing scenarios.

* Simple scripts can be used to perform the necessary steps for building
without the overhead of Tinderbox.

I currently use an environment which is identical across my local
Windows XP workstation and laptop and the machines in the QA server farm
which consists of Red Hat Enterprise Linux, Windows Server 2003 and Mac
OS X machines.

Since it is not recommended to build under Windows in your Cygwin home
directory, I have placed everything in a special directory /work off of
the root directory. The build trees are actually in
/work/mozilla/builds/ff/$rv/ where rv is the gecko branch.

Each of my build trees contains:

Configuration files for each type of build and operating system:

.mozconfig-debug-linux
.mozconfig-debug-mac
.mozconfig-debug-win32
.mozconfig-linux
.mozconfig-mac
.mozconfig-win32

Cross platform (Windows, Linux, Mac) scripts (see attached)

build.sh
checkout.sh
clean.sh
set-firefox.sh

These scripts don't handle the case where the tree has not been
initially checked out.

I am certain this approach can and should be improved, but I think
keeping it simple and modular is important.

/bc


build.sh
checkout.sh
clean.sh
set-firefox.sh

Bob Clary

May 6, 2006, 2:14:05 AM
Profile Creation and Initialization

I use <http://test.bclary.com/bin/init-profile.sh> to create and
initialize profiles in the /tmp/test/ directory. Again I use /tmp/test/
to prevent disasters such as rm -fR /.

usage:

init-profile.sh -p product -b binary -n name -t template -e extensions \
-u user -i talkbackid

product firefox|thunderbird

binary path to program binary

name profile name and subdirectory of /tmp/test/
where the profile is installed

template optional location of a template profile to be used. If
a template is not specified, a new profile is created.

extensions path to directory containing xpis to be installed

user path to a user.js prefs file to be used (optional)

talkbackid identifier to be placed in the Talkback.ini file in the URL
to track the test.

This script needs to be refactored since it:

* creates the profile (and optionally copies a template onto the
profile).

* copies the specified user.js preferences file into the profile.

* edits the Talkback.ini file

* globally installs the specified extensions.

* patches the start up scripts for Linux and Mac OS X.

* starts the application twice to get around the problem of installing
extensions when NO_EM_RESTART is set.

* starts Spider <http://bclary.com/projects/spider/>. I use Spider for
many tasks, but it should not be a requirement for a test to run.
It is less of a problem recently, but getting an extension installed
and initialized so that it can be used without hanging the application
can be a problem.
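
For reference, the first two steps (create the profile, copy in the
prefs) can be sketched as a dry run. Firefox's -CreateProfile argument
is real, but every path and file name below is hypothetical, and run()
only prints the commands instead of executing them:

```shell
#!/bin/bash
# Dry-run sketch of the start of an init-profile.sh-style script.
# run() only records and prints each command; swap in "$@" to execute.
cmds=""
run() { cmds="$cmds$*"$'\n'; echo "+ $*"; }

binary=/tmp/test/firefox/firefox     # hypothetical install location
name=qa-profile
dir=/tmp/test/$name
userjs=prefs/user.js-no-updates      # hypothetical prefs file

run "$binary" -CreateProfile "$name $dir"   # create the named profile
run cp "$userjs" "$dir/user.js"             # seed it with test prefs
```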

/bc

Andrew Schultz

May 6, 2006, 2:39:55 AM
Bob Clary wrote:
> Building
>
> The obvious choice for building Firefox and Thunderbird for testing
> purposes is to use Tinderbox however I feel this is not the right choice.
>
> * Tinderbox is a mission critical application and is not easy to install
> or administer. I initially tried to set up Tinderbox and gave up. Making
> a tester install Tinderbox in order to run the automated tests is a no
> go in my opinion.

Did you have trouble with tinderbox client or tinderbox server? I set
up a tinderbox client without much trouble. Some online docs would help
to make it easier. Setting up a tinderbox server is more difficult.

> * Modifying Tinderbox is not trivial and is not something to be
> considered lightly.
>
> * Tinderbox doesn't currently build Debug builds which are critical for
> some testing scenarios.

Not usually, but it certainly can. balsa on the Firefox tree builds
with debug enabled.

> * Simple scripts can be used to perform the necessary steps for building
> without the overhead of Tinderbox.

Yes. But in addition to building (continuously), tinderbox also
provides infrastructure to perform tests and report the status/test
results to a server so its progress can be monitored by a wider audience
(like the developers who just committed patches).

Tinderbox also already handles Firefox, Thunderbird, SeaMonkey and a
laundry list of platforms.

But perhaps those things are not so important for what you have
envisioned. It would be great if the existing tinderboxes could run
more automated tests so developers would be aware of breakage they cause
more quickly, but maybe this would work better for a single person to
use to run / monitor a bunch of tests on a single build.

--
Andrew Schultz
ajsc...@verizon.net
http://www.sens.buffalo.edu/~ajs42/

Bob Clary

May 6, 2006, 3:04:51 AM
Test Invocation

Makefiles and stuff
===================
The approach I have used is to create a GNU Makefile called Maketests
for each test suite. An example can be found at
<http://test.bclary.com/tests/mozilla.org/js/Maketests>. Ideally, this
makefile should be runnable without any external setup or external
dependencies except for perhaps dependencies on the common "bin"
directory I mentioned in "Directory Paths | CVS Locations".

This Maketests file could do anything in principle such as building test
programs or launching the browser to execute tests.

To manage a collection of such tests, I use
<http://test.bclary.com/Makemake>,
<http://test.bclary.com/bin/generate-targets.pl> and
<http://test.bclary.com/bin/generate-targets.sh> to create a top-level
makefile <http://test.bclary.com/Makefile> which contains targets for
each test suite's targets.

Each target in the top-level Makefile is constructed by converting the
slashes and dots in the path to the test suite's Maketests file into
underscores and appending the target in the Maketests file. For example,
the target all in the tests/mozilla.org/js/Maketests file would be
represented in the top-level Makefile by the target tests_mozilla_org_js_all.
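
The mangling itself is a one-liner; a sketch (the function name is
hypothetical):

```shell
#!/bin/bash
# Convert a Maketests path plus target into a top-level Makefile target:
# slashes and dots become underscores, then the target is appended.
path_to_target() {
  printf '%s_%s\n' "$(printf '%s' "$1" | tr './' '__')" "$2"
}

path_to_target tests/mozilla.org/js all   # prints tests_mozilla_org_js_all
```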

Assuming the environment variables are set, the build installed|built,
and the profile created and initialized, a test can be executed simply
by making the appropriate target.

To manage setting up the environment and kicking off the tests,
I use a script <http://test.bclary.com/bin/test.sh>

usage: test.sh datafile target1 target2...

where datafile points to a file in <http://test.bclary.com/data/> which
contains the necessary environment variable definitions and target1
target2 ... are the targets in the top-level Makefile to execute.
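
The overall shape of such a driver might look like this sketch (make is
stubbed out so the example stands alone, and the datafile contents are
made up):

```shell
#!/bin/bash
# Sketch of a test.sh-style driver: load the datafile's environment
# assignments, then make each requested target.
make() { echo "make $*"; }   # stub for this sketch; remove to use real make

run_tests() {
  local datafile=$1; shift
  set -a; . "$datafile"; set +a    # export the test parameters
  local target
  for target in "$@"; do
    make -f Makefile "$target"
  done
}

printf 'TEST_PRODUCT=firefox\n' > /tmp/demo.data
run_tests /tmp/demo.data tests_mozilla_org_js_all
```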

Lately I have hidden the complexity of using test.sh by creating custom
scripts such as
<http://test.bclary.com/tests/mozilla.org/js/javascript.sh> which can be
used to test multiple builds, branches and targets.

Spider as a framework
=====================
I use Spider <http://bclary.com/projects/spider/> to execute specific
chunks of JavaScript on web pages. Its "userhook" scripts can be used to
drive test suites such as JsUnit (See
<http://test.bclary.com/tests/w3.org/2001/DOM-Test-Suite/userhook.js>)
or the JavaScript Test Library (See
<http://test.bclary.com/tests/mozilla.org/js/userhook-js.js>).

Spider can be invoked as a chrome XUL application from the command line
using a _properly_ encoded url to initialize it, specify the userhook,
start, and quit. As such it has been a very useful building block. It is
not perfect and could be improved. Also, it is a descendant of CSpider
<http://devedge-temp.mozilla.org/toolbox/examples/2003/CSpider/index_en.html>
which is tri-licensed under MPL, GPL and LGPL but the original copyright
is owned by Netscape. It probably should be rewritten _by someone else_
to remove any of its original heritage.

I also use <http://test.bclary.com/bin/spider.pl> to sequentially start
Spider, load a url from a file and exit.

For example, the browser based JavaScript Test Library is executed by
spider.pl which starts Spider for each of the urls in a given file (such
as <http://test.bclary.com/tests/mozilla.org/js/js-list.txt>) with a
specified userhook script (such as
<http://test.bclary.com/tests/mozilla.org/js/userhookeach-js.js>).

/bc

Bob Clary

May 6, 2006, 3:44:26 AM
Result Reporting

Currently, for the most part, I use dump and redirect stdout and stderr
to log files for later processing using grep|sed|...

Ideally this would still be available, but it should be complemented for
individual testers by a nice interactive XUL-based UI such as those
found in the xslt tests
<http://lxr.mozilla.org/mozilla/source/content/xslt/tests/XSLTMark/> or
foxunit
<http://www.allpeers.com/blog/2005/09/28/foxunit-unit-test-framework-for-firefox/>.

For "official" tests, a web service where results could be posted to a
database for later retrieval and analysis is a requirement.

Much work remains on how to properly handle, present and analyze test
output.

Examples of current output
==========================
(BE CAREFUL, SOME OF THESE FILES ARE HUGE!)

JavaScript Test Library
-----------------------
<http://people.mozilla.com/~bclary/results/mozilla.org/js-public/>

A single run of the JavaScript Test Library for a given build consists
of running the tests in the JavaScript Shell as well as running them in
the browser. The overall log contains a text version of the test results
for both the browser and shell
(<http://people.mozilla.com/~bclary/results/mozilla.org/js-public/2006-05-03-04-49-24-firefox-1.5.0-opt-1.8.0.4_2006050222-plum.mozilla.org.log>)
and the normal jsDriver.pl html output such as
<http://people.mozilla.com/~bclary/results/mozilla.org/js-public/2006-05-03-04-49-24-firefox-1.8.0.4_2006050222-opt-plum.mozilla.org-e4x.html>
and
<http://people.mozilla.com/~bclary/results/mozilla.org/js-public/2006-05-03-04-49-24-firefox-1.8.0.4_2006050222-opt-plum.mozilla.org-js.html>.

When reviewing the JavaScript Test Library results, I typically combine
the overall logs for several different builds in order to easily compare
changes over time and from build to build.

*** WARNING **** this file is 161 Megabytes **** WARNING ****
An example is
<http://people.mozilla.com/~bclary/results/mozilla.org/js-public/js-2006-05-03.log>
*** WARNING **** this file is 161 Megabytes **** WARNING ****

If you want to review this file, I recommend downloading it to disk,
then reviewing it using less -S. Once the file is loaded, you can look
for failures with the regular expression search
/.*(CRASHED|TIMED OUT|result: FAIL).*
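
The same search works as a grep one-liner over a downloaded log; here is
a sketch against a few fabricated sample log lines:

```shell
#!/bin/bash
# Fabricated sample of log lines, then the failure filter from above.
cat > /tmp/js-sample.log <<'EOF'
ecma/Array/15.4.1.1.js result: PASS
js1_5/Regress/regress-1234.js result: FAIL
e4x/XML/13.4.4.1.js CRASHED
EOF

grep -E 'CRASHED|TIMED OUT|result: FAIL' /tmp/js-sample.log
```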


DOM Test Suite
--------------
<http://people.mozilla.com/~bclary/results/w3.org/2001/DOM-Test-Suite/>

A normal run of the DOM Test Suite runs each level (1, 2, 3) and each
feature (Core, HTML, ...) for each supported "builder" and mime type and
writes the result to a raw log file such as
<http://people.mozilla.com/~bclary/results/w3.org/2001/DOM-Test-Suite/2006-05-04-09-37-10-firefox-1.5.0-opt-1.8.0.4_2006050222-pear.mozilla.org.log>.

To make the results more readable, I filter the log down to just the
test results, as in
<http://people.mozilla.com/~bclary/results/w3.org/2001/DOM-Test-Suite/2006-05-04-09-37-10-firefox-1.5.0-opt-1.8.0.4_2006050222-pear.mozilla.org.log.summary>.

/bc

Bob Clary

May 6, 2006, 3:58:33 AM
to Andrew Schultz
Andrew Schultz wrote:
>
> Did you have trouble with tinderbox client or tinderbox server? I set
> up a tinderbox client without much trouble. Some online docs would help
> to make it easier. Setting up a tinderbox server is more difficult.
>

It's been such a long time that I don't remember exactly what I tried
to do. Probably the server.

>>
>> * Tinderbox doesn't currently build Debug builds which are critical
>> for some testing scenarios.
>
> Not usually, but it certainly can. balsa on the Firefox tree builds
> with debug enabled.
>

It can since it is a relatively simple configuration change, but from
experience it is difficult to get done. I asked for debug builds to be
produced back in 2004 and am still waiting... ;-)

>> * Simple scripts can be used to perform the necessary steps for
>> building without the overhead of Tinderbox.
>
> Yes. But in addition to building (continuously), tinderbox also
> provides infrastructure to perform tests and report the status/test
> results to a server so its progress can be monitored by a wider audience
> (like the developers who just committed patches).
>

Tinderbox may be the solution, I don't know.

The tests that run on the current tinderboxes are fairly limited in
scope and time. Running the full JavaScript Test Library or the DOM Test
Suite can take several hours which is quite different from the current
situation. Other tests that I run can take _days_.

Also, from what I hear from the people who deal with Tinderboxes, it is
not that pleasant an experience and its demise is sincerely wished by many.

The reporting of the results to a wider audience is part of the overall
test automation project. But I am not sure that emailing log files
around the way Tinderbox does is the right solution.

> Tinderbox also already handles Firefox, Thunderbird, SeaMonkey and a
> laundry list of platforms.
>

True and it is something worth considering. I didn't rule it out and am
open to suggestions.

> But perhaps those things are not so important for what you have
> envisioned. It would be great if the existing tinderboxes could run
> more automated tests so developers would be aware of breakage they cause
> more quickly, but maybe this would work better for a single person to
> use to run / monitor a bunch of tests on a single build.
>

That is the goal of the project, but it doesn't necessarily require
Tinderbox to do it. As the automation stands now, it could automatically
download and install builds from the existing tinderboxes and execute
tests without blocking the tinderbox from completing the next build
while the tests run.

I don't know the Tinderbox code and perhaps it is the best solution for
the automatic building and launching of tests. I'll leave that up to
others to decide.

/bc

Axel Hecht

May 6, 2006, 9:31:30 AM

IMHO, we should try to split building and testing onto different
machines. (Startup tests aside.)

A build environment has other restrictions and demands than a test
environment, especially when it comes down to perf tests.

When we're talking about long running tests, it's pretty important that
we have "as you go" logs, which tinderbox (currently?) doesn't offer.
This is one of the known RFEs for the build farm, so I guess that a
bunch of these questions will be answered by getting our build farm into
a good state already.

Axel

makz...@gmail.com

May 6, 2006, 3:09:01 PM
I'm not sure if this is what you're really looking for, but for running
repetitive tests in the UI itself, I think Dogtail
(http://people.redhat.com/zcerza/dogtail/) may come in handy. It seems
to be designed specifically to push an app's UI around in a scripted
manner, and could be very useful for finding bugs, performance
regressions, etc.

Bob Clary

May 6, 2006, 4:50:41 PM
to makz...@gmail.com

We looked at it a while back and found it very interesting. The
requirements <http://people.redhat.com/zcerza/dogtail/downloads.html>
seemed to rule out a cross platform (Windows, Mac OS X) solution
although I could be wrong about that. It is GPL'd, so perhaps it could
be modified to work cross platform. Does anyone know how to get this
running on all three major platforms?

Using the accessibility features of the browser to drive tests has been
a recurring theme and is something we could use help with.

dogtail is one example of a number of approaches where the browser UI is
driven by an external program, and as such it would fit into the
category (in my opinion) of any number of test programs or test drivers
which we might want to plug into the overall test automation.

For example, someone might want to develop a set of UI tests using
dogtail, while someone else might use EggPlant, another person JsUnit
to test JavaScript APIs, and yet another might want to kick off the
Layout tests, etc.

What I am trying to say is that I think dogtail (or something like it)
would be very useful, but not the full solution. And we would need help
getting it into a form we would find useful. Volunteers?

/bc

Allan Beaufour

May 8, 2006, 7:43:35 AM
to mozilla.d...@lists.mozilla.org, dev-q...@lists.mozilla.org
On 5/6/06, b...@bclary.com <b...@bclary.com> wrote:
> This thread, and the follow-up threads, are intended to introduce what I
> have been using to the community and obtain consensus on the next steps
> to take to make this available to everyone.

It's good to see! I guess I am not the only one who has been wanting
this for a while.

> The goal is to get the discussion started, determine the basic questions
> that must be answered, get answers quickly, file bugs and get code
> checked into the tree as soon as possible so that everyone can use and
> contribute to automated testing.

My personal aim in this is to get automated testing of XForms up and
running. On a general level, I guess the needs of XForms should not be
different from those of, e.g., XHTML, as it is also page oriented: "get
the browser to open a series of pages, run (JS) tests, and report errors
to the user/a server"

"make -f client.mk; make tests/xforms; less report.xml"
and I would be a truly happy man :)

Apart from that, there are some specific issues with regard to
submission: we need some sort of server that can handle submissions
of XForms, possibly alter the content, and send it back to the
browser. But that also goes for XHTML submit, except that XForms is
mostly XML.

--
... Allan

Bob Clary

May 8, 2006, 1:07:36 PM
Allan Beaufour wrote:
>
> My personal aim in this is to get automated testing of XForms up and
> running. On a general level I guess XForms needs should not be
> different than f.x. XHTML. As it is also page oriented: "get the
> browser to open a series of pages, run (JS) tests, and report errors
> to the user/a server"
>
> "make -f client.mk; make tests/xforms; less report.xml"
> and I would be a truly happy man :)

So would I.

>
> Except from that there are some specific issues with regards to
> submission, where we need some sort of server that can handle
> submissions of XForms, possibly alter the content, and send it back to
> the browser. But again that also goes for XHTML submit, except that
> XForms is mostly XML.

I can see the need for a service similar to this for a variety of
tests: (X)HTML|XForms|XMLHttpRequest submission. While setting up
something like this for use by "official" tests probably isn't a huge
task, it would need IT support and probably wouldn't be something that
could be exposed to the general community. I think any of these types
of requirements will need to be supported on developers' workstations
in some fashion so they are not dependent on IT.

I would think that all this particular service would be required to do
is echo back the submitted data. Is that correct?

Simple cgi programs written in Perl would be candidates for implementing
such services since they could easily be run on the same web servers
where the tests will be hosted which could be at MoCo or your own local
web servers.
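
A minimal "echo" service is tiny in any CGI language. This sketch does
it in sh (the CGI environment is simulated so the sketch runs
standalone; a perl version would read STDIN the same way):

```shell
#!/bin/bash
# Echo CGI sketch: send the POST body straight back to the client.
echo_cgi() {
  printf 'Content-Type: text/xml\r\n\r\n'
  head -c "${CONTENT_LENGTH:-0}"   # the server sets CONTENT_LENGTH
}

# Simulate what the web server would do for one POST request.
body='<data><value>1</value></data>'
CONTENT_LENGTH=${#body}
printf '%s' "$body" | echo_cgi
```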

I have been using Apache on Linux, Mac OS X and Windows/Cygwin as a
cross-platform web server. I think that its availability and similar
setup/configuration make it the only choice. Is there anything we
couldn't do in such an environment?

Bob

Bob Clary

May 8, 2006, 1:20:58 PM
What do people think of using Atom as format and API for submitting
results to the test result database and distributing results back to
those who are interested?

The "test data" could be contained in the Atom "content" element and
could be text, XHTML or vanilla XML.

The API has the kind of functionality (add, modify, delete) we would need.

It is an IETF standard and has a variety of implementations we might be
able to use without having to implement it ourselves.

I would envision that as the test ran it would submit a sequence of posts:

test run start
test result 1
test result 2
...
test result N
test run stop
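
To make that concrete, one "test result" post might carry an entry like
the sketch below. The element layout and the result markup are my
assumptions, not a settled schema, and a real harness would POST this
to the results service rather than print it:

```shell
#!/bin/bash
# Build a hypothetical Atom entry for a single test result.
result_entry() {
  cat <<EOF
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>$1</title>
  <updated>$(date -u +%Y-%m-%dT%H:%M:%SZ)</updated>
  <content type="xml"><result test="$1" status="$2"/></content>
</entry>
EOF
}

result_entry ecma/Array/15.4.1.1.js PASS
```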

/bc

Axel Hecht

May 8, 2006, 7:01:28 PM

Sounds like yet another argument for a testing sandbox. I doubt that we
really want to audit those services, so if we had a sandbox to test in,
that server could be exposed to the test-running machines only.

Axel

Dave Liebreich

May 8, 2006, 7:18:01 PM
Bob Clary wrote:

> Simple cgi programs written in Perl would be candidates for implementing
> such services since they could easily be run on the same web servers
> where the tests will be hosted which could be at MoCo or your own local
> web servers.

I'd prefer a simple web server (HTTP::Server::Simple) bound to a local
port and started/stopped by the test harness, rather than a centralized
web server. With the simple option, maintaining the "cgi" is much
easier since you don't have to worry about server permissions.
--
Dave Liebreich
Test Architect, Mozilla Corporation

Bob Clary

May 8, 2006, 7:46:01 PM

I think you misunderstood me. I am not advocating the universal use of
centralized web servers to serve tests. I do think there should be
centralized results gathering and reporting servers though.

Currently each test machine hosts a copy of the tests and has a local
virtual web server, test.mozilla.com at 127.0.0.1 (along with several
other virtual web servers used for other purposes), which it uses to
serve the tests to itself. With the exception of performance tests,
where you may not want any extra services running on the machine
executing the tests, I think this is the best approach. It partitions
each test environment, removes dependencies upon other machines, and
allows developers to set up and run the tests on their local machines
using identical setups.

There will be a need for special server environments which would
possibly be better hosted at MoCo due to special or bizarre
configurations, but those aren't the normal case.

I don't know anything about HTTP::Server::Simple, but if it can do
everything that is required, I don't see why not; then again, I don't
see the advantage over Apache either. I would prefer to have a single
type of server that is easy to set up and maintain, which is why I use
Apache on all of the machines.

/bc

Allan Beaufour

May 9, 2006, 11:10:01 AM
to Bob Clary, dev-q...@lists.mozilla.org
On 5/8/06, Bob Clary <b...@bclary.com> wrote:

> Allan Beaufour wrote:
> > Except from that there are some specific issues with regards to
> > submission, where we need some sort of server that can handle
> > submissions of XForms, possibly alter the content, and send it back to
> > the browser. But again that also goes for XHTML submit, except that
> > XForms is mostly XML.
>
> I would think that all this particular service would be required to do
> is echo back the submitted data. Is that correct?

Yes, for most cases that should be enough.

Some transformation mechanisms are needed too, though. For example,
for REST-type tests, a simple "convert uri params to XML" would be
good. Some other transformations from format Y to XML are also needed
for XForms (as XForms can submit in many formats, but expects XML
back). But that should all be pretty easy to implement.
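
For instance, the uri-params-to-XML transformation could start as small
as this sketch (no url-decoding or escaping, which a real service would
need):

```shell
#!/bin/bash
# Turn a query string into trivial XML, one element per parameter.
query_to_xml() {
  printf '<params>\n'
  printf '%s\n' "$1" | tr '&' '\n' | while IFS='=' read -r k v; do
    printf '  <%s>%s</%s>\n' "$k" "$v" "$k"
  done
  printf '</params>\n'
}

query_to_xml 'id=7&mode=echo'
```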

> Simple cgi programs written in Perl would be candidates for implementing
> such services since they could easily be run on the same web servers
> where the tests will be hosted which could be at MoCo or your own local
> web servers.

That is a good idea. It also solves cross-domain issues if everything
is running on the same host.

> I have been using Apache on Linux, Mac OS X and Windows/Cygwin as a
> cross-platform web server. I think that its availability and similar
> setup/configuration make it the only choice. Is there anything we
> couldn't do in such an environment?

Hmm, not at first thought. Except for cross-domain stuff? :)

--
... Allan

Dave Liebreich

May 9, 2006, 12:32:09 PM
Bob Clary wrote:

> I think you misunderstood me. I am not advocating the universal use of
> centralized web servers to serve tests. I do think there should be
> centralized results gathering and reporting servers though.

I agree. I did not think we were discussing results gathering and
reporting.

>
> Currently each test machine hosts a copy of the tests and has a local
> virtual web server test.mozilla.com at 127.0.0.1

This means that a developer who wants to run these tests on his or her
own machine must first configure Apache. That might be too much to ask.

>
> There will be a need for special server environments which would
> possibly be better hosted at MoCo due to special or bizarre
> configurations, but those aren't the normal case.

I'd like to see reduced test cases that embody the defect and can be
reproduced with a captive web server (like one based on
HTTP::Server::Simple). We can and should still use the full end-to-end
stuff in our hosted test environments so we can find other bugs.

>
> I don't know anything about HTTP::Server::Simple but if it can do
> everything that is required, I don't see why not but then I don't see
> the advantage over Apache either. I would prefer to have a single type
> of server that is easy to setup and maintain which is why I use Apache
> on all of the machines.

The advantage of a captive web server is that it requires zero setup and
maintenance by the person running the test.

Do you know of a particular test case or bug # that I could use as an
example and build a captive-web-server-based test script around?
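For readers unfamiliar with the idea, a captive web server in the HTTP::Server::Simple style can be sketched in a few lines of Python (shown only as an equivalent illustration; all names here are invented): it serves a test directory on an ephemeral localhost port with zero configuration and shuts down when the test finishes.

```python
import contextlib
import threading
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

class QuietHandler(SimpleHTTPRequestHandler):
    def log_message(self, *args):
        pass  # suppress per-request logging during a test run

@contextlib.contextmanager
def captive_server(directory):
    """Serve `directory` on an ephemeral localhost port for the
    duration of a test, yielding the base URL, then shut down."""
    handler = partial(QuietHandler, directory=directory)
    server = ThreadingHTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        yield "http://127.0.0.1:%d/" % server.server_address[1]
    finally:
        server.shutdown()
```

A test script would wrap its browser invocation in `with captive_server(testdir) as base:` and point the browser at `base`, with no Apache setup on the developer's machine.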

Bob Clary

May 9, 2006, 12:46:28 PM
Dave Liebreich wrote:

>>
>> Currently each test machine hosts a copy of the tests and has a local
>> virtual web server test.mozilla.com at 127.0.0.1
>
> This means that a developer who wants to run these tests on his or her
> own machine must first configure Apache. That might be too much to ask.
>

Could be too much. Let's see what they say here ;-)

Another issue that I left out completely that might require centralized
servers is dealing with mail|news.

>>
>
> The advantage of a captive web server is that it requires zero setup and
> maintenance by the person running the test.
>
> Do you know of a particular test case or bug # that I could use as an
> example and build a captive-web-server-based test script around?
>

The current JS browser and DOM tests require a web server. Nothing
special about the web server requirements for the JS browser tests, but
the DOM tests require SVG MIME types.
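If a captive Python server were used, the SVG MIME type requirement could be met by registering the type on the handler rather than editing server configuration. A sketch (the class name is made up, and the extra XHTML entry is an assumption about what the DOM tests might also want):

```python
from http.server import SimpleHTTPRequestHandler

class DOMTestHandler(SimpleHTTPRequestHandler):
    """File handler that registers the MIME types the DOM tests need,
    instead of relying on the platform's mimetypes table."""
    # Copy the class-level map so the base handler is left untouched.
    extensions_map = dict(SimpleHTTPRequestHandler.extensions_map)
    extensions_map.update({
        ".svg": "image/svg+xml",
        ".xhtml": "application/xhtml+xml",
    })
```

The equivalent Apache change would be an `AddType image/svg+xml .svg` directive, but with a captive server the mapping ships with the test harness itself.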

For some other examples ping me on irc.

/bc

Allan Beaufour

May 10, 2006, 4:14:32 AM
to Bob Clary, dev-q...@lists.mozilla.org
On 5/9/06, Bob Clary <b...@bclary.com> wrote:
> Dave Liebreich wrote:
>
> >>
> >> Currently each test machine hosts a copy of the tests and has a local
> >> virtual web server test.mozilla.com at 127.0.0.1
> >
> > This means that a developer who wants to run these tests on his or her
> > own machine must first configure apache. That might be too much to ask.
> >
>
> Could be too much. Lets see what they say here ;-)

I would have no problem with it, but I'm not the right one to ask :)
But I think that it is too much. Do people in general have perl
installed? I'm more in favor of the simple perl server approach.

--
... Allan

Mike Shaver

May 12, 2006, 3:49:45 PM
to dev-q...@lists.mozilla.org
On 5/6/06, b...@bclary.com <b...@bclary.com> wrote:
> As some of you may already know, I have been working on automating test
> execution for Firefox for some time but have not made the project or
> work public before now.

Great, *great* to see this discussion and work happening.

I am going to be lazy and reply here rather than to specific messages,
because my intentions of doing the latter have led to me not replying
so far. I hope it's still helpful.

- I think requiring Apache is probably a bad idea. It's hard to get
it configured just the right way, especially if you already have an
Apache setup on your system (as with Linux and OS X boxes), and you
have to make sure the right things happen with system names for
redirects, and MIME types, and CGI paths, and error logging, and such.
It picks up a lot of stuff from its environment, and that will tend
to make the tests less reliable, IMO. A perl or python self-contained
server would work pretty well for almost everything, I think. (I
looked hard at the Apache route when I was building web administration
tools in a recently-previous life, and decided instead to use the
Python http-server module, because of these issues -- and I
was targeting high-end sysadmin types, not mere developers! :) )

- I think these are the questions that developers will ask, and we
should work hard to make the answers simple and pleasant:

1) How do I run all/some tests against my Firefox build?

A good answer would be something like "from your objdir, run
`./test-firefox.sh dom js-suite xslt cache:bug-122233-regress` and
look for "NEW FAIL" messages".

2) How do I run a specific test?

(I used the "cache:bug-122233-regress" thing above. A way to point to
a specific pathname would be very nice too.)

3) How do I add a test to the suite?

A few different types of tests that might need to be added, for which
we should have some pat answers:

- "I have an HTML page that the browser needs to load"
- "I have a script to run in xpcshell"
- "I have a shell script to run"

We should have standard and simple ways of indicating "PASSED",
"FAILED: details", "CAN'T RUN" (the latter for things like "running
Wibble tests, but build has it disabled") for each of those cases,
which are picked up by the "templates" or "subharnesses" or
whathaveyou.
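A subharness that picks up such status lines could be as simple as the following sketch (Python for illustration; the exact line format is an assumption based on the strings above, not a settled convention):

```python
import re

# Hypothetical status-line convention: each test emits exactly one
# line of the form PASSED, FAILED: details, or CAN'T RUN: reason.
STATUS_RE = re.compile(r"^(PASSED|FAILED|CAN'T RUN)(?::\s*(.*))?$")

def classify(output_lines):
    """Return (status, detail) from a test's output lines.

    A test that never reports a status is treated as a failure so
    that crashes and hangs don't silently pass.
    """
    for line in output_lines:
        m = STATUS_RE.match(line.strip())
        if m:
            return m.group(1), m.group(2) or ""
    return "FAILED", "no status line"
```

The same convention works whether the test is an HTML page (writing the line via dump or a results URL), an xpcshell script printing to stdout, or a shell script, which is what makes a single top-level harness feasible.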

A JS library (that works in content and chrome) to mechanize things
like "green or red?" or other such visual tests would help a lot too,
and I bet we have clever people following this thread who would love
to write that sort of thing.

Once we get to the point that we have good answers for those
questions, I think we will be able to add some additional automation,
centralized collection/republishing (I like the Atom idea!) and such.

I hope that's helpful in some way. I suspect people have already
figured a lot of this stuff out, but I thought I'd share my thoughts
nonetheless.

Mike
