Automated testing comment

John Boyer
May 22, 2009, 11:54:50 AM
to ubiquity-...@googlegroups.com, mark.b...@webbackplane.com, alex....@webbackplane.com

Hi Mark and Alex,

We spoke last week about a course of action for the automated test suite that, I realized yesterday, is probably much more work than is needed and perhaps also less valuable.  So apologies for the high-urgency marking on this email, but I was hoping to catch you before you put a lot of work into it.

The plan formulated on the call last week was to parameterize the automated test suite so that we would have one mode which ran all tests, giving us an automated way of producing our implementation report going forward, and a second mode which ran only the "green" tests, i.e. those already known to pass.

The intent of the green-only mode, to follow Mark's "broken window" analogy, is to make it easy for someone to see at a glance that a nightly build has regressed: any red at all means a regression occurred.
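
For reference, the green-only mode would amount to something like the following sketch (Python for illustration only; the test functions and the GREEN_TESTS set are placeholders, not our actual suite):

    # Hypothetical two-mode runner (illustration only; the test
    # functions and GREEN_TESTS set below are placeholders).
    def test_bind(): return True
    def test_submission(): return True
    def test_repeat(): return False   # a known-red test

    GREEN_TESTS = {"test_bind", "test_submission"}

    def run_suite(tests, mode):
        # "all" produces the full implementation report;
        # "green" runs only tests already known to pass, so any
        # failure at all signals a regression.
        if mode == "green":
            tests = [t for t in tests if t.__name__ in GREEN_TESTS]
        return [t.__name__ for t in tests if not t()]

    print(run_suite([test_bind, test_submission, test_repeat], "green"))  # -> []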

However, reorganizing things to allow this green-only mode seems like a lot of work.  By comparison, it would be far less work to simply save the prior day's results at the start of the test cycle, obtain new results, and then compare the two.  Aside from being less work, this approach would automatically discover when the fix for one test happens to fix many other tests as well.  If we only run the tests we know are green, we have to find out manually which red tests turned green as a result of fixing something else.
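
Concretely, the comparison step could be as simple as the following sketch (again Python for illustration; the file names and the "NAME PASS/FAIL" result format are placeholder assumptions, not what our harness actually emits):

    import sys

    def load_results(path):
        # Parse lines of the form "<test name> PASS" or "<test name> FAIL".
        results = {}
        with open(path) as f:
            for line in f:
                name, _, status = line.strip().rpartition(" ")
                if name:
                    results[name] = status
        return results

    yesterday = load_results("results-yesterday.txt")
    today = load_results("results-today.txt")

    # A regression is any test that went green -> red overnight.
    regressions = [t for t, s in today.items()
                   if s == "FAIL" and yesterday.get(t) == "PASS"]

    # Tests that turned green "for free" when something else was
    # fixed, the extra benefit of running the full suite nightly.
    new_passes = [t for t, s in today.items()
                  if s == "PASS" and yesterday.get(t) == "FAIL"]

    for t in regressions:
        print("REGRESSION:", t)
    for t in new_passes:
        print("NEWLY GREEN:", t)

    sys.exit(1 if regressions else 0)

The nonzero exit status gives the nightly build the same any-red-means-regression property as the green-only mode, without reorganizing the suite.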

Thanks,
John M. Boyer, Ph.D.
STSM, Interactive Documents and Web 2.0 Applications
Chair, W3C Forms Working Group
Workplace, Portal and Collaboration Software
IBM Victoria Software Lab
E-Mail: boy...@ca.ibm.com  

Blog:
http://www.ibm.com/developerworks/blogs/page/JohnBoyer
Blog RSS feed:
http://www.ibm.com/developerworks/blogs/rss/JohnBoyer?flavor=rssdw
