On the call we had: Udai, Michael V., Keith W., Jakub, Deepak P., and
myself.
We had a very good discussion! Thanks to all who took time out of their
evening to join. We focused on these problem statements:
1. Acceptance tests are slow to develop and maintain.
2. Data sets used for acceptance tests are expensive to maintain and
take up a significant portion of execution time.
Observations and findings from the group:
- Work towards a testing pyramid: fewer UI-based tests, more
lower-level and API-level tests.
- When an acceptance test fails, it's difficult to determine why the
test has failed.
- Page objects are good. The implementation of page objects could be
better.
- There are good reasons to move to Selenium 2.0/WebDriver:
-- an API that's easier to use,
-- support packages,
-- no required startup of a Selenium server, since WebDriver doesn't
need one,
-- support for more browsers.
- Michael V. has made some good progress already with Webdriver [1]. He
experimented with the Selenium Emulation for Webdriver [2], [3].
- Data sets have two uses right now: seed data and comparison of
actual/expected test results.
- Deepak said he's generating unique pre-test data via the UI instead of
creating new seed data sets.
- Many of the seed data sets are quite close in content, with only
very small differences. We need to consolidate the data sets used for
seed data.
- Move away from using data sets for verification of acceptance tests.
Instead, use the UI to validate test results.
- When reviewing Acceptance Criteria for a story, determine whether to
cover requirements at the integration or the acceptance test level. Go
towards lower-level tests where possible.
- Developers can run tests faster using a RAMDisk [4].
[1] - http://tinyurl.com/23azb8l
[2] - http://code.google.com/p/selenium/wiki/SeleniumEmulation
[3] - http://github.com/vorburger/mifos-head/commit/5805f84b0f9169e261e3730862346633b591a384#diff-3
[4] - http://mifosforge.jira.com/wiki/display/MIFOS/RAMDisk
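To make the "page objects are good, but the implementation could be better" observation above concrete, here is a minimal sketch of the pattern. It is illustrative only: the Mifos acceptance tests are Java/Selenium, while this sketch is Python with a stubbed driver, and the LoginPage class, its locator ids, and FakeDriver are all hypothetical names. The point it shows is that only the page object knows the locators, so a UI change is fixed in one class rather than in every test.

```python
# Minimal page-object sketch (hypothetical names; the real Mifos tests
# are Java/Selenium). The page object owns the locators; tests only call
# its methods, so a UI change is repaired in one place.

class FakeDriver:
    """Stand-in for a WebDriver: records text typed into elements by id."""
    def __init__(self):
        self.fields = {}

    def type_into(self, element_id, text):
        self.fields[element_id] = text


class LoginPage:
    # Hypothetical locators; in a real page object these would be the
    # only place element ids appear.
    USERNAME_ID = "login.input.username"
    PASSWORD_ID = "login.input.password"

    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        self.driver.type_into(self.USERNAME_ID, username)
        self.driver.type_into(self.PASSWORD_ID, password)
        return self  # real code would return the next page object


driver = FakeDriver()
LoginPage(driver).login_as("mifos", "testmifos")
print(driver.fields[LoginPage.USERNAME_ID])  # -> mifos
```

A test written against LoginPage never mentions an element id, which is what makes a rename in the UI a one-line fix.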
Next steps:
Short term:
- Michael V. will try to run all acceptance tests with WebDriver. If it
works, we will move towards WebDriver on head master. Question:
Selenium 2.0 is still in beta. Do we wait for Selenium 2.0 to ship?
- Deepak and Vivek will look at reducing the number of data sets
required for acceptance testing. They will gather a list of tests that
use data sets for result validation and start changing those tests to
validate via the UI.
- Kojo is working on test coverage reporting for the integration and
acceptance testing levels. He is also working on providing more
information when a test fails.
Longer term:
- Continue to add and expose APIs to allow functional testing and data
setup to be done via APIs.
> I think we should look at something like JBehave only for such tests.
> This provides a framework in which developer/BA/tester/customers can
> collaborate.
Vivek, JBehave looks interesting. I looked at Twist over a year ago. Would you say they are similar? Can you point us to any projects that are using JBehave today to get an idea on how it is practically used?
> After we move to selenium-webdriver mode, we should configure our
> tests so that some build continues to run them with a browser to
> verify correctness.
Agreed.
> [Context: Deepak said he's generating unique pre-test data via the UI
> instead of creating new seed data sets.]
I didn't make this clear in my notes, but we agreed this wasn't an optimal approach, but a tactic used to avoid creating extra data sets. I agree it increases complexity of each test and can make it slower.
> I feel we should do this using an API, which is the piece we want to
> change, in order to run it faster and be decoupled. In our case this
> means we should use SQL.
I like us using an API when it exists. You mention using SQL - my concerns with SQL:
1. a test writer may miss some business logic (enforced by the Mifos UI) that isn't validated at the DB level
2. the SQL could break tests when ongoing DB schema changes are made
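Concern 1 can be illustrated with a small sketch. It uses Python's stdlib sqlite3 purely for illustration (Mifos itself is a Java application on a real RDBMS), and the loan_product table and the min/max rule are made up: a business rule enforced in the application layer is silently bypassed by a raw SQL insert, so SQL-seeded test data can be data the application would never have accepted.

```python
import sqlite3

# Hypothetical table and rule, for illustration only: the application
# layer enforces min_amount <= max_amount, but raw SQL does not.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE loan_product (name TEXT, min_amount INT, max_amount INT)"
)

def create_via_app(name, min_amount, max_amount):
    # Stand-in for the validation the application UI would enforce.
    if min_amount > max_amount:
        raise ValueError("min_amount must not exceed max_amount")
    conn.execute(
        "INSERT INTO loan_product VALUES (?, ?, ?)",
        (name, min_amount, max_amount),
    )

# The app-level path rejects invalid data...
try:
    create_via_app("badProduct", 500, 100)
except ValueError as e:
    print("rejected:", e)

# ...but seeding the same row with raw SQL succeeds: the test now runs
# against data the application itself would never have allowed.
conn.execute("INSERT INTO loan_product VALUES ('badProduct', 500, 100)")
count = conn.execute("SELECT COUNT(*) FROM loan_product").fetchone()[0]
print(count)  # -> 1
```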
Regards,
Jeff
Vivek,
I understand what you are saying and agree that using the application to
set up data is not ideal, but I am very concerned about trying to write
proper SQL to create the entities needed for an acceptance test. I
think this raises the barrier to writing new tests and allows potential
for faulty or incomplete data to be inserted.
As a test writer, what I would like is a markup-style file to define
the entities that my test requires: for example, a YAML
(http://en.wikipedia.org/wiki/YAML) file that defines the data I need
to create for my test - e.g.
Loanproduct(flatweekly):
  Name: loanProdAbc
  ShortName: labc
  Category: foo
  Start Date: 2008-01-01
  Applicable For: Client
  MinAmount: 100
  MaxAmount: 100000
  DefaultAmount: 5000
  RateType: Flat
  MinRate: 10
  MaxRate: 50
  DefaultRate: 18.9
  MinInstallments: 1
  MaxInstallments: 30
  DefaultInstallments: 12
  InterestGL: 12313
  PrincipalGL: 3212
So if we could have that, then we could debate about what the fixture
behind this object would do to inject the data. Thoughts?
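To make the fixture idea above concrete, here is a rough sketch of what loading such a file could look like. It is Python with a hand-rolled parser for just this simple "entity header plus indented key: value" shape, to stay dependency-free; a real implementation (presumably on the Java side) would use a proper YAML library such as SnakeYAML, and the trimmed FIXTURE text below just echoes the example above.

```python
# Rough sketch of loading a fixture like the Loanproduct example above.
# A hand-rolled parser is used here to stay dependency-free; a real
# implementation would use a proper YAML library (e.g. SnakeYAML in Java).

FIXTURE = """\
Loanproduct(flatweekly):
  Name: loanProdAbc
  ShortName: labc
  MinAmount: 100
  MaxAmount: 100000
  DefaultAmount: 5000
"""

def parse_fixture(text):
    """Parse 'Entity:' headers with indented 'key: value' fields."""
    entities = {}
    current = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if not line.startswith(" "):        # top level: entity header
            current = line.rstrip(":")
            entities[current] = {}
        else:                               # indented: field of the entity
            key, _, value = line.strip().partition(":")
            entities[current][key.strip()] = value.strip()
    return entities

data = parse_fixture(FIXTURE)
print(data["Loanproduct(flatweekly)"]["Name"])       # -> loanProdAbc
print(data["Loanproduct(flatweekly)"]["MinAmount"])  # -> 100
```

The fixture behind each entity type could then map these fields onto whatever injection mechanism we settle on (API, SQL, or UI) without the test author needing to know which one is used.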