Yesterday during tech hours we talked a lot about automated testing, and we were asked to share any additional thoughts, so here goes. :)
First, I think we should agree on what our principles are. Here's what I'm thinking:
- Writing tests takes time now but saves you time in the long run.
- Testing in depth is good, just like security in depth.
- Code coverage is more than just a number. You can see whether a feature (a Command, for example) is green or red, in the sense that the feature either is or isn't being exercised by the tests.
- We should use open source tools for our testing instead of proprietary ones and shouldn't feel locked in to any particular vendor.
- All business logic should be exercisable via API. Otherwise, you have to write browser-based tests, which are more difficult to write and more brittle.
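To illustrate that last point, here's a minimal sketch of what an API-level test looks like. It runs a stub HTTP server standing in for the real application; the `/api/info` endpoint and its JSON payload are invented for illustration, not an actual Dataverse API:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Stub server standing in for the application under test.
    The /api/info endpoint and its payload are hypothetical."""

    def do_GET(self):
        if self.path == "/api/info":
            body = json.dumps({"status": "OK", "version": "1.0"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep test output quiet.
        pass

def api_test():
    # Port 0 lets the OS pick a free port.
    server = HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/api/info"
        with urllib.request.urlopen(url) as resp:
            # Assertions run against the API response -- no browser involved.
            assert resp.status == 200
            payload = json.load(resp)
        assert payload["status"] == "OK"
        return payload
    finally:
        server.shutdown()

print(api_test())
```

The whole test is plain HTTP requests and assertions on JSON, which is why API tests tend to be faster and less brittle than driving a browser through the UI to reach the same logic.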
Here's what I would suggest as a prioritized list of goals:
- Get https://jenkins.dataverse.org to do the same job that the older Jenkins server and phoenix do, which is to catch regressions before a release.
- Get https://jenkins.dataverse.org to measure code coverage of integration tests using the method documented as "Measuring Coverage of Integration Tests" on the testing page of the dev guide.
- Once it's clear from code coverage reports which features are not being exercised by API tests, prioritize critical features for testing and add APIs to make them testable if necessary.
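As a sketch of that last step, assuming a simplified per-feature coverage summary (real reports from coverage tools are XML/HTML; the Command names and numbers below are invented), the green/red triage could look like:

```python
# Hypothetical per-feature line-coverage fractions, e.g. distilled from a
# coverage report. Names and numbers are invented for illustration.
coverage = {
    "CreateDatasetCommand": 0.82,
    "DeleteDatasetCommand": 0.0,
    "PublishDatasetCommand": 0.45,
}

def triage(coverage, threshold=0.0):
    """Split features into 'green' (exercised by the tests at all)
    and 'red' (not exercised), per the principle above."""
    green = sorted(f for f, cov in coverage.items() if cov > threshold)
    red = sorted(f for f, cov in coverage.items() if cov <= threshold)
    return green, red

green, red = triage(coverage)
print("green:", green)  # features already exercised by API tests
print("red:", red)      # candidates to prioritize for new tests/APIs
```

The red list is the work queue: for each red feature, either add an API test, or first add an API that makes the feature testable.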
I think this is enough to keep us busy for a while.
Phil