thoughts on automated testing


Philip Durbin

May 22, 2019, 4:24:57 PM
Yesterday we talked a lot about automated testing during tech hours, and we were asked to share any additional thoughts we have, so here goes. :)

First, I think we should decide what our principles are. Here's what I have in mind.

- Writing tests takes time now but saves you time in the long run.
- Testing in depth is good, just like security in depth.
- Code coverage is more than just a number. You can see whether a feature (a Command, for example) is green or red, in the sense that the feature either is or isn't being exercised by the tests.
- We should use open source tools for our testing instead of proprietary ones and shouldn't feel locked in to any particular vendor.
- It should be possible to exercise all business logic via API. Otherwise, you have to write browser-based tests, which are harder to write and more brittle.
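To make that last point concrete, here's a minimal sketch of why API-reachable business logic is easy to test: the test is just a call plus an assertion on a structured response, with no browser automation involved. The endpoint shape and payload here are made up for illustration, not Dataverse's actual API.

```python
# Hypothetical stand-in for an HTTP endpoint such as POST /api/datasets
# (invented for illustration). Business logic returns a structured,
# API-style response that a test can assert on directly.

def create_dataset(payload):
    """Validate and 'persist' a dataset; return an API-style response."""
    if not payload.get("title"):
        return {"status": "ERROR", "message": "title is required"}
    return {"status": "OK", "data": {"id": 1, "title": payload["title"]}}

# The API-level test: one call, one assertion. Compare this to driving
# a browser through a form just to reach the same logic.
ok = create_dataset({"title": "My Dataset"})
assert ok["status"] == "OK"

err = create_dataset({})
assert err["status"] == "ERROR"
```

The same pattern works over real HTTP with any client: the test stays short because the contract is data in, data out.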

Here's what I would suggest in terms of a prioritized list of goals or steps:

- Get our automated testing to do the same job that the older Jenkins server and phoenix do, which is to catch regressions before a release.
- Get it to do a new job: catching regressions before pull requests are merged.
- Get to where we can measure code coverage of integration tests using the method documented as "Measuring Coverage of Integration Tests" on the testing page of the dev guide.
- Once it's clear from code coverage reports which features are not being exercised by API tests, prioritize critical features for testing and add APIs to make them testable if necessary.

I think this is enough to keep us busy for a while.

Feedback on all this is welcome, of course. There's probably stuff I'm forgetting, and some of it may already be written down elsewhere, but above I'm trying to give a snapshot of what I'm thinking these days.
