Review of regression testing and improvements for tests

pmir...@soldevelo.com

Sep 25, 2018, 9:31:59 AM
to OpenLMIS Dev
Hello everyone,

In order to move forward with the release-faster epic (OLMIS-4565), Klaudia, Sebastian, and I have created a document reviewing our regression testing process and proposing improvements.
You can find this document here: Regression Testing Guide And Improvements

Let us know if you have any suggestions for how to improve this approach.

Thank you,
Paulina Mironiuk.

Josh Zamor

Sep 27, 2018, 8:24:07 PM
to OpenLMIS Dev
Thank you Paulina et al, this is looking good. I added feedback - quite a bit of it, enough that you might wonder why I gave so much. I put it in a comment in the document; however, I'd like to cross-post it here for further discussion.

What I asked is: the book Accelerate makes a strong case that automated testing, even at the cost of maintaining it, is of greater value if it leads to reduced lead times and more frequent deployments - i.e. when manual testing isn't playing the gatekeeper function. I know I've recommended keeping automated tests out of edge cases before, to avoid brittle tests becoming a burden; however, the case made by Accelerate has me reconsidering this. While we may not completely switch gears right now, I think we should consider giving it a try, and that means challenging some of these assumptions - even if they're hard-learned lessons from the past.

Overall, what I'm wondering is whether we should really rethink how we utilize manual and automated testing. In retrospect, this regression testing process has undeniably added considerable overhead, and it's challenging our ability to deliver value to stakeholders who don't want to wait months. We're making improvements in this process to reduce the burden it puts on us; however, Accelerate makes a strong case that any considerable manual testing acting as a gatekeeper actually hurts quality.

Obviously we can't just yank the regression tests and expect good things. However, we can rethink what's holding back our quality and frequency of delivery. A few brainstorm-type topics:

- Automated tests, of course.
- Integrating the QA mindset in with the development mindset (e.g. what if we didn't have a QA column on our sprint board)
- Ensuring that when a release breaks things in production, we know about it first and we know how to quickly release something that makes the world right again (a minimal smoke-check sketch follows this list).
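
To make that last bullet a bit more concrete, here is a minimal sketch of a post-deployment smoke check. The health URL is an assumption, not a real OpenLMIS endpoint, and a real setup would wire this into the CI/CD pipeline's alerting rather than a main method:

import java.net.HttpURLConnection;
import java.net.URL;

// Minimal post-deployment smoke check: fail fast if the service is unhealthy.
public class SmokeCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical health endpoint; substitute the real deployment's URL.
        URL healthUrl = new URL("https://example-openlmis-host/health");
        HttpURLConnection conn = (HttpURLConnection) healthUrl.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        int status = conn.getResponseCode();
        if (status != 200) {
            // A non-200 response is the signal to alert and prepare a fix or rollback.
            System.err.println("Health check failed: HTTP " + status);
            System.exit(1);
        }
        System.out.println("Health check passed: HTTP 200");
    }
}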

This is a conversation in parallel to the document posted - i.e. we don't need to rewrite the document, though some of the assertions it makes about the value of manual and automated testing should be made more specific, without limiting the brainstorming above.

Thoughts?

Best,
Josh

pmir...@soldevelo.com

Sep 28, 2018, 9:53:16 AM
to OpenLMIS Dev
Hi Josh,

Thank you for your feedback. We will continue making the issues you mentioned in the document more specific.

Answering some of your questions in the document:

We should modify the parts of test cases that check exact text on the UI, making them more general so that no one fails a manual test case just because the wording is not identical. I can create a ticket to ensure these tests are modified.
However, we should still check whether the information on the UI makes sense, regardless of whether it exactly matches the text presented in a test case. Similarly, we should check whether a report was generated with the right content. That's why we thought these parts of the system should be checked manually, as they require some intuition.
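
As one illustration of what "more general" could mean in an automated check, here is a sketch using Selenium WebDriver and JUnit. The URL, CSS selector, and message wording are all assumptions made for the example, not the actual OpenLMIS UI:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SubmitNoticeTest {
    @Test
    public void successNoticeShouldMentionSubmission() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Hypothetical page; the steps that actually submit a requisition are omitted.
            driver.get("https://example-openlmis-host/#!/requisitions");

            String notice = driver.findElement(By.cssSelector(".alert-success")).getText();

            // Brittle: assertEquals("Requisition has been submitted successfully!", notice);
            // More general: verify only the key term, so copy changes don't fail the test.
            assertTrue(notice.toLowerCase().contains("submitted"));
        } finally {
            driver.quit();
        }
    }
}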

Currently, a new manual test is created only when new functionality is added to the system and no existing test cases cover it yet. When an existing test case partly covers a functionality, we update that test case rather than creating a new one. Moreover, new manual test cases are no longer created after a ticket has landed in the QA column.

Finally, referring to automated tests for edge cases - we are open to giving it a try. For example, in the CCE service there is one edge case that we thought could be replaced with an automated test. This test case is about adding a solar device with a battery, and there is a ticket for writing this test: OLMIS-5422.
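
As a rough sketch of what that test could look like with REST Assured, under the assumption that the CCE service exposes a catalog item endpoint roughly like the one below (the URL, field names, and values are illustrative, not the confirmed service contract):

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class SolarCatalogItemTest {
    @Test
    public void shouldCreateSolarDeviceWithBattery() {
        // Hypothetical payload; check field names against the actual CCE contract.
        String catalogItem = "{"
            + "\"type\": \"Refrigerator\","
            + "\"model\": \"Solar-Example-1\","
            + "\"energySource\": \"SOLAR\","
            + "\"hasBattery\": true"
            + "}";

        given()
            .contentType("application/json")
            .body(catalogItem)
        .when()
            .post("https://example-openlmis-host/api/catalogItems")
        .then()
            .statusCode(201)
            .body("energySource", equalTo("SOLAR"));
    }
}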

Regards, 
Paulina.
