
Risk Based Testing


she...@osel.netkonect.co.uk

Oct 17, 2005, 1:27:24 AM

I've just come across this term and am wondering what it implies.

I've bought into the truisms that:

'you can't test quality in' and 'testing only shows the presence of
defects...', and also understand how slippery test metrics are.

So what do you do with risk based testing? Is there a presumption that
by testing 'risky' aspects of the s/w, quality can be tested in, or is
it simply to confirm that it's likely to give problems when it goes live?

gre...@my-deja.com

Oct 17, 2005, 9:50:33 AM
In a nutshell, risk based testing focuses on features and areas of
functionality that are either prone to error or highly visible to the
end user. That is, testing is an effort in risk mitigation.

With virtually any modern application or device, you can't test every
path through the code or every facet of the hardware design. Further,
often an already tight test schedule becomes even more so due to
problems in development or delays due to outside issues. Testing
focused on high risk areas is generally the best way to deal with
these realities.

So 'risk based testing' is generally a good approach due to the
complexity of the item under test, but can also be absolutely
necessary on some level once your test schedule becomes
'squeezed'.
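Greg's "focus testing where the risk is highest" idea can be sketched as a simple scoring scheme. This is a hypothetical illustration: the feature names and 1-5 scores are invented, and risk = likelihood x impact is one common convention, not something specified in this thread.

```python
# Hypothetical risk-based prioritization: score each feature by
# likelihood of failure times impact of failure, then test the
# highest-scoring features first.

features = [
    # (name, likelihood 1-5, impact 1-5) -- invented example values
    ("payment processing", 4, 5),
    ("report export",      2, 2),
    ("login",              3, 5),
    ("help pages",         1, 1),
]

def risk_score(feature):
    _name, likelihood, impact = feature
    return likelihood * impact

# Highest risk first: this becomes the test order when time is short.
for name, likelihood, impact in sorted(features, key=risk_score, reverse=True):
    print(f"{name}: risk {likelihood * impact}")
```

When the schedule gets 'squeezed', you cut from the bottom of this ordering rather than trimming every area uniformly.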

As time permits, I'll try to respond with a more detailed answer.

Greg G.
http://www.testengineering.info

xtremetester

Oct 17, 2005, 10:23:27 AM
Agree with Greg; I would only add that you can have technical risk and
business risk, and any function under test may be used with different
frequency during its life cycle. Any function/class/object under test
also carries different risk depending on its complexity.
Alex
http://www.geocities.com/xtremetesting/

gre...@my-deja.com

Oct 17, 2005, 12:01:13 PM
I thought I'd add to my previous response a few scenarios that
illustrate using a 'risk based' approach.

1) You are testing an accounting application that runs on a central
server and has two different GUIs, one for administrators and one for
regular users. These GUIs can run on any system on the organization's
network. Due to staffing issues, you can't test each GUI to the extent
you had planned. In this case, a strong argument could be made for
reducing testing on the administrative GUI in favor of the regular
GUI due to the prevalence of use of the latter. It could also be
argued that administrative users are less likely to make mistakes
that might reveal problems. In this scenario, the risk assessment
is based on the visibility of errors in a particular functional area,
as well as the impact those errors might have.

2) Your organization is developing a device for an automaker that
will be part of the information and entertainment system in their
autos. You know from past experience that another company's
device that will also be in the car places frequent and repeated
requests for GPS coordinates that your device provides. In the
past, these repeated requests in short succession have caused
the device to lock up. With this knowledge, you decide to spend
considerably more time performing load and stress testing of the
new release, with the focus being requests such as those for
GPS coordinates.

3) You are tasked with performing unit testing on a complex
application. Your organization has acquired the latest tools for
identifying paths through the code. Based on your analysis,
you determine that there are 12,000 distinct paths through the
code. You further estimate that it would take about a half hour
to develop an automated or manual test for each of these paths
through the code, which would mean that you would need 6,000
hours just to develop the tests. Executing the tests would take
more time yet. Since you are working alone on this task, and you
have nine weeks to complete this test effort, there is no way you
can do this much testing. You instead select a subset of these
paths based on various criteria.
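The arithmetic in scenario 3 is worth making explicit. Assuming a 40-hour week for the lone tester and half the schedule reserved for execution and reporting (both assumptions; the post states neither), the gap between the exhaustive effort and the available schedule is stark:

```python
# Scenario 3: why exhaustive path testing is infeasible here.
paths = 12_000
hours_per_test = 0.5                    # development time per path test

dev_hours_needed = paths * hours_per_test
print(dev_hours_needed)                 # 6000.0 hours just to write tests

weeks_available = 9
hours_per_week = 40                     # assumed: one tester, full time
hours_available = weeks_available * hours_per_week
print(hours_available)                  # 360 hours in total

# If half the schedule is reserved for actually executing and
# reporting (another assumption), the affordable subset is tiny:
affordable_paths = int((hours_available / 2) / hours_per_test)
print(affordable_paths)                 # 360 of the 12,000 paths
```

Even under generous assumptions, only a few percent of the paths can get a test, which is exactly why a risk-based selection criterion is needed.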

Frequently, there is more than one definition for a test term.
If anyone has a different interpretation of 'risk based testing',
I'd really like to hear it.

Regards,

Greg Gonnerman
http://www.testengineering.info

.

Oct 17, 2005, 12:23:49 PM

You cannot test everything. This has been proven time and time again. You
have to select what you are going to test. In some cases you can group a
number of tests into one test. Risk based testing is another way to narrow
down what you are going to test.

I believe the original truism still holds true. It is simply saying that
no matter what testing you do, the results of testing will only tell you
if the software is of poor quality.

Ideally, good software engineering practices will improve the quality of
the software. Testing will just confirm that your practices are not
failing.

--
Send e-mail to: darrell dot grainger at utoronto dot ca

gre...@my-deja.com

Oct 17, 2005, 3:22:19 PM

I did a few searches on the web and found a paper by Hans
Schaefer titled "Risk Based Testing: Strategies for Prioritizing
Tests Against Deadlines". It offers a more comprehensive
explanation than I've been able to provide in this forum.

It can be found at: http://home.c2i.net/schaefer/testing/risktest.doc

Regards,

Greg G.

she...@osel.netkonect.co.uk

Oct 20, 2005, 6:39:22 AM

Thanks for that - it helped me think around the issues. My take is that
RBT is an approach to help find the most important defects first; with
'important' and deciding what to do about them being dependent on
context.

(My background is development and my mindset tends to be that testing
is to eliminate defects (even if that is not a realizable objective).
Approaching testing from a tester's perspective seems a little more
pragmatic - doing the best that is possible within constraints.)

Shrinik

Oct 21, 2005, 4:06:31 AM
I have a question related to RBT....

Can we employ risk based testing in the early stages of the test life
cycle, say test case design? I think NO.

In my opinion, it is a wrong approach. There will be budget and time
constraints at the system testing or regression testing level, so one
would attempt "just good enough testing" to save time and money. Why
would one risk the whole project by following "risk based testing" at
the early stages of testing? I assume that the context is a lifecycle
testing model where a test team is involved in the project, if not very
early, then at least at the beginning of the coding or design.

Risk based testing is nothing but testing a subset of the features of
the application based on the risk profile of those features, and it is
typically adopted where you have short cycles of testing, service packs,
or regression testing. It is not a standard methodology to be applied in
the early stages of a full blown testing cycle.

I also wonder whether any client would say: here is an application to
be built, and you have 6 months for development and testing; you can
choose to test only certain high risk areas of the application, and the
rest you can either test lightly or not test at all.

Any views?

I would define RBT in a nutshell as a methodology to optimize test
execution when you (as a test manager or a PM) are faced with time
constraints, not as a general rule.

Shrini


dumitru.c...@gmail.com

Oct 21, 2005, 7:26:21 AM
Shrinik wrote:
> I have a question related to RBT....
>
> Can we employ risk based testing in the early stages of the test life
> cycle, say test case design? I think NO.

What exactly do you mean by "early stage of test life cycle", or what
type of TLC? It could be the first unit test, or you may already have
functionality to test.

Andy Tinkham

Oct 21, 2005, 11:21:40 AM
gre...@my-deja.com wrote:
> Frequently, there is more than one definition for a test term.
> If anyone has a different interpretation of 'risk based testing',
> I'd really like to hear it.

Risk-based testing is indeed one of those test terms that have two
definitions. In the course of my studies, I've had the opportunity to
talk about RBT with Cem Kaner and James Bach, among others. They use
risk-based testing in a different way. While they acknowledge that the
term can mean prioritizing areas of testing based on how risky that area
is perceived as being, I believe that they prefer the definition of the
term where risk-based testing describes how testers design new tests.

This form of risk-based testing involves the tester going through the
thought process of "What could fail in this piece of the application I'm
responsible for testing at the moment?" Once they have identified a
potential failure, they then think about how to test for that particular
failure, and then execute those tests. Thus, the thinking about risk is
much more focused and narrow in this version of RBT.

The attacks that Alan Jorgensen and James Whittaker defined (which
Whittaker published in How to Break Software and How to Break Software
Security) are examples of tests based on this type of risk-based testing.

This is often done as one form of exploratory testing (though it should
not necessarily be the only form). There's lots more information on RBT
(and particularly the definition I describe, though he does talk about
Greg's definition as well) in Cem's online black-box testing course
(freely available at
http://www.testingeducation.org/BBST/BBSTRisk-BasedTesting.html)

Andy

--
Andy Tinkham
Doctoral Student, FIT (Studying with Cem Kaner)

she...@osel.netkonect.co.uk

Oct 24, 2005, 7:03:50 AM

Just for interest - I've just been asked to help with the formal review
of *huge* numbers of test scripts, in next to no time of course, which
are the basis for RBT.

Of course it's not practicable, so I've found myself having to sample
scripts for review - risk based reviews on top of risk based testing!

I'm waiting for someone to come along and sample my reviews.

Shrinik

Oct 24, 2005, 9:58:11 AM
Great views, Andy - thanks. Here are my follow-up questions:

1. In all available forms and definitions of RBT, the main focus is on
RISK - a narrower vision (as you mentioned). Hence it serves as a tool
to prioritize a given set of test cases, and hence it is not a
mainstream technique like equivalence partitioning, decision tables,
state machines, or pairwise combinatorial techniques. Moreover, while
reviewing these test cases (created using mainstream techniques), an
informed reviewer would also make sure to check for adequate coverage
of risky areas. Agree?

2. The main theme I was debating was a test team choosing RBT as the
mainstream testing technique right from test design - designing the
cases solely on the basis of risk and executing them. How exhaustive
would such a test suite be? How viable would such a testing strategy
be?

3. Would I be right to tell them: "Use RBT only at the test execution
phase, so that you execute the important/risky areas on priority, but
if time permits you have the other non-risky areas tested out as
well"? It can happen that a non-risky area is full of bugs because no
one tests it, OR there could be errors in identifying the risky
areas/features properly...

Shrini

Shrinik

Oct 24, 2005, 10:01:48 AM
Early stage of the test life cycle - I meant "test design". This is a
phase that follows the requirement testing phase, wherein the test team
has ratified the requirements and is proceeding to design the test
cases. It comes before unit testing and before any functionality is
available to testers as a formal "build".

I assume that there is a separate test team that works from the
beginning with the development team.

Phlip

Oct 24, 2005, 10:14:31 AM
Shrinik wrote:

> Early stage of the test life cycle - I meant "test design". This is a
> phase that follows the requirement testing phase, wherein the test team
> has ratified the requirements and is proceeding to design the test
> cases. It comes before unit testing and before any functionality is
> available to testers as a formal "build".

Is that once per iteration or once per project?

--
Phlip
http://www.greencheese.org/ZeekLand <-- NOT a blog!!!


Shrinik

Oct 24, 2005, 10:49:22 AM
I assumed a non-Agile life cycle where the bulk of the requirements for
the release are thrown open to the project team, dev and test ratify
them from their respective angles, and some uneasy requirement freeze
happens. Then the bulk of the test cases are designed. I use the word
"bulk" in both cases to mean that both requirement analysis and test
case writing continue beyond the presumed boundaries.
In traditional quality processes (the likes of CMM), these are handled
as "change requests".

So I meant it is per project/Release.

If you take agile projects, you go in small iterations of 1-2 weeks:
line up a few user stories (the Agile equivalent of requirements), have
both dev and test work together to deliver, test, and deploy them in
the given time, demonstrate business value to the user, and then come
back and repeat the iteration with the next set of user stories. So the
product evolves in iterations.

Shrini

Andy Tinkham

Oct 25, 2005, 1:14:19 PM
Shrinik wrote:
> 1. In all available forms and definitions of RBT, the main focus is on
> RISK - a narrower vision (as you mentioned). Hence it serves as a tool
> to prioritize a given set of test cases, and hence it is not a
> mainstream technique like equivalence partitioning, decision tables,
> state machines, or pairwise combinatorial techniques. Moreover, while
> reviewing these test cases (created using mainstream techniques), an
> informed reviewer would also make sure to check for adequate coverage
> of risky areas. Agree?

Risk doesn't necessarily serve solely as a prioritizing technique. Under
the definition that Greg used, it certainly does -- run the tests that
are most likely to find high-impact bugs first. Under the definition I
used, risk isn't so much a prioritization strategy as it is a test
design tool -- given an area to test (which may have been chosen by
Greg's description of RBT), think of ways that things COULD go wrong in
that area and devise tests to show if these things HAVE gone wrong.

Thus, I disagree with your claim that my definition of RBT is different
from what you call "main stream techniques". First, from a picky
viewpoint, almost none of the things you listed are actually test design
techniques in and of themselves (the exception being pair-wise
combinatorial testing). They're ways of modeling the program that
testers have found useful as they design tests. Equivalence partitioning
models the system by reducing the number of input values. Decision
tables and state machines model the system's actions or state. RBT (in
either definition) models the system in terms of its risks. None of
those modelings generate test cases simply because the model exists --
testers generate test cases from the model.

This is picky, I admit. However, it is my belief that RBT IS a technique
just like equivalence partitioning. It's not necessarily a better
technique, but it's not necessarily a worse technique, either.

Ultimately, testing is all about risk. If some software is absolutely
zero risk, why test it at all? If there's no risk in the software, then
it will work perfectly the first time, be precisely what the customer
needs and wants, and it will ship exactly when it's supposed to ship.
Doesn't sound like any software project of which I've ever heard. There
are very few interpretations of the testing role that would be at all
needed in those circumstances. Even when a technique other than RBT is
used, there's still an underlying model of risk -- we do boundary value
testing because there's higher risk for the boundary values than for
other values in the equivalence class -- not only can all the
errors common to the entire class happen for the boundary values, but
there can also be off-by-one errors at the edges. RBT simply makes the
risks explicit, rather than leaving them implicit.
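Andy's boundary-value point can be made concrete with a hypothetical validator (invented for illustration, not from the thread): the tests cluster at the partition edges, which is exactly where off-by-one errors hide.

```python
# Hypothetical rule: ages 18 through 65 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 65

# Boundary-value tests: each edge of the valid partition plus its
# neighbors, since off-by-one mistakes (e.g. writing < for <=) only
# show up at exactly these values.
boundary_cases = {
    17: False,  # just below the lower bound
    18: True,   # the lower bound itself
    19: True,   # just above the lower bound
    64: True,   # just below the upper bound
    65: True,   # the upper bound itself
    66: False,  # just above the upper bound
}

for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected, f"age {age} misclassified"
print("all boundary cases pass")
```

Any value in the interior of the class (say, 40) exercises the same risks as 19 or 64 but adds no new ones, which is why the boundary values carry the extra test weight.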

As far as coverage goes, it might be something that a reviewer could
look for, but then again, it might not. It depends on the context in
which the reviewer is working. If I were reviewing a set of test cases,
I would at least consider it much of the time. If I were reviewing test
cases for compliance with a documentation standard, however, it would be
totally inappropriate for me to look for gaps in test coverage. Even
within the context of a content review of the test cases, I would
probably do more of a sampling-based risk coverage than an exhaustive
one. Rather than spend a large amount of time trying to think of every
risk I could think of and then determining whether there are tests to
cover that risk, I'd probably identify a few key risks as well as a
couple of smaller, more obscure ones and examine how well they're
covered. I would then possibly probe more deeply and thoroughly,
depending on how well covered my sample risks were.

> 2. The main theme I was debating was a test team choosing RBT as the
> mainstream testing technique right from test design - designing the
> cases solely on the basis of risk and executing them. How exhaustive
> would such a test suite be? How viable would such a testing strategy
> be?

Given my above comments on testing being ultimately all about risk, I
would say that a test suite could be quite exhaustive and viable,
provided that RBT was done well. The limit on the test suite from RBT is
the imagination and skill of the testers doing the testing. The more
risks they can imagine and the better the tests they come up with to
test for those risks, the more exhaustive and viable the test suite
will be.

Just as with other testing techniques, it's possible to do it well or
poorly, however. For example, in equivalence partitioning, for most
situations, there is no need to partition upper case letters from lower
case letters. It certainly could be done, and there are situations
where they are different classes, but in most cases, they're actually
part of the same class. Testing done with just the one class will
probably involve fewer test cases than a similar level of testing done
with two classes. Partitioning to that level is a misapplication of the
technique most of the time -- it's making the model too specific. In the
same fashion, taking risk identification to the level of "An army of
rampaging barbarians might invade the server room, raining destruction
on the equipment. Just as our server is about to complete its critical
transaction, they hack the network cable to our server. The battle axe
used to cut the cable scrapes along the metal case, causing the
electricity used to transmit the network packet to short circuit the
computer and crash the hard drive." is also too specific. Could the
above story happen? Well, yes, but it's a VERY remote possibility (at
least everywhere I've ever worked ;) Do we really need to come up with
tests to cover barbarian attack? Probably not for most systems.
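The upper/lower-case example can also be made concrete (a hypothetical lookup, invented for illustration): when the code normalizes case, both casings fall into one equivalence class, and one representative per class suffices.

```python
# Hypothetical case-insensitive username lookup.
usernames = {"alice", "bob"}

def exists(name):
    # Case is normalized away, so "Alice" and "alice" behave
    # identically: they belong to the same equivalence class.
    return name.lower() in usernames

# One representative per class is enough here: a known name (in any
# casing) and an unknown name. Partitioning upper case from lower case
# would add test cases without adding information.
assert exists("Alice")
assert not exists("carol")
print("equivalence-class checks pass")
```

The judgment call Andy describes is precisely whether case normalization actually holds for the system under test; where it does not (e.g. case-sensitive file systems), the two casings really are separate classes.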

> 3. Would I be right to tell them: "Use RBT only at the test execution
> phase, so that you execute the important/risky areas on priority, but
> if time permits you have the other non-risky areas tested out as
> well"? It can happen that a non-risky area is full of bugs because no
> one tests it, OR there could be errors in identifying the risky
> areas/features properly...

No, you wouldn't be right to tell them that. RBT (in the test the
high-risk areas first sense Greg gave) applies in any phase of testing
-- in design, perhaps you want to design the high-risk tests first. In
execution, you might run the high-risk tests. Even in defect
reporting/fixing, you might want to focus on the high-risk tests first.
As you state, however, there are problems. If an area is identified as a
non-risk and it's full of bugs, then the risks in that area were not
completely identified or not identified correctly.

RBT (in the using risks as a technique to devise test cases sense I
brought up) is a test design technique -- used during the test design
process. This could be a separate phase at the beginning in a more
scripted testing process or it could be a spur of the moment test design
technique in a more exploratory testing process. Either way, just as you
don't use equivalence partitioning directly in test execution, you don't
use RBT directly in execution either. Once the test is designed, there's
no more need to use the design techniques.
