Summary: With all of the advancements in defect tracking systems within
the past few years, companies are still using the same ambiguous,
canned fields known as Severity and Priority to categorize their
defects. Let's examine a better way to assign importance to a defect.
[Severity: functionality-based; Priority: customer impact]
Every software development company uses a defect tracking system. A
number of vendors have been providing the software development
community with useful and powerful defect tracking tools during the
past few years. But are these packages being used properly? It seems
that everyone uses the same data fields when defining the issue or
defect, with the same types of values and the same definitions of these
fields and values. But with all the changes and advancements in defect
tracking systems, perhaps we need to revise the fields available and
the way we categorize defect reports. Two defect tracking system fields
in particular, the "severity" and "priority" fields, seem
prevalent, but they allow ambiguity to slip into the process.
I have worked for several different companies and have had the
opportunity to work with different tracking systems. Different tools
provide varying levels of functionality in the software defect tracking
process. But most of these tools have the following fields in common:
Title, Description, Submitter, Owner, Subsystem, Component, Status,
Resolution, ID, Priority, and Severity.
Most of these fields serve a useful purpose. The Title provides a
brief description of the issue that can be used in quick ticket
management and review. The Description is obviously needed.
Without it, the other fields lose their meaning. The Submitter allows
for tracking the source of the issue, so that additional information
about the defect can be obtained by development if necessary. The Owner
field provides us with knowledge of who to go to for current status of
the issue. Subsystem and Component help to categorize the issue and
allow us to map it to a particular component of the system for use in
metrics analysis of number of defects per system module. Status and
Resolution are needed to allow us to determine what issues have been
resolved and how they have been resolved. And of course, an ID is
needed to easily order the issues and assign a unique parameter to
them.
But the last two fields, Priority and Severity, seem of questionable
usefulness. The tester or test manager usually fills out the Severity
field when an issue is first submitted into the defect tracking system.
Product management then usually fills out the priority field, following
a meeting to gather information about the issue. Some may argue that
these fields are the most important in the whole report, allowing a
degree of impact and urgency to be associated with the description. The
values for the priority and severity fields are usually High, Medium,
and Low (or something similar) with the following types of definitions:
Severity:
· High: A major issue where a large piece of functionality or major
system component is completely broken. There is no workaround and
testing cannot continue.
· Medium: A major issue where a large piece of functionality or major
system component is not working properly. There is a workaround,
however, and testing can continue.
· Low: A minor issue that imposes some loss of functionality, but for
which there is an acceptable and easily reproducible workaround.
Testing can proceed without interruption.
Priority:
· High: This has a major impact on the customer. This must be fixed
immediately.
· Medium: This has a major impact on the customer. The problem should
be fixed before release of the current version in development, or a
patch must be issued if possible.
· Low: This has a minor impact on the customer. The flaw should be
fixed if there is time, but it can be deferred until the next release.
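These canned values can be written down as a small data model. The following is a minimal Python sketch; the enum value strings merely paraphrase the definitions above, and the example defect is hypothetical:

```python
from enum import Enum

# Hypothetical encoding of the canned scales above; the value strings
# paraphrase the definitions given in the text.
class Severity(Enum):
    HIGH = "completely broken, no workaround, testing cannot continue"
    MEDIUM = "not working properly, workaround exists, testing continues"
    LOW = "minor loss of functionality, easy workaround"

class Priority(Enum):
    HIGH = "major customer impact, fix immediately"
    MEDIUM = "major customer impact, fix before release or patch"
    LOW = "minor customer impact, may defer to next release"

# The two fields are set by different people at different times:
defect = {
    "title": "Spelling error on a user-interface screen",
    "severity": Severity.LOW,   # tester, at submission
    "priority": Priority.HIGH,  # product management, after a meeting
}
```

Note that nothing in the data model itself reconciles the two fields; that is exactly the ambiguity the examples below illustrate.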
So the priority and severity fields tell us how severe an issue is to
the customer, how severe it is to the testing schedule, and how urgent
it is that this issue be resolved. This is indeed vital information to
have when identifying issues with software. But while the severity and
priority fields serve the purpose of communicating this information,
they do not do it in the most effective and unambiguous way possible.
Two Examples
Think of the following type of problem: a spelling error on a
user-interface screen. What severity does this issue deserve? Well,
judging from our canned definitions, it would seem that this is a
low-severity item. After all, the server doesn't crash due to a
spelling error. But is this truly a low-severity problem? A spelling
error will probably not hinder a customer's ability to use the
system, but it greatly affects the customer's perception of the
company that created the product and of the quality of the product. So
from customer-relations and corporate-image points of view, the
severity of this type of issue is indeed high. But the severity field
doesn't allow us to express that properly. So the need for the
priority field becomes apparent. The priority field does allow product
management to define this issue as high priority, but this creates the
case where something is low severity but high priority. To me, this is
an ambiguous duality. How exactly does this issue stack up against the
others in the system? When should a developer look at this issue?
Let's consider another case: the anomalous server crash. We've all
seen this type of issue. A server crash that occurs on the first full
moon of every leap year but that is not reproducible by any human means
on a consistent basis. So how would this issue be categorized within
the defect tracking system? Well, since it is a server crash, many
would argue it should be a high-severity issue. After all, the system
is inoperable until the server is restarted. But what is the impact to
the customer? In this case, the impact is quite small. Since the
customer may never see this issue present itself at all in a production
environment, it would be given a low priority by product management.
Here then is another case where an issue has an ambiguous duality: a
high-severity issue that is not a high-priority issue.
A Modest Proposal
I recommend eliminating the Severity and Priority fields and replacing
them with a single field that can encapsulate both types of
information: call it the Customer Impact field. Every piece of software
developed for sale by any company will have some sort of customer.
Issues found when testing the software should be categorized based on
the impact to the customer or the customer's view of the producer of
the software. In fact, the testing team is a customer of the software
as well. Having a Customer Impact field allows the testing team to
combine documentation of outside-customer impact and testing-team
impact. There would no longer be the need for Severity and Priority
fields at all. The perceived impact and urgency given by both of those
fields would be encapsulated in the Customer Impact field.
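A minimal sketch of the proposal, assuming a hypothetical three-level scale for the single Customer Impact field:

```python
from enum import IntEnum

# Hypothetical three-level scale for the proposed single field.
class CustomerImpact(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

defects = [
    {"title": "Spelling error on UI screen", "impact": CustomerImpact.HIGH},
    {"title": "Server crash on first full moon of leap year",
     "impact": CustomerImpact.LOW},
]

# Scheduling becomes a single-key sort: no severity/priority duality
# left to reconcile in a meeting.
queue = sorted(defects, key=lambda d: d["impact"], reverse=True)
```

With one field there is one ordering, which is the point of the proposal: the development staff can schedule straight from the queue.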
Let's consider our previous examples. In the first example, the
spelling error on the user interface screen was rated as low severity
by the testing team, but as high priority by product management. This
ambiguous duality disappears when the Customer Impact field is used in
place of Severity and Priority. This particular issue would have a high
impact on the customer's view of the company that produced the
software and the quality systems in place at that company. Therefore,
this issue would have a "High" Customer Impact. Is there any other
information needed in order to categorize this issue? No. This is a
high-impact issue to the customer and customer relations, and would
therefore be properly scheduled by the development staff for fix before
official release. In this case, the Customer Impact field has allowed
the same information to be given, while replacing two fields with one,
removing ambiguity from the issue, and reducing the need for a meeting
to determine its priority.
Now let's look at our second example. The anomalous server crash
under the severity/priority method would again have had a duality: high
severity and low priority. This issue would have had a high severity
because it was a server crash and caused data loss to the user,
requiring the user to reboot the system. But since the user would
almost never have noticed it, it had a low priority. Again we can
eliminate two fields and a meeting by simply using the Customer Impact
field. How does this issue, a server crash on the first full moon of
every leap year, impact the customer? It won't affect the customer
very much. In some cases, the customer might not even have this release
of the software installed long enough on their system to even notice
it. So in this case, the issue would have a low customer impact. Two
fields replaced by one, and a meeting eliminated, while still providing
a categorization of the issue so that it can be scheduled properly by
the development team for resolution.
Conclusion
I hope this paper stimulates further review of the way defects and
issues are tracked and managed during the software development
lifecycle. The canned fields and definitions that seem to have
proliferated in many companies should be periodically evaluated to see
if there are more meaningful ways to categorize the issues properly in
order to resolve them for customers. With all of the recent advances in
workflow definition and reporting capabilities in defect tracking
systems, this may be an opportune time for such a reevaluation.
Hopefully this paper provides a good place to start in getting the
most out of your defect tracking system and easing the pain of
dealing with ambiguously categorized and prioritized issues.
Differentiate Priority and Severity
The effect of a bug on the software does not automatically correlate
with the priority for fixing it. A severe bug that crashes the software
only once in a blue moon for 1% of the users is lower priority than a
mishandled error condition resulting in the need to re-enter a portion
of the input for every user every time.
Therefore:
Track priority and severity separately, then triage appropriately. It
helps to have input from others on the team on priority. The importance
of a bug is a project decision, different from how the Customer
perceives the bug. In some cases it makes sense to track Urgency, the
customer's point of view, separately.
Question: What makes a bug severe? This page gives no clue. Difficulty
of fixing? But that can seldom be known in advance.
Microsoft uses a four-point scale to describe severity of bugs.
Severity 1 is a crash or anything that loses persistent data, i.e.,
messing up your files on disk. Sev 2 is a feature that doesn't work.
Sev 3 is an aspect of a feature that doesn't work. Sev 4 is for purely
cosmetic problems, misspellings in dialogs, redraw issues, etc. This
system works very well. (Interestingly, sev 4 bugs end up getting set
to priority 1 fairly often, because they are frequently VERY annoying
to users, and fixing them is generally easy and doesn't destabilize
things.) -- MichaelGates
The exact definition of severity is project-specific. Here is one that
is reasonable for many projects, however:
· Enhancement: New features
· Low: Improvement to existing code, e.g. performance enhancement, or
problems with an easy workaround
· Normal: Broken or missing functionality
· High: Problems causing crashes, loss of data, severe performance
problems or excessive resource use.
· Blocker: Problems that prevent testing or development work
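One way to make a project-specific definition like this concrete is a small classification helper. This is an illustrative Python sketch only; the symptom flags and their ordering are assumptions, not part of the definition above:

```python
from enum import IntEnum

class Severity(IntEnum):
    ENHANCEMENT = 0
    LOW = 1
    NORMAL = 2
    HIGH = 3
    BLOCKER = 4

def classify(blocks_work=False, crash=False, data_loss=False,
             broken_feature=False, easy_workaround=False,
             new_feature=False):
    # Most severe condition wins; the flags and their ordering are
    # assumptions for illustration.
    if blocks_work:                             # prevents testing/development
        return Severity.BLOCKER
    if crash or data_loss:                      # crashes, loss of data
        return Severity.HIGH
    if broken_feature and not easy_workaround:  # broken/missing functionality
        return Severity.NORMAL
    if new_feature:                             # request for a new feature
        return Severity.ENHANCEMENT
    return Severity.LOW                         # easy workaround exists
```

Using IntEnum makes severities directly comparable, which is convenient when severity later feeds into a priority decision.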
We differentiate them so that the dispatcher or whoever reviews bug
reports can set priority based not only on how severe the problem is,
but the customer's importance, business needs, etc. Many bugs cause
crashes (High severity in the example above), but aren't fixed because
the crash is very infrequent or on a version/platform/feature low on
the vendor's support list.
There's a pretty good example in "Lessons Learned in Software Testing",
a book by CemKaner, et al. Lesson 73 there is "Keep clear the
difference between severity and priority". The example given says "a
start-up splash screen with your company logo backwards and the name
misspelled is purely a cosmetic problem. However, most companies would
treat it as a high-priority bug."
-- StevenNewton
For a while at my last company, you had to specify both priority and
severity when you reported a bug. Everyone was confused about what
those terms meant.
It's much better to merge them into a single value. The bug filer can
provide additional information, such as when a fix is needed and
whether a workaround exists. The responsible team uses all of those
factors to prioritize bug fixing, enhancements, and other work.
I had the same experience at my last company. I think that
differentiating priority and severity is a great idea in theory, but in
practice humans have too much difficulty with the concept to make it
worthwhile. It gets mixed up too often. Although perhaps a change in
terminology would help; how about "Importance" and "Destructiveness"
instead of "Priority" and "Severity"?
I had the same problems in my company. I managed the problem by making
sure every "bug" was authorised, with priority and severity set to the
best-fitting values. Beyond that, I checked who most often used the
highest and lowest priority and severity values and gave a talk to
those users.
If using different words like "Importance" and "Destructiveness"
instead of "Priority" and "Severity" improves communication, then by
all means choose your SystemOfNames appropriately. The choice of this
page's name comes from existing usage in places like the bugzilla as
used in TheMozillaProject.
I understand the difference between priority and severity. But time is
linear, so I have to do things in a particular order. How do I use two
rankings to decide what do next?
As described in AutomatedTodoList: Priority, which ideally is adjusted
by someone acting in the role of dispatcher, is the deciding factor.
The true severity of a bug as determined by a tester or other
gatekeeper can be used as an input to the priority number, but it is
secondary. That said, if a developer has been assigned multiple bugs of
the same priority then it probably makes sense to use severity as a
guide to choosing what to do next.
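That policy - priority decides, severity only breaks ties - amounts to a two-key sort. A minimal sketch with hypothetical bug records, assuming the common convention that a lower number is more urgent:

```python
# Bugs assigned to one developer; lower number = more urgent
# (a common convention, assumed here).
bugs = [
    {"id": 101, "priority": 2, "severity": 1},
    {"id": 102, "priority": 1, "severity": 3},
    {"id": 103, "priority": 1, "severity": 1},
]

# Priority is the primary key; severity only breaks ties.
work_order = sorted(bugs, key=lambda b: (b["priority"], b["severity"]))
# -> ids 103 and 102 (both priority 1, ordered by severity), then 101
```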
Right now I just maintain issues, with no rankings, on IndexCards. And
then I keep asking my manager (outside of work, I'm my own manager, of
course) which card is most important to do next. The ExtremeProgramming
PlanningGame essentially does this when it's a team developing software
for a client.
Also, any attributes your BugTrackingSystem requires had better not
confuse users - otherwise they will accidentally or deliberately enter
incorrect or meaningless values. KeepItSimple! And if you let users set
a bug's priority/severity when they enter it, won't they just make
everything Very Important and Highly Destructive? -- ApoorvaMuralidhara
Somewhere else is described the following idea:
1. Rank each item of work according to its value V where 1 is the most
valuable.
2. Rank each item of work according to its perceived difficulty D where
1 is the most difficult.
3. For each item of work, compute P as V/D
4. Rank the items, starting with the smallest value of P
5. Rearrange where there are obvious dependency problems.
6. Work first on the item with the smallest value of P
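The steps above (minus the manual dependency rearrangement of step 5, which is left to human judgment) can be sketched as follows, with hypothetical work items:

```python
def schedule(items):
    """Order items by P = V/D, smallest first.

    V and D are 1-based ranks: V=1 is the most valuable item and
    D=1 is the most difficult. A small P means valuable (small V)
    and easy (large D). Dependency rearrangement (step 5) is left
    to human judgment.
    """
    return sorted(items, key=lambda name: items[name][0] / items[name][1])

# Hypothetical work items: name -> (value rank, difficulty rank)
items = {
    "login fix":  (1, 4),  # P = 0.25
    "new report": (2, 1),  # P = 2.00
    "ui polish":  (3, 2),  # P = 1.50
    "refactor":   (4, 3),  # P = 1.33
}
order = schedule(items)
# -> ['login fix', 'refactor', 'ui polish', 'new report']
```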
I strongly suggest you try a few experiments for yourself to see the
interplay between the values - reading examples of this just doesn't do
it justice. We've now used it for 12 months and it's proved very good at
increasing our value-delivered per time unit ratio.
See TaskSchedulingUsingZipfsLaw
Consider also the bugs which affect only developers, but do so on a
very regular basis. If the bug adds another 30 seconds to each test
cycle...
Rather than trying to set priorities in a vacuum, I suggest the
following: first address the issue that most improves your users'
quality of life. This may be a new feature, a screen change, data
validation routines, performance enhancements, or crash fixes. Check
with your users; their prioritizations may surprise you.
As mentioned above, Priority and Severity are two, very distinct
properties of a Bug/Task. The example of a backwards logo on a splash
screen is a good one - Minimal Severity (zero effect on software
functionality), but High Priority (extremely obvious and affects every
user).
The problem, of course, is to determine what tasks are the most
important in order to decide what should be worked on first. And,
unfortunately, neither property (Pri/Sev) alone can be used to "sort
the list" in order to make this decision. You can't work on all the
HiPri stuff while ignoring HiSev. Or vice versa.
Instead, rather than relying on Severity or Priority alone, you must
sort using a combination of both values to determine which tasks are
more "important" than others. Implementing this is fairly simple: If
both Priority and Severity parameters were assigned values from 1 (low)
to 5 (high), the Importance value is simply the summation of both
values. (For you database-types: SELECT severity + priority AS
importance FROM tasks ORDER BY importance DESC)
Using this method, a HiPri/HiSev task would have an Importance rating
of 10 and would receive attention before a LoPri/HiSev task
(Importance=1+5=6) or a HiPri/LoSev task (5+1=6) or a LoPri/LoSev task
(1+1=2).
-- KevinTraas
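The summation scheme above can be sketched in runnable form; this mirrors the SQL given in the text, using hypothetical task records on a 1 (low) to 5 (high) scale:

```python
# Importance = priority + severity, each on a 1 (low) .. 5 (high)
# scale, mirroring: SELECT severity + priority AS importance
#                   FROM tasks ORDER BY importance DESC
tasks = [
    {"name": "LoPri/LoSev", "priority": 1, "severity": 1},
    {"name": "HiPri/HiSev", "priority": 5, "severity": 5},
    {"name": "LoPri/HiSev", "priority": 1, "severity": 5},
    {"name": "HiPri/LoSev", "priority": 5, "severity": 1},
]
for t in tasks:
    t["importance"] = t["priority"] + t["severity"]
tasks.sort(key=lambda t: t["importance"], reverse=True)
# -> HiPri/HiSev (10), then the two ties at 6, then LoPri/LoSev (2)
```

Note that LoPri/HiSev and HiPri/LoSev tie at 6, so this scheme still needs a tiebreaking rule, a point taken up in the discussion that follows.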
The real thing we want to know is "What should we work on next?" This
is something that needs to be determined by users (or user surrogates).
There is no magic equation based on priority, severity, level of
effort, risk, or dozens of other possible valuations that will answer
that basic question.
In my experience, I have usually found it best to schedule a mix of
important but risky changes with low risk and unimportant changes. If
the important changes take longer than expected, the unimportant
changes can be dropped. It also seems to help with programmer morale to
get some easy, check the box type corrections now and then.
Don't get caught up in trying to determine precise numeric evaluations
for change requests. It is going to be a subjective evaluation to
create an appropriate mix of changes within a release cycle.
The answer to "what to work on next" is (or should be) determined by
priority alone; that seems to be the definition of "priority". (Of
course, given there is a small number of priority levels in most
tracking systems--usually 4 or 5--there's the issue of breaking ties).
The problem is how to assign priority levels. Severity is a bit more
clear-cut--the effects of a bug can be observed. But when you factor in
things like effort, risk, customer impact, etc.... then there is no
clear-cut answer. The best strategy seems to be having humans making
such decisions.
I think we might have slight semantic disagreement that masks
underlying agreement. I believe we have agreement that the decision on
"what to work on next" is subjective and best made by humans. The
"priority" of what to work on next, however, is not necessarily the
same as the "priority" of what a specific user may want addressed next.
Priority is a matter of perspective, and a project manager is often hit
with many competing "perspectives." The bottom line is that no
algebraic equation exists that gives an answer to what to work on
next, and any equation tried will not even come close to being fair
and appropriate.
· Concur with the disagreement. At my employer, though, "priority" is
unambiguously used as an indicator of what the ChangeControlBoard
thinks ought to be worked on next; not a metric of how pissed-off the
customer is. In other words, it's the output of the decision-making
process. The defect tracking software we use (an old one called DDTS)
only explicitly tracks one of the input parameters--"severity"--issues
like risk, difficulty are handled in the notes. But all are considered
in determining what gets fixed and what does not.
This document provides rules for priority adjustment of accessibility
bugs. The following list should help set and evaluate the priority of
a11y issues.
P1 = showstopper
· Action leading to assistive technology crash (JAWS, Access Bridge,
Java VM or NetBeans crash)
· User doesn't know where they are or what to do (no accessible
description or name), i.e., accessibility APIs not implemented for a
component [1]
· One of accessible description, name, or mnemonic is missing for a
major component such as a window, dialog box, or main menu
· User is in a situation from which there is no exit by keyboard
(unable to switch to another component)
· Some features exist that a blind user will never find - a component
is completely unreachable by keyboard (problem with tab traversal) [2]
· User invokes a feature but focus does not follow - the dialog hides,
minimizes, or immediately drops to the background
· Sound is being used as the only cue to the user, e.g. a beep to
indicate wrong selection
· Color is being used as the only cue to the user and the bug would
affect a large group of color-blind users, e.g. a green button changes
to red as a warning
· Exception loop while A11Y tools are running
Notes:
[1] Transparent components are not supposed to implement the a11y API
- e.g. it is not important to know the accessible name of a
JScrollPane containing just a JTable.
[2] Some features are pure GUI tools and there is no reason to make
them accessible. In such cases an alternative way must exist for users
with disabilities. Such non-a11y functionality must then provide a
clue for blind users (some support that tells the user how to get the
desired functionality) - something like the following code:
setAccessibleDescription("<Feature> is not accessible by keyboard this
way - use <shortcut> on <component> instead");
P2
· Inconsistent or non-standard methods of access by keyboard. For
example, if guidelines or IDE conventions say that a key should work in
a certain way, and a component is only keyboardable in some other way
(it is not completely unreachable, but reaching it would be hard)
· No accessible name for a sub-component such as a button or icon
· Fonts are hardcoded
· Exception fired while A11Y tools are running
P3
· A component is difficult to reach by keyboard, e.g. wrong tab order
· No accessible description or mnemonic for a sub-component such as a
button or icon
· Color is being used as the only cue to the user but the bug would
only affect a small group of color-blind users (e.g. blue/yellow color
distinction) or some non-crucial function (compiled/uncompiled is not
crucial. Failed/succeeded is.)
· Accessible description is not understandable; the user is confused
and not sure what to do
P4
· Accessible description is hard to understand (not expressed
precisely, but points the user the right way)
P5
· Typo in accessible description or name