Defect Management

Nick Jenkins

Defects need to be handled in a methodical and systematic fashion. There's no point in finding a defect if it's not going to be fixed. There's no point getting it fixed if you don't know it has been fixed, and there's no point in releasing software if you don't know which defects have been fixed and which remain.

How will you know?

The answer is to have a defect tracking system.

The simplest can be a database or a spreadsheet. A better alternative is a dedicated system that enforces the rules and process of defect handling and makes reporting easier. Some of these systems are costly but there are many freely available variants.
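To give a sense of how little is needed to get started, here is a minimal sketch of a home-grown tracker built on Python's standard-library sqlite3 module; the table and column names are illustrative rather than taken from any real tool:

import sqlite3

# Open (or create) a small database to hold defect reports.
conn = sqlite3.connect("defects.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS defects (
        id        INTEGER PRIMARY KEY AUTOINCREMENT,
        title     TEXT NOT NULL,
        severity  TEXT CHECK (severity IN ('High', 'Medium', 'Low')),
        status    TEXT NOT NULL DEFAULT 'New',
        raised_by TEXT,
        raised_on TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# Log a new defect; the database itself enforces the severity values.
conn.execute(
    "INSERT INTO defects (title, severity, raised_by) VALUES (?, ?, ?)",
    ("'Closed' field in 'Account Closure' screen accepts invalid date",
     "High", "tester1"),
)
conn.commit()

Even a sketch like this enforces some rules (valid severities, a default status) in a way a plain spreadsheet cannot.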

Importance of Good Defect Reporting

Cem Kaner said it best – “the purpose of reporting a defect is to get it fixed.”

A badly written defect report wastes time and effort for many people. A concisely written, descriptive report results in the elimination of a bug in the easiest possible manner.

Also, for testers, defect reports represent the primary deliverable for their work. The quality of a tester’s defect reports is a direct representation of the quality of their skills.

Defect Reports have a longevity that far exceeds their immediate use. They may be distributed beyond the immediate project team and passed on to various levels of management within different organisations. Developers and testers alike should be careful to always maintain a professional attitude when dealing with defect reports.

Characteristics of a Good Defect Report

• Objective – criticising someone else’s work can be difficult. Care should be taken that defects are objective, non-judgemental and unemotional. e.g. don’t say “your program crashed” say “the program crashed” and don’t use words like “stupid” or “broken”.

• Specific – one report should be logged per defect and only one defect per report.

• Concise – each defect report should be simple and to-the-point. Defects should be reviewed and edited after being written to reduce unnecessary complexity.

• Reproducible – the single biggest reason for developers rejecting defects is because they can’t reproduce them. As a minimum, a defect report must contain enough information to allow anyone to easily reproduce the issue.

• Explicit – defect reports should state information clearly or they should refer to a specific source where the information can be found. e.g. “click the button to continue” implies the reader knows which button to click, whereas “click the ‘Next’ button” explicitly states what they should do.

• Persuasive – the pinnacle of good defect reporting is the ability to champion defects by presenting them in a way which makes developers want to fix them.

Isolation and Generalisation

Isolation is the process of examining the causes of a defect.

While the exact root cause might not be determined, it is important to try to separate the symptoms of the problem from its cause. Isolating a defect is generally done by reproducing it multiple times in different situations to get an understanding of how and when it occurs.

Generalisation is the process of understanding the broader impact of a defect.

Because developers reuse code elements throughout a program a defect present in one element of code can manifest itself in other areas. A defect that is discovered as a minor issue in one area of code might be a major issue in another area. Individuals logging defects should attempt to extrapolate where else an issue might occur so that a developer will consider the full context of the defect, not just a single isolated incident.

A defect report that is written without isolating and generalising the defect is a half-reported defect.

Severity

The importance of a defect is usually denoted as its “severity”.

There are many schemes for assigning defect severity – some complex, some simple.

Almost all feature “Severity-1” and “Severity-2” classifications, which are commonly held to be defects serious enough to delay completion of the project. Normally a project cannot be completed with outstanding Severity-1 issues, and only with a limited number of outstanding Severity-2 issues.

Often problems occur with overly complex classification schemes. Developers and testers get into arguments about whether a defect is Sev-4 or Sev-5 and time is wasted.

I therefore tend to favour a simpler scheme.

Defects should be assessed in terms of impact and probability. Impact is a measure of the seriousness of the defect when it occurs and can be classed as “high” or “low”: high impact implies that the user cannot complete the task at hand; low impact implies there is a workaround or the error is merely cosmetic.

Probability is a measure of how likely the defect is to occur and again is assigned either “high” or “low”.

Defects can then be assigned a severity based on the combination:

	Impact	Probability	Severity
	High	High		High
	High	Low		Medium
	Low	Low		Low

Figure: Relative severity in defects

This removes the majority of debate in the assignment of severity.
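Encoded directly, the scheme is mechanical enough to automate in a tracking tool. Here is a minimal sketch in Python; note that treating the low-impact/high-probability combination as “Medium” is an assumption, since the table above does not list that case:

def severity(impact: str, probability: str) -> str:
    """Map an impact/probability pair ("high"/"low") to a severity."""
    pair = (impact.lower(), probability.lower())
    if pair == ("high", "high"):
        return "High"
    if pair == ("low", "low"):
        return "Low"
    # High/Low is Medium per the table above; mapping Low/High the
    # same way is an assumption, as the table does not list it.
    return "Medium"

assert severity("High", "Low") == "Medium"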

Status

Status represents the current stage of a defect in its life cycle or workflow.

Commonly used status flags are:

• New – a new defect has been raised by testers and is awaiting assignment to a developer for resolution

• Assigned – the defect has been assigned to a developer for resolution

• Rejected – the developer was unable to reproduce the defect and has rejected the defect report, returning it to the tester that raised it

• Fixed – the developer has fixed the defect and checked in the appropriate code

• Ready for test – the release manager has built the corrected code into a release and has passed that release to the tester for retesting

• Failed retest – the defect is still present in the corrected code and the defect is passed back to the developer

• Closed – the defect has been correctly fixed and the defect report may be closed, subsequent to review by a test lead.
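
In a tracking tool this workflow amounts to a small state machine. The following sketch in Python is a minimal illustration; the exact transition set, such as returning a rejected defect to “Assigned” for rework, is an assumption based on the descriptions above:

# Allowed moves between status flags, per the descriptions above.
ALLOWED = {
    "New":            {"Assigned"},
    "Assigned":       {"Rejected", "Fixed"},
    "Rejected":       {"Assigned"},           # assumed: rework after rejection
    "Fixed":          {"Ready for test"},
    "Ready for test": {"Failed retest", "Closed"},
    "Failed retest":  {"Assigned"},           # back to the developer
    "Closed":         set(),                  # terminal state
}

def transition(current: str, new: str) -> str:
    """Return the new status, refusing any illegal jump."""
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current!r} -> {new!r}")
    return new

status = transition("New", "Assigned")    # fine
# transition("New", "Closed")             # raises ValueError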

These status flags define a life cycle whereby a defect will progress from “New” through “Assigned” to (hopefully) “Fixed” and “Closed”. The following swim lane diagram depicts the roles and responsibilities in the defect management life cycle:

Figure: Defect life cycle swim lane diagram

Elements of a Defect Report


Title

A unique, concise and descriptive title for a defect is vital. It will allow the defect to be easily identified and discussed.

Good: “Closed” field in “Account Closure” screen accepts invalid date
Bad: “Closed field busted”

Severity

An assessment of the impact of the defect on the end user (see above).

Status

The current status of the defect (see above).

Initial configuration

The state of the program before the actions in the “steps to reproduce” are followed. All too often this is omitted and the reader must guess or intuit the correct prerequisites for reproducing the defect.

Software Configuration

The version and release of the software-under-test, as well as any relevant hardware or software platform details (e.g. WinXP vs Win95).

Steps to Reproduce

An ordered series of steps to reproduce the defect.

Good:
1. Enter “Account Closure” screen
2. Enter an invalid date such as “54/01/07” in the “Closed” field
3. Click “Okay”

Bad: If you enter an invalid date in the closed field it accepts it!

Expected behaviour

What was expected of the software upon completion of the steps to reproduce.

Good: The functional specification states that the “Closed” field should only accept valid dates in the format “dd/mm/yy”.
Bad: The field should accept proper dates.

Actual behaviour

What the software actually does when the steps to reproduce are followed.

Good: Invalid dates (e.g. “54/01/07”) and dates in alternate formats (e.g. “07/01/54”) are accepted and no error message is generated.
Bad: Instead it accepts any kind of date.

Impact

An assessment of the impact of the defect on the software-under-test. It is important to include something beyond the simple “severity” to allow readers to understand the context of the defect report.

Good: An invalid date will cause the month-end “Account Closure” report to crash the mainframe and corrupt all affected customer records.
Bad: This is serious dude!

(Proposed solution)

An optional item of information testers can supply is a proposed solution. Testers often have unique and detailed knowledge of the products they test, and suggesting a solution can save designers and developers a lot of time.

Priority

An optional field to allow development managers to assign relative priorities to defects of the same severity.

Root Cause

An optional field allowing developers to assign a root cause to the defect, such as “inadequate requirements” or “coding error”.
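Taken together, the elements above form a structured record that most tracking systems model directly. Here is a minimal sketch in Python; the field names are illustrative:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DefectReport:
    title: str
    severity: str                            # High / Medium / Low
    status: str = "New"
    initial_configuration: str = ""
    software_configuration: str = ""
    steps_to_reproduce: List[str] = field(default_factory=list)
    expected_behaviour: str = ""
    actual_behaviour: str = ""
    impact: str = ""
    proposed_solution: Optional[str] = None  # optional, from the tester
    priority: Optional[str] = None           # optional, from the dev manager
    root_cause: Optional[str] = None         # optional, from the developer

report = DefectReport(
    title="'Closed' field in 'Account Closure' screen accepts invalid date",
    severity="High",
    steps_to_reproduce=[
        "Enter 'Account Closure' screen",
        "Enter an invalid date such as '54/01/07' in the 'Closed' field",
        "Click 'Okay'",
    ],
)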
