Test Scripting

Nick Jenkins

There are several schools of thought on test scripting. In risk-averse industries such as defence and finance there is a tendency to emphasise scripting tests before they are executed. These industries are more concerned with the potential loss from a software defect than the potential gain from introducing a new piece of software. As a consequence there is a heavy emphasis on verifiable test preparation (although observance of this verification might only be lip service!). And in some industries, external compliance issues (legal compliance, for example) mean that a script-heavy approach is mandated.

On the other hand, in Commercial-Off-The-Shelf (COTS) software development a looser approach is normally taken. Since speed-to-market is more important than the risk of a single software defect, there is considerable latitude in the test approach. Specific test cases may be loosely documented or not documented at all, and testers are given a great deal of freedom in how they perform their testing.

The ultimate extension of this is exploratory or unscripted testing.

In this form of testing, there is a considerable amount of preparation done but test cases are not pre-scripted. The tester uses their experience and a structured method to 'explore' the software and uncover defects. They are free to pursue areas which they think are more risky than others.

Scripting, it is argued, is a waste of time. On a big project the amount of time spent on scripting can actually exceed the amount of time spent on execution. If you have an experienced, educated tester with the right set of tools and the right mindset, it would be more effective and more cost-efficient to let them get at the software right away and find some defects.

This concept is almost heresy in some camps.

If you are responsible for releasing a piece of software that causes financial loss you could be liable for damages. Further, if you cannot prove that you have conducted due diligence through adequate testing you may be guilty of professional negligence. One of the goals of test preparation therefore is to provide an audit trail which shows the efforts you have made to verify the correct behaviour of the software.

Test Cases

A test case documents a test, intended to prove a requirement. The relationship is not always one-to-one; sometimes many test cases are required to prove one requirement, and sometimes the same test case must be extrapolated over many screens or many workflows to completely verify a requirement. There should, however, be at least one test case per requirement.

Some methodologies (like RUP) specify there should be two test cases per requirement – a positive test case and a negative test case. A positive test case is intended to prove that the function-under-test behaves as required with correct input; a negative test case is intended to prove that the function-under-test does not fail with incorrect input (or responds gracefully to the resulting error).
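
As a loose illustration only (this is not RUP's template), the positive/negative pair might look like the following when automated. The withdraw() function and its error behaviour are invented for the example, and the pytest framework is assumed:

    # A minimal sketch in Python/pytest, not a template from any methodology.
    # The withdraw() function and its error behaviour are invented for the example.
    import pytest

    def withdraw(balance, amount):
        """Toy function-under-test: returns the new balance after a withdrawal."""
        if amount <= 0 or amount > balance:
            raise ValueError("invalid withdrawal amount")
        return balance - amount

    def test_withdraw_positive():
        # Positive case: correct input produces the required behaviour.
        assert withdraw(100, 30) == 70

    def test_withdraw_negative():
        # Negative case: incorrect input is rejected gracefully rather than
        # silently corrupting the balance.
        with pytest.raises(ValueError):
            withdraw(100, 500)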

In reality the number of cases depends on the latitude you allow your testers.

Storing Test Cases

There are a variety of ways to store test cases. The simplest is in a word-processing document or a spreadsheet.
One common form is a TSM or Test Script Matrix (also known as a traceability matrix). In a TSM each line item represents a test case, with the various elements of each case stored in the columns. These can be good for a small test effort, since it is relatively easy to track scripting and execution in a spreadsheet, but in larger projects they prove difficult to manage. The extent to which they actually aid traceability is also questionable, since they don't enforce change control and aren't very good at one-to-many mappings.

In more complex software development efforts a database or specialist test case management tool can be used. This has the benefit of enforcing a standard format and validation rules on the contents of the test case. It can also be used to record execution on multiple test runs, produce reports and even assist with traceability by linking back to requirements in a separate database. It can also enforce change control and track the history of changes and execution.
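
As a sketch of the database approach – and only a sketch – the example below uses nothing more than Python's built-in sqlite3 module. The table and column names are invented here and simply mirror the test case elements described in the next section:

    # A minimal sketch of the database approach, using only Python's built-in
    # sqlite3 module. The table and column names are invented; they simply mirror
    # the test case elements described in the next section.
    import sqlite3

    conn = sqlite3.connect("test_cases.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS test_case (
            id              INTEGER PRIMARY KEY,
            requirement_id  TEXT NOT NULL,  -- link back to a requirement for traceability
            title           TEXT NOT NULL,
            priority        TEXT,
            status          TEXT,           -- e.g. Design / Ready / Running / Pass / Failed / Error
            initial_config  TEXT,
            software_config TEXT,
            steps           TEXT,
            expected        TEXT
        )
    """)
    conn.execute(
        "INSERT INTO test_case (requirement_id, title, priority, status) VALUES (?, ?, ?, ?)",
        ("REQ-042", "Withdrawal larger than balance is rejected", "Critical", "Ready"),
    )
    conn.commit()
    conn.close()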

Elements of a Test Case

The following table lists the suggested items a test case should include:

Item                     Description
Title                    A unique and descriptive title for the test case
Priority                 The relative importance of the test case (critical, nice-to-have, etc.)
Status                   For live systems, an indicator of the state of the test case.
                         Typical states could include:
                         Design – test case is still being designed
                         Ready – test case is complete, ready to run
                         Running – test case is being executed
                         Pass – test case passed successfully
                         Failed – test case failed
                         Error – test case is in error and needs to be rewritten
Initial configuration    The state of the program before the actions in the “steps” are
                         followed. All too often this is omitted and the reader must guess or
                         intuit the correct prerequisites for conducting the test case.
Software configuration   The software configuration for which this test is valid. It could
                         include the version and release of the software-under-test as well as
                         any relevant hardware or software platform details (e.g. WinXP vs Win95).
Steps                    An ordered series of steps to conduct during the test case. These must
                         be detailed and specific; how detailed depends on the level of scripting
                         required and the experience of the tester involved.
Expected behaviour       What is expected of the software upon completion of the steps. Recording
                         this allows the test case to be validated without recourse to the tester
                         who wrote it.
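
Pulled together, these elements map naturally onto a simple record. The sketch below is illustrative only: the field names follow the table above and the sample values are invented.

    # Illustrative only: a test case record built with Python's dataclasses and
    # enum modules. The field names follow the table above; the values are invented.
    from dataclasses import dataclass
    from enum import Enum

    class Status(Enum):
        DESIGN = "Design"
        READY = "Ready"
        RUNNING = "Running"
        PASS = "Pass"
        FAILED = "Failed"
        ERROR = "Error"

    @dataclass
    class TestCase:
        title: str
        priority: str
        status: Status
        initial_configuration: str
        software_configuration: str
        steps: list[str]
        expected_behaviour: str

    example = TestCase(
        title="Withdrawal larger than balance is rejected",
        priority="Critical",
        status=Status.READY,
        initial_configuration="Account exists with a cleared balance of $100",
        software_configuration="Release 2.1 on WinXP",
        steps=["Log in as the account holder", "Attempt to withdraw $500"],
        expected_behaviour="The withdrawal is refused and an error message is displayed",
    )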

Tracking Progress

Depending on your test approach, tracking your progress will either be difficult or easy. If you use a script-heavy approach, tracking progress is easy. All you need to do is compare the number of scripts you have left to execute with the time available and you have a measure of your progress.
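
As a trivial illustration, with invented figures:

    # Script-based progress tracking; all figures are invented for illustration.
    scripts_remaining = 40
    scripts_per_day = 2        # observed execution rate
    days_left = 15

    days_needed = scripts_remaining / scripts_per_day
    print(f"Need {days_needed:.0f} days, have {days_left} left:",
          "on track" if days_needed <= days_left else "behind schedule")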

If you don't script, then tracking progress is more difficult. You can only measure the amount of time you have left and use that as a guide to progress.

If you use advanced metrics (see next chapter) you can compare the number of defects you've found with the number of defects you expect to find. This is an excellent way of tracking progress and works irrespective of your scripting approach.
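
A sketch of the same idea with defect counts, again using invented figures; where the 'expected' number comes from (defect density, historical data) is a topic for the next chapter:

    # Invented figures; the expected total would come from the metrics discussed
    # in the next chapter (e.g. historical defect density for this kind of system).
    defects_expected_total = 80
    proportion_of_testing_done = 0.5
    defects_expected_by_now = defects_expected_total * proportion_of_testing_done
    defects_found = 12

    if defects_found < 0.5 * defects_expected_by_now:
        print("Far fewer defects than expected: check the test cases and how they are being run")
    else:
        print(f"Found {defects_found} of roughly {defects_expected_by_now:.0f} expected so far")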

Adjusting the Plan

But tracking progress without adjusting your plan is wasting information. Suppose you script 100 test cases, each taking one day to execute. The project manager has given you 100 days to execute your cases. You are 50 days into the project and are on schedule, having executed 50% of your test cases.
But you've found no defects.

The hopeless optimist will say, “Well! Maybe there aren't any!” and stick to their plan. The experienced tester will say something unprintable and change their plan.

The chance of being 50% of the way through test execution and not finding defects is extremely slim. It either means there is a problem with your test cases or there is a problem with the way in which they are being executed. Either way, you're looking in the wrong place.

Regardless of how you prepare for testing you should have some kind of plan. If that plan is broken down into different chunks you can then examine the plan and determine what is going wrong. Maybe development hasn't delivered the bulk of the functional changes yet? Maybe the test cases are out of date or aren't specific enough? Maybe you've underestimated the size of the test effort?

Whatever the problem, you need to jump on it quickly.

The other time you'll need your plan is when it gets adjusted for you.

You've planned to test function A, but the development manager informs you function B has been delivered instead and function A is not ready yet. Or you are halfway through your test execution when the project manager announces you have to finish two weeks earlier. If you have a plan, you can change it.

Coping with the Time Crunch

The single most common thing a tester has to deal with is being 'crunched' on time. Because testing tends to come at the end of a development cycle, it tends to get hit the worst by time pressures. All kinds of things can conspire to mean you have less time than you need. Here's a list of the most common causes:

• Dates slip – things are delivered later than expected

• You find more defects than you expected

• Important people leave the company or go off sick

• Someone moves the end date, or changes the underlying requirements the software is supposed to fulfil

There are three basic ways to deal with this:

• Work harder – the least attractive and least intelligent alternative. Working weekends or overtime can increase your productivity, but it will lead to burnout in the team and probably compromise the effectiveness of their work.

• Get more people – also not particularly attractive. Throwing people at a problem rarely speeds things up. New people need to be trained and managed, and they cause additional communication complexity that gets worse the more you add.

• Prioritise – we've already decided we can't test everything, so maybe we can make some intelligent decisions about what we test next? Test the riskiest things, the things you think will be buggy, the things the developers think will be buggy, the things with the highest visibility or importance (a crude risk-ordering sketch follows this list). Push secondary or 'safe' code to one side to test if you have time later, but make everyone aware of what you're doing – otherwise you might end up being the only one held accountable when buggy code is released to the customer.
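
One crude way to express that prioritisation, sketched below with invented test case names and ratings, is to rank cases by a simple risk estimate (likelihood of failure multiplied by impact) and work down the list until time runs out:

    # A crude risk-ordering sketch; the test case names and ratings are invented.
    # Risk is taken here as likelihood-of-failure times impact, each rated 1-5.
    test_cases = [
        {"title": "Funds transfer between accounts", "likelihood": 4, "impact": 5},
        {"title": "Monthly statement formatting",    "likelihood": 2, "impact": 2},
        {"title": "New overdraft calculation",       "likelihood": 5, "impact": 4},
        {"title": "Help-screen text",                "likelihood": 1, "impact": 1},
    ]

    for case in sorted(test_cases, key=lambda c: c["likelihood"] * c["impact"], reverse=True):
        print(f'{case["likelihood"] * case["impact"]:>2}  {case["title"]}')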







