Unit, Integration and System Testing

The first type of testing that can be conducted in any development phase is unit testing. In unit testing, discrete components of the final product are tested independently before being assembled into larger units. Units are typically tested through the use of ‘test harnesses’ which simulate the context into which the unit will be integrated. The test harness provides a number of known inputs and measures the outputs of the unit under test, which are then compared with expected values to determine whether any issues exist.
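As a minimal sketch of the idea, the Python unittest code below acts as a tiny harness around a hypothetical calculate_discount unit (the unit and its pricing rule are invented for illustration): known inputs go in, and the actual outputs are compared with expected values.

    import unittest

    def calculate_discount(order_total):
        """Unit under test: a hypothetical pricing rule, invented for illustration."""
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.10 if order_total >= 100 else 0.0

    class DiscountHarness(unittest.TestCase):
        """Test harness: feeds known inputs and checks outputs against expected values."""

        def test_large_order_earns_discount(self):
            self.assertEqual(calculate_discount(200), 20.0)

        def test_small_order_earns_nothing(self):
            self.assertEqual(calculate_discount(50), 0.0)

        def test_negative_total_is_rejected(self):
            # The harness also confirms the unit fails cleanly on bad input.
            with self.assertRaises(ValueError):
                calculate_discount(-1)

    if __name__ == "__main__":
        unittest.main()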

In integration testing, smaller units are integrated into larger units and larger units into the overall system. This differs from unit testing in that units are no longer tested independently but in groups, with the focus shifting from the individual units to the interactions between them.

At this point “stubs” and “drivers” take over from test harnesses.

A stub is a simulation of a particular sub-unit which can stand in for that unit in a larger assembly. For example, if units A, B and C constitute the major parts of unit D, then the overall assembly could be tested by assembling units A and B with a simulation of C, if C were not complete. Similarly, if unit D itself were not complete, it could be represented by a “driver” – a simulation of the super-unit that calls the assembled parts.
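To make this concrete, here is a hedged Python sketch with invented units: A and B are complete, the unfinished unit C is replaced by a stub with canned behaviour, and the test case itself plays the role of the driver, simulating whatever would normally call unit D.

    import unittest
    from unittest import mock

    # Units A and B are complete (both invented for illustration).
    def unit_a(x):
        return x + 1

    def unit_b(x):
        return x * 2

    # Unit D assembles A, B and C; C is passed in so a stub can replace it.
    def unit_d(x, unit_c):
        return unit_c(unit_a(x) + unit_b(x))

    class AssemblyTest(unittest.TestCase):
        """Acts as the driver: it stands in for whatever would normally call unit D."""

        def test_a_and_b_with_c_stubbed(self):
            # Stub: stands in for the unfinished unit C with canned behaviour.
            stub_c = mock.Mock(return_value=42)
            self.assertEqual(unit_d(3, stub_c), 42)
            stub_c.assert_called_once_with(10)  # A(3)=4, B(3)=6, 4+6=10

    if __name__ == "__main__":
        unittest.main()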

As successive areas of functionality are completed, they can be evaluated and integrated into the overall project. Without integration testing you are limited to testing a completely assembled product or system, which is inefficient and error-prone. It is much better to test the building blocks as you go and build your project from the ground up in a series of controlled steps.

System testing represents the overall test on an assembled software product. It is particularly important because it is only at this stage that the full complexity of the product is present. The focus in system testing is typically to ensure that the product responds correctly to all possible input conditions and (importantly) that it handles exceptions in a controlled and acceptable fashion. System testing is often the most formal and structured stage of testing.
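For example, a system-level test will often feed deliberately malformed input to the system’s outermost entry point to confirm that it fails in a controlled fashion. In the Python sketch below, process_order is a hypothetical stand-in for such an entry point.

    import unittest

    def process_order(payload):
        """Hypothetical system entry point: validates input and reports errors cleanly."""
        if not isinstance(payload, dict) or "quantity" not in payload:
            return {"status": "error", "message": "malformed order"}
        if payload["quantity"] <= 0:
            return {"status": "error", "message": "quantity must be positive"}
        return {"status": "ok"}

    class SystemTests(unittest.TestCase):
        def test_valid_order_is_accepted(self):
            self.assertEqual(process_order({"quantity": 5})["status"], "ok")

        def test_bad_input_is_handled_in_a_controlled_fashion(self):
            # The system should report an error, not raise an unhandled exception.
            self.assertEqual(process_order(None)["status"], "error")
            self.assertEqual(process_order({"quantity": -1})["status"], "error")

    if __name__ == "__main__":
        unittest.main()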

The SIT or Test Team

In large organisations it is common to find a “SIT” or independent test team. SIT usually stands for “Systems Integration Testing” or “Systems Implementation Testing” or possibly “Save It, Testing!”

And is the role of this team unit testing, integration testing or system testing?

Well, nobody really knows. The role of the SIT team is usually not unit, integration or system testing but a combination of all three. They are expected to get involved in unit testing with developers, to carry through the integration of units into larger components, and then to provide end-to-end testing of the systems.

Sometimes the expectation is that the SIT team will become the company’s Quality Assurance team, even though they have no direct influence on the way software is developed. The assumption is that increasing the length and rigour of testing will improve the quality of released products – and so it does.

But it does nothing to increase the quality of built products – so it's not really QA.

In the best of worlds, this team can act as an agent of change. It can introduce measures and processes which prevent defects from being written into the code in the first place; it can work with development teams to identify areas which need fixing; and it can highlight successful improvements to development processes.

In the worst of worlds, the pressure on software development drives longer and longer projects with extended test cycles in which huge numbers of defects are found and project schedules slip. The testing team attracts blame for finding defects and for long testing cycles, and nobody knows how to solve the problem.

Acceptance Testing

Large-scale software projects often have a final phase of testing called “Acceptance Testing”.

Acceptance testing forms an important and distinctly separate phase from previous testing efforts, and its purpose is to ensure that the product meets minimum defined standards of quality prior to it being accepted by the client or customer.

This is where someone has to sign the cheque.

Often the client will have its end-users conduct the testing to verify the software has been implemented to their satisfaction (this is called “User Acceptance Testing” or “UAT”). UAT frequently tests processes outside of the software itself to make sure the whole solution works as advertised.

While other forms of testing can be more ‘free form’, the acceptance test phase should represent a planned series of tests and release procedures to ensure the output from the production phase reaches the end-user in an optimal state, as free of defects as is humanly possible. In theory, Acceptance Testing should also be fast and relatively painless: previous phases of testing will have eliminated any issues, and this should be a formality. In immature software development, the Acceptance Test instead becomes a last trap for issues, back-loading the project with risk.

Acceptance testing also typically focusses on artefacts outside the software itself. A solution often has many such elements, which might include: manuals and documentation; process changes; training material; operational procedures; and operational performance measures (SLAs).

These are typically not tested in previous phases of testing, which focus on the functional aspects of the software itself. But the correct delivery of these other elements is important for the success of the solution as a whole. They usually cannot be evaluated until the software is complete, because evaluating them requires a fully functional piece of software, with its new workflows and new data requirements.

Test Automation

Organisations often seek to reduce the cost of testing. Most organisations aren't comfortable with reducing the amount of testing, so instead they look at improving its efficiency. Luckily, there are a number of software vendors who claim to be able to do just this! They offer automated tools which take a test case, automate it and run it against a software target repeatedly. Music to management's ears!

However, there are some myths about automated test tools that need to be dispelled:

  • Automated testing does not find more bugs than manual testing – an experienced manual tester who is familiar with the system will find more new defects than a suite of automated tests.
  • Automation does not fix the development process – as harsh as it sounds, testers don’t create defects, developers do. Automated testing does not improve the development process, although it might highlight some of the issues.
  • Automated testing is not necessarily faster – the upfront effort of automating a test is much higher than that of conducting a manual test, so it will take longer and cost more to test the first time around. Automation only pays off over time, and automated tests also cost more to maintain.
  • Not everything needs to be automated – some things don’t lend themselves to automation, some systems change too fast for automation, and some tests benefit from only partial automation. You need to be selective about what you automate to reap the benefits.

But, in their place, automated test tools can be extremely successful.

What is Automated Testing Good For?

Automated testing is particularly good at:

  • Load and performance testing – automated tests are a prerequisite of conducting load and performance testing. It is not feasible to have 300 users manually test a system simultaneously; it must be automated.
  • Smoke testing – a quick and dirty test to confirm that the system ‘basically’ works. A system which fails a smoke test is automatically sent back to the previous stage before work is conducted, saving time and effort.
  • Regression testing – testing functionality that should not have changed in a current release of code. Existing automated tests can be run and they will highlight changes in the functionality they have been designed to test (in incremental development, builds can be quickly tested and reworked if they have altered functionality delivered in previous increments).
  • Setting up test data or pre-test conditions – an automated test can be used to set up test data or test conditions which would otherwise be time consuming.
  • Repetitive testing which includes manual tasks that are tedious and prone to human error (e.g. checking account balances to 7 decimal places) – see the sketch after this list.
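The last point is easy to illustrate. In the Python sketch below, compute_balance and its expected values are invented for illustration; in practice the expected figures would come from a reference source such as a legacy system or a spreadsheet. The machine checks all seven decimal places tirelessly and identically on every run.

    import unittest

    def compute_balance(principal, rate, periods):
        """Hypothetical compound-interest calculation under test."""
        return principal * (1 + rate) ** periods

    class RepetitiveBalanceChecks(unittest.TestCase):
        def test_balances_to_seven_decimal_places(self):
            # Tedious and error-prone by hand; trivial and exact when automated.
            cases = [
                # (principal, rate, periods, expected balance)
                (1000.00, 0.05, 12, 1795.85632602),
                (250.50, 0.01, 6, 265.91079773),
            ]
            for principal, rate, periods, expected in cases:
                self.assertAlmostEqual(
                    compute_balance(principal, rate, periods), expected, places=7
                )

    if __name__ == "__main__":
        unittest.main()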