Software System Testing

Software testing is the process of finding defects in a software system and improving its overall quality. It is a process of validation and verification: confirming that the system meets the business and technical requirements, performs as expected, and achieves the goals it was designed for.

Validation = the system's purpose is correct and it delivers overall satisfaction to the system user. Did we build the right software?

Verification = the system meets the requirements given by the user. Did we build the software right?


Seven Principles of Software Testing


  1. Testing shows presence of defects

Testing can show that defects are present, but it cannot prove that none exist; it identifies defects only in the areas tested. In other words, testing is a discovery of defects, and it must be assumed that other undiscovered defects remain in the system.


  2. Exhaustive testing is impossible

Tests should focus on high-potential problem areas (see Defect Clustering) and may include some risk analysis to determine which areas need less extensive testing.


  3. Early Testing

Testing should be done as early as possible in the SDLC. It is much cheaper to fix defects early than late, and finding them early may also help refine the system requirements sooner.


  4. Defect Clustering

In practice, most defects are found clustered together in specific areas of the system; this is often observed during pre-release testing. The Pareto Principle applies: roughly 80% of problems are contained in 20% of the modules.


  5. Pesticide Paradox

Running the same tests over the same areas again and again will eventually stop finding new defects. As a result, test cases need to be revised and updated as the system evolves.


  6. Testing is context dependent

You cannot simply reuse the same test cases across different systems. The tests must be appropriate to the system under test, its requirements, and the scenario.


  7. Absence of errors fallacy

Even if we were able to test exhaustively and identify 99% of defects, this would not guarantee the absence of errors: the requirements the tests were executed against could themselves be wrong, or the use case of the system may change. In other words, scenarios beyond the original tests can still show the system to be in error.


Testing Methods


There are two main methods when testing a software system. These are:

  • Static Testing

Finds defects without executing the system. Generally this is done by examining the code, and it is often used as part of the verification process. It goes closely with White-Box Testing.
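As a minimal sketch of static testing (assuming a Python codebase; the bare-`except` rule and the helper name `find_bare_excepts` are illustrative, not from the text), a static check can flag a defect pattern by inspecting source code without ever running it:

```python
# Static-testing sketch: inspect source code without executing it.
# Uses Python's ast module to flag bare `except:` clauses, a common defect.
import ast

SAMPLE = """
try:
    risky()
except:
    pass
"""

def find_bare_excepts(source):
    """Return line numbers of bare `except:` handlers in the given source."""
    tree = ast.parse(source)  # parses the code; nothing is executed
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

print(find_bare_excepts(SAMPLE))
```

Note that `risky()` is never called: static testing reasons about the code's structure, which is why it can run long before the system is executable.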

White-box testing tests the internal code and functionality of the software system. It often includes Unit Tests, Integration Tests, and System Tests, and is typically run directly against the source code.
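A brief white-box unit-test sketch follows. The function `clamp` is hypothetical, chosen only for illustration; each test deliberately targets one internal branch of the implementation, which is what makes the approach "white box":

```python
# White-box unit-test sketch: each case targets a specific branch of clamp().

def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if low > high:                        # guard-clause branch
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))     # lower / upper / pass-through branches

def test_clamp():
    assert clamp(-5, 0, 10) == 0          # exercises the lower bound
    assert clamp(99, 0, 10) == 10         # exercises the upper bound
    assert clamp(7, 0, 10) == 7           # value already in range
    try:
        clamp(1, 10, 0)                   # exercises the guard clause
        assert False, "expected ValueError"
    except ValueError:
        pass

test_clamp()
```

Tests like these require reading the implementation to know which branches exist, so they are written by someone with access to the code.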


  • Dynamic Testing

Finds defects while the system is under execution. Generally an end user can find these by executing the software and evaluating the results. This can be used as part of the validation process and goes closely with Black-Box Testing.

Black-box testing tests the system's functionality from the end user's point of view. It is often performed by testers who mimic end-user behavior. Like white-box testing, it can include Unit Tests, Integration Tests, and System Tests, but these tests target the overall use case rather than the code itself.
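To make the contrast concrete, here is a black-box sketch. The function `shipping_cost` and its pricing rules are invented for illustration; the point is that the tests exercise only the published contract (inputs and outputs), never the internals:

```python
# Black-box sketch: the tester knows only the published contract, not the code.
# Hypothetical spec: orders up to 1 kg ship for 5.00; each additional kg adds 1.50.
import math

def shipping_cost(weight_kg):
    # Implementation is opaque to the tester; shown here only so the example runs.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    extra_kg = max(0, math.ceil(weight_kg) - 1)
    return 5.00 + 1.50 * extra_kg

# The tests below use only inputs and outputs named in the spec.
assert shipping_cost(0.5) == 5.00      # within the base tier
assert shipping_cost(1.0) == 5.00      # boundary of the base tier
assert shipping_cost(2.0) == 6.50      # one additional kg
try:
    shipping_cost(0)                   # invalid input per the spec
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

If the implementation were rewritten entirely, these tests would remain valid, because they depend only on the user-visible behavior.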


Automation Testing


Instead of traditional manual testing, where a tester executes the test cases, automation testing uses other software to perform the test cases. This reduces cost, improves accuracy, and removes risk factors associated with a human tester (human error, translation mistakes in multilingual systems, etc.).

Automation tests are best for:

  • Test cases that are executed repeatedly (reduces human cost)
  • Tedious test cases (avoids human error)
  • Difficult test cases (avoids human error)
  • Time-consuming test cases (reduces human cost)
  • High-risk or business-critical test cases (accuracy)

Automation tests are a poor fit for:

  • Newly designed test cases that have not been manually tested yet
  • Test cases that have frequently changing requirements
  • Test cases that are executed ad-hoc
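The first bullet under "best for" (repeated execution) can be sketched with a small data-driven suite. The system under test, `is_valid_username`, and its rules are hypothetical; the idea is that a table of cases replaces the same manual run being repeated on every build:

```python
# Data-driven automation sketch: a table of cases replaces repeated manual runs.
# is_valid_username is a hypothetical system-under-test, not from the text.
import re

def is_valid_username(name):
    """Accept 3-16 chars: letters, digits, underscore; must start with a letter."""
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9_]{2,15}", name) is not None

CASES = [                      # (input, expected) pairs, run on every build
    ("alice", True),
    ("a", False),              # too short
    ("1abc", False),           # must start with a letter
    ("bob_smith", True),
    ("x" * 17, False),         # too long
]

def run_suite():
    """Return the list of failing cases; an empty list means the suite passed."""
    return [(name, expected) for name, expected in CASES
            if is_valid_username(name) != expected]

failures = run_suite()
print(failures)                # an automated runner would fail the build if non-empty
```

Adding a newly reported defect to `CASES` turns it into a permanent regression check, which is exactly the kind of repeated, tedious execution that suits automation.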