Software Testing
What is Testing?
Testing is the process of demonstrating that errors are
not present
The purpose of testing is to show that a program
performs its intended functions correctly
Testing is the process of establishing confidence that a
program does what it is supposed to do.
These definitions are incorrect. Why?
“Testing is the process of executing a program with the intent of finding errors”
Why should We Test?
Software testing is an expensive activity. However, shipping a software product without testing may lead to costs potentially much higher than the cost of testing.
In so-called life-critical systems, economics must not be the prime consideration when deciding whether a product should be released to a customer.
In commercial systems it is often difficult to estimate the
cost of errors.
Who should Do the Testing?
It is very difficult for software developers to point out errors
in their own creations. (Why?)
Testers are kept separate from developers for
the overall benefit of the system
Developers provide guidelines during testing
The overall responsibility is owned by the testers
What should We Test?
It is not possible to test the software for all possible
combinations of input values (example)
It is impossible to execute every path through the program (example)
Complete testing is impossible, however much we may
wish for it.
Organizations should develop strategies and policies
for choosing effective testing techniques
TERMINOLOGIES Error, Mistake, Bug, Fault and Failure
Error: People make errors. A good synonym is MISTAKE.
A mistake made during coding is called a bug.
Fault: an error may lead to one or more faults. A fault is the representation of an error. Defect is a good synonym for fault.
Failure: occurs when a fault executes. One fault may lead to
many failures.
TERMINOLOGIES Test, Test Case, Test Suite
A test: the act of exercising software with test cases. A test has one of two distinct goals: to find failures, or to demonstrate correct execution.
Test case: describes an input description and an expected output description (example).
A good test case has a high probability of finding an error.
The main objective of the test case designer is to identify good test cases.
Test suite: a set of test cases. Any combination of test cases may form a test suite.
TERMINOLOGIES Verification and Validation
Verification: the process of confirming that the software meets its specification (checking the software with respect to the specification).
Validation: the process of confirming that the software meets the customer’s requirements (checking the software with respect to the customer’s expectations).
If there is a gap at the SRS level, it will only become known during
validation activities.
A poor understanding of the customer’s expectations may lead to incorrect
specifications.
TERMINOLOGIES Alpha, Beta and Acceptance Testing
Alpha and Beta Testing: used when the software is developed as a product for anonymous customers.
Alpha tests are conducted at the developer’s site by customers. They
may be started when the formal testing process is near completion.
Beta tests are conducted by customers/end users at their own sites. Beta testing is conducted in a real environment that cannot be controlled by the developer.
TERMINOLOGIES Alpha, Beta and Acceptance Testing
Acceptance Testing: used when the software is developed for
a specific customer.
A series of tests is conducted to enable the customer to validate all requirements. These tests are conducted by the end user/customer.
Example 1
A program requires two 8-bit integers as inputs,
so the total number of possible input combinations is 2^8 x 2^8 = 65,536.
If one second is required to execute one set of
inputs, testing them all will take about 18 hours.
Example 2
Suppose we have a program that uses a loop (executed up to 20 times) and
if statements, as shown in the accompanying figure.
The number of paths in the example is 10^14.
If only one minute is required to test one path, it would take roughly a fifth of a billion years to execute all paths.
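A quick back-of-the-envelope check of both estimates, written as a small Python sketch (the figures are taken directly from the two examples above):

```python
# Example 1: two 8-bit inputs, one second per input pair.
pairs = (2 ** 8) * (2 ** 8)              # 65,536 possible input pairs
print(pairs / 3600)                       # ~18.2 hours of testing

# Example 2: 10^14 paths, one minute per path.
paths = 10 ** 14
minutes_per_year = 60 * 24 * 365
print(paths / minutes_per_year / 1e9)     # ~0.19 billion years of testing
```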
Functional Testing
Complete testing is not at all possible. We would like to reduce this incompleteness as much as possible.
What we are looking for is a set of thought processes
that allows us to select test data more intelligently.
(the poorest methodology is random input testing)
What is Functional Testing?
Functional testing (also known as behaviour testing) is based on the functionality of the program. It involves only the observation of outputs for given input values; there is no attempt to analyse the code that produces the output.
[Figure: input test data, drawn from the input domain, is fed to the system under test, which produces output test data in the output domain.]
Functional testing is also referred to as black-box testing.
Techniques used to design Test Cases for Functional Testing
Boundary Value Analysis
Equivalence Class Testing
Decision Table Based Testing
Cause-Effect Graphing Technique
Special Value Testing
Boundary Value Analysis (1)
Experience shows that test cases close to a boundary
condition have a higher chance of detecting an error.
Boundary condition means: an input value may be on the boundary, just below the boundary (lower side), or just above the boundary (upper side).
Based on “Single Fault” assumption
Uses input variable values at their:
minimum, just above minimum, nominal, just below maximum, and maximum.
[Figure: the boundary-value test points for two variables, each with range [100, 300], plotted on a 0–400 grid.]
Test cases are obtained by:
Combining the boundary values of one variable with the nominal values of the remaining variables.
Repeating the above step until all variables have been traversed.
There are 4n + 1 test cases for a program that has n variables.
Boundary Value Analysis (2)
Assume we have a program with two input variables, each of which may take any value from 100 to 300.
The test cases are: (200,100), (200,101), (200,200), (200,299), (200,300), (100,200), (101,200), (299,200), (300,200).
Exercise: generate test cases for the program that determines the nature of the roots of a quadratic equation. The values of the coefficients range from 0 to 100. The program may output one of the following words: Not a quadratic equation, Real Roots, Imaginary Roots, Equal Roots. [Show Result]
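The nine test cases above can be generated mechanically. The following is a minimal sketch of a 4n + 1 boundary-value generator, assuming the nominal value is taken as the midpoint of each range:

```python
def bva_test_cases(ranges):
    """Boundary value analysis: 4n + 1 test cases for n input variables.

    ranges -- list of (min, max) pairs, one per input variable.
    """
    nominals = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominals)]                      # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for value in (lo, lo + 1, hi - 1, hi):     # min, min+, max-, max
            case = list(nominals)
            case[i] = value
            cases.append(tuple(case))
    return cases

# Two variables in [100, 300] -> the nine test cases listed above.
print(bva_test_cases([(100, 300), (100, 300)]))
```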
Boundary Value Analysis (3)
Robustness testing is an extension of boundary value analysis: we would like to see what happens when the extreme values are exceeded.
Total test cases: 6n + 1.
[Figure: robustness test cases for two variables x, y with range [100, 300], adding points just outside the boundaries on a 0–400 grid.]
Robustness Testing (1)
For two variables x, y with range [100, 300], the 6n + 1 = 13 robustness test cases are:
(200,99), (200,100), (200,101), (200,200), (200,299), (200,300), (200,301),
(99,200), (100,200), (101,200), (299,200), (300,200), (301,200)
Worst-Case Testing
Rejects the “single fault” assumption of the theory of
reliability.
We would like to see what happens when more than one
variable has an extreme value.
Requires more effort: worst-case testing produces 5^n test cases for a program with n variables (every combination of the five boundary values of each variable).
Example: a worst-case test generator is sketched below.
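A minimal sketch of such a generator: it simply takes the cross product of the five boundary values of each variable, with the midpoint of the range assumed as the nominal value.

```python
from itertools import product

def worst_case_tests(ranges):
    """Worst-case testing: all 5^n combinations of the five boundary values."""
    values = [(lo, lo + 1, (lo + hi) // 2, hi - 1, hi) for lo, hi in ranges]
    return list(product(*values))

# Two variables in [100, 300] -> 5^2 = 25 test cases.
cases = worst_case_tests([(100, 300), (100, 300)])
print(len(cases))
```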
Equivalence partitions
The input domain of a program is partitioned into a finite
number of equivalence classes.
The equivalence classes are identified by taking each input condition and partitioning it into valid and invalid classes.
Generate the test cases using the equivalence
classes.
This is performed by writing test cases that cover all the valid equivalence classes. Then a test case is written for each invalid equivalence class (no test case contains more than one invalid class).
Equivalence Class Testing(1)
Example: for a program that is supposed to accept
any number between 1 and 99:
There are four equivalence classes on the input side:
any number between 1 and 99 (valid input)
any number less than 1 (invalid input)
any number greater than 99 (invalid input)
anything that is not a number (invalid input)
The test cases are: (50), (-1), (100).
Exercises:
1. Generate test cases for the program that accepts any three integer
numbers in the range [100..200].
2. Generate test cases for the program that determines the nature of the
roots of a quadratic equation (a, b, c are in the range [0..100]).
(A small sketch of the 1–99 example follows.)
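One representative per equivalence class is enough. The accepts function below is a made-up stand-in for the program under test; the class boundaries follow the slide, and the representatives are one possible choice.

```python
# One representative value per equivalence class for "accept any number 1..99".
test_cases = {
    "valid: 1..99": 50,
    "invalid: < 1": -1,
    "invalid: > 99": 100,
    "invalid: not a number": "abc",
}

def accepts(value):
    """Hypothetical program under test: accepts only integers from 1 to 99."""
    return isinstance(value, int) and 1 <= value <= 99

for name, value in test_cases.items():
    print(name, "->", accepts(value))
```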
Equivalence Class Testing(2)
Equivalence Class Testing(3)
Most of the time, equivalence class testing defines classes of the input domain. However, equivalence classes should also be defined for the output domain.
Decision Table Based Testing (1)
Useful for describing situations in which a number of combinations of
actions are taken under varying sets of conditions.
A decision table has four portions: the condition stub, the action stub, the condition entries, and the action entries.
[Table: a generic decision table. The condition stub lists conditions c1, c2, c3; the condition entries give True/False values in each rule column; the action stub lists actions a1–a4; an X in the action entries marks the actions selected by each rule.]
Decision Table Based Testing (2)
To identify test cases with decision tables, we interpret the conditions as inputs and the actions as outputs.
Example: a decision table for the triangle program.
Decision Table Based Testing (3)
The test cases derived from the decision table.
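A partial sketch of the idea: conditions become predicates over the inputs, and each rule of the table maps a combination of condition outcomes to an expected action. The rules and sample inputs below are illustrative only, not the full triangle decision table.

```python
# Conditions over (a, b, c): each is one row of the condition stub.
conditions = [
    lambda a, b, c: a < b + c,      # c1
    lambda a, b, c: b < a + c,      # c2
    lambda a, b, c: c < a + b,      # c3
    lambda a, b, c: a == b == c,    # c4
]

# A few rules (condition entries -> expected action) from a partial table.
rules = {
    (True, True, False, False): "Not a triangle",
    (True, True, True, True): "Equilateral",
    (True, True, True, False): "Scalene or Isosceles (further conditions needed)",
}

# Each sample input is a test case chosen to exercise one rule.
for a, b, c in [(1, 2, 5), (3, 3, 3), (3, 4, 5)]:
    outcome = tuple(predicate(a, b, c) for predicate in conditions)
    print((a, b, c), "->", rules.get(outcome, "rule not in this partial table"))
```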
Ad Hoc Testing
Testing carried out using no recognized test case design
technique.
Also known as Special value testing.
Mostly intuitive and least uniform.
It occurs when the tester uses domain knowledge, experience,
and information about soft spots to derive test
cases.
Dependent on the abilities of the tester.
Other terms: “hacking”, “out-of-box testing”
Structural Testing
What is Structural Testing?
Based on Source Code
Examine the internal structure of the program
Test cases are derived from an examination of the program’s
logic.
Pays no attention to the specification.
Knowledge of the internal structure of the code can be used to find the number of test cases required to guarantee a given level of test coverage.
Why is Structural Testing Required?
Parts of the code may not be fully exercised.
Sections of code may be surplus to the requirements.
Errors may be missed by functional testing.
Levels of Coverage
1. Statement (Line) coverage
2. Decision (Branch) coverage
3. Condition coverage
4. Decision/Condition coverage
5. Multiple Condition coverage
6. Loop coverage
7. Path coverage
Levels of Coverage Statement Coverage
Levels of Coverage Branch Coverage
Levels of Coverage Condition Coverage
Condition coverage reports the true or false outcome of each boolean sub-expression, separated by logical AND and logical OR if they occur. Condition coverage measures the sub-expressions independently of each other.
Levels of Coverage Decision/Condition Coverage
A hybrid metric formed by the union of condition coverage and decision coverage.
Levels of Coverage Multiple Condition Coverage
Levels of Coverage Loop Coverage
This metric reports whether you executed each loop body
zero times, exactly once, and more than once.
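To make the difference between decision and condition coverage concrete, here is a small illustrative sketch; the function and the test values are invented for the example.

```python
def grant_discount(is_member, total):
    # One decision made up of two sub-conditions.
    if is_member and total > 100:
        return True
    return False

# Decision (branch) coverage: the whole condition is True once and False once.
decision_tests = [(True, 150), (False, 50)]

# Condition coverage: each sub-condition takes both outcomes.
# (True, 50): is_member True, total > 100 False; (False, 150): the reverse.
# Note that the decision itself is False in both cases, so condition coverage
# alone does not guarantee decision coverage (hence decision/condition coverage).
condition_tests = [(True, 50), (False, 150)]

for args in decision_tests + condition_tests:
    print(args, "->", grant_discount(*args))
```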
Path Testing(1)
A group of test techniques based on judiciously selecting a set
of test paths through the program.
Most applicable to new software for module testing or unit
testing
The effectiveness of path testing rapidly deteriorates as the
size of the software under test increases.
Path Testing(2)
This type of testing involves:
Generating a set of paths that will cover every branch in the program.
Finding the set of test cases that will execute every path in this set of program paths.
Steps involved in path testing:
Generate the flow graph of the program.
Generate the DD-path graph.
Identify the independent paths.
Generate a test case for each path.
Path Testing Flow Graph (1)
Used to analyse the control flow of a program.
Definition: given a program written in a programming language, its program graph is a directed graph in which nodes are statement fragments and edges represent flow of control.
If I and J are nodes in the program graph, an edge exists from node I to node J if the statement fragment corresponding to node J can be executed immediately after the statement fragment corresponding to node I.
Path Testing Flow Graph (2)
Basic constructs: sequence, if-then-else, while loop, repeat-until loop, case, …
[Figure: the flow-graph shape of each construct.]
Path Testing Flow Graph (3)
Example: the program graph for Triangle(a, b, c: Integer): String, with statement fragments numbered as follows:
3. begin
4. if (a < b + c) and (b < a + c) and (c < a + b)
5. then IsATriangle := true
6. else IsATriangle := false
7. if IsATriangle
9. then if (a = b) and (b = c)
10. then return “Equilateral”
12. else if (a ≠ b) and (a ≠ c) and (b ≠ c)
13. then return “Scalene”
14. else return “Isosceles”
15. else return “Not a triangle”
16. end
[Figure: the corresponding directed graph, with yes/no edges leaving the decision nodes 4, 7, 9, and 12.]
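A runnable Python sketch of the triangle program assumed by the graph above; the triangle condition is written in its standard form (each side less than the sum of the other two), and the node numbers are noted in comments.

```python
def triangle(a: int, b: int, c: int) -> str:
    is_a_triangle = (a < b + c) and (b < a + c) and (c < a + b)   # node 4
    if is_a_triangle:                                             # node 7
        if a == b and b == c:                                     # node 9
            return "Equilateral"                                  # node 10
        elif a != b and a != c and b != c:                        # node 12
            return "Scalene"                                      # node 13
        else:
            return "Isosceles"                                    # node 14
    return "Not a triangle"                                       # node 15

print(triangle(3, 3, 3), triangle(3, 4, 5), triangle(2, 2, 3), triangle(1, 2, 5))
```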
Path Testing DD Path Graph (1)
Decision-to-decision path graph.
A DD-path graph is a directed graph in which nodes are
sequences of statements and edges represent control flow
between nodes.
Used to find independent paths.
Path Testing DD Path Graph (2)
Example: the DD-path graph for the triangle program. Consecutive statement fragments of the program graph are collapsed into DD-path nodes labelled A, B, C, …, and edges carry the control flow between them.
[Figure: DD-path graph in which nodes A–L group the numbered statements 7–42 of the source listing.]
Path Testing Independent Path (1)
An independent path is any path through the program that
introduces at least one new set of processing statements or
a new condition.
When stated in terms of a flow graph, an independent path
must move along at least one edge that has not been
traversed before the path is defined.
Path Testing Independent Path (2)
Example: for the DD-path graph of a program with nodes A through S, the independent paths are:
ABGOQRS
ABGOPRS
ABCDFGOQRS
ABCDEFGOPRS
ABGHIJNRS
ABGHIKLNRS
ABGHIKMNRS
[Figure: the DD-path graph from which these paths are read off.]
Path Testing Computation of cyclomatic complexity (1)
Cyclomatic complexity is also known as conditional complexity.
Used to measure structural complexity.
Proposed by McCabe.
Used to find the number of independent paths through the
program.
Provides an upper bound on the number of tests that must
be conducted to ensure that all statements have been
executed at least once.
Path Testing Computation of cyclomatic complexity (2)
Cyclomatic complexity has a foundation in graph theory and is computed in the following ways:
1. V(G) = E – N + 2P
E: number of edges, N: number of nodes, P: number of connected components
2. V(G) = ∏ + 1
∏: number of predicate nodes
3. V(G) = number of regions
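A small helper for the first formula, assuming the graph is supplied as a list of directed edges and that P (the number of connected components) defaults to 1.

```python
def cyclomatic_complexity(edges, p=1):
    """V(G) = E - N + 2P for a graph given as a list of (from, to) edges."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * p

# A graph with one decision node (B), so V(G) should be 2:
# A->B, B->C, B->D, C->E, D->E.
print(cyclomatic_complexity([("A", "B"), ("B", "C"), ("B", "D"),
                             ("C", "E"), ("D", "E")]))
```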
Path Testing Computation of cyclomatic complexity (3)
1. V(G)= E–N+2=?
2. V(G)= ∏+ 1=?
3. V(G)= No regions= ?
Path Testing Example
Consider the program for the classification of a triangle
Compute the cyclomatic complexity based on the graph presented on
page 401.
Determine the independent paths.
Generate test cases that cover the independent paths
(page 399).
Data Flow Testing (1)
Data flow testing focuses on the points at which variables receive values and the points at which these values are used (or referenced). It detects improper use of data values due to coding errors.
Data Flow Testing (2)
The flow graph is used as the basis for data flow testing.
Data flow analysis is centered on a set of faults known as define/reference anomalies:
– a variable that is defined but never used (referenced)
– a variable that is used but never defined
– a variable that is defined twice before it is used
Data Flow Testing (3) Definitions
The definitions refer to a program P that has a program graph G(P)
and set of program variables V. The graph G(P) has a single entry
node and a single exit node. The set of all paths in P is PATHS(P)
Data Flow Testing (3) Definitions
DEF(v, n): node n in G(P) is a defining node of variable v in V, if the value of variable v is defined at the statement fragment corresponding to node n.
USE(v, n): node n in G(P) is a usage node of variable v in V, if
the value of variable v is used at the statement fragment corresponding to node n.
du-path: a definition-use path (du-path) with respect to
a variable v is a path in PATHS(P) such that, for some v in V, there are defining and usage nodes DEF(v, m) and USE(v, n) such that m and n are the initial and final nodes of the path.
dc-path: a definition-clear path with respect to a variable v is a du-path such that no node in the path other than the initial node is a defining node of v.
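A tiny annotated example of these definitions; the function and its statement numbering are invented for illustration.

```python
def running_total(values):        # node 1: DEF(values, 1)
    total = 0                     # node 2: DEF(total, 2)
    for v in values:              # node 3: USE(values, 3), DEF(v, 3)
        total = total + v         # node 4: USE(total, 4), USE(v, 4), DEF(total, 4)
    return total                  # node 5: USE(total, 5)

# du-paths for `total` include 2-3-4, 4-3-4 and 4-3-5; the path 2-3-4 is
# definition-clear, whereas 2-3-4-3-5 is not (node 4 redefines `total`).
print(running_total([1, 2, 3]))
```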
Data Flow Testing (4) Steps for Data Flow Testing
Draw the program Flow graph
Find the DD path graph
Prepare the table of DEF/USE status for all variables
Find all du-paths
Identify the du-paths that are not dc-paths
Generate test cases to exercise every du-path at least once. If we cannot test all du-paths, we must at least test all du-paths that are not dc-paths.
Data Flow Testing (5) Example
Consider example 8.20 (page 411)
Levels of Testing
There are three levels of testing: unit testing, integration testing, and system testing.
[Figure: the three levels shown as a stack, with unit testing at the bottom, integration testing in the middle, and system testing at the top.]
Unit Testing (1)
The process of taking a module and running it in isolation from
the rest of the software product, using prepared test
cases and comparing the actual results with the expected output.
The purpose of this test is to find (and remove) as many
errors in the software as is practical.
Unit Testing (2)
The reasons for unit testing:
1. The size of a single module is small enough that we can locate an
error fairly easily.
2. The module is small enough that we can attempt to test it in some
demonstrably exhaustive fashion.
3. Confusing interactions of multiple errors in widely different parts of
the software are eliminated.
Unit Testing (3)
The problems associated with unit testing:
1. How do we run a module without anything to call it, or to be called by
it?
2. Possibly, how do we output intermediate values obtained during execution?
Unit Testing (4)
The first approach is to construct an appropriate driver
routine to call the module, simple stubs to be called by it, and to insert output statements in it.
The second approach is to generate the scaffolding
automatically by means of a test harness.
The third approach is to omit unit testing and simply to allow incremental addition of modules to a partially integrated product, hoping that the integration testing will also provide sufficient coverage of the module’s structure.
Unit Testing (5)
[Figure: the test scaffolding around a module. A driver handles user input and output and calls the test module, passing parameters out and receiving parameters back; the module’s own calls to lower-level modules are intercepted by stubs.]
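A minimal sketch of the first approach using Python’s unittest: the test method acts as the driver, and a stub stands in for a collaborating module. compute_price and the tax-service names are invented for the example.

```python
import unittest

def compute_price(net, tax_service):
    """Module under test: adds the tax obtained from a collaborator."""
    return net + tax_service.tax_for(net)

class StubTaxService:
    """Stub standing in for the real tax module."""
    def tax_for(self, net):
        return 10  # canned answer, no real computation

class ComputePriceTest(unittest.TestCase):   # the driver
    def test_adds_tax_from_collaborator(self):
        self.assertEqual(compute_price(100, StubTaxService()), 110)

if __name__ == "__main__":
    unittest.main()
```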
Integration Testing
“Big bang” integration
Bottom-up integration
Top-down integration
Sandwich integration
Horizontal Integration Testing
Vertical Integration Testing
Phased ("big-bang") integration:
design, code, test, and debug each class/unit/subsystem separately
combine them all
pray
"Sandwich" integration
"sandwich" integration:
add middle layers later as needed
more practical than top-down or bottom-up?
Connect top-level UI with crucial bottom-level classes
System hierarchy example: Controller (level 4) calls KeyChecker (level 3), which calls KeyStorage (level 2), which uses Key, Logger, PhotoSObsrv and DeviceCtrl (level 1).
Bottom-up integration testing: test Key & KeyStorage, Logger, PhotoSObsrv and DeviceCtrl first; then test KeyChecker & KeyStorage & Key; finally test Controller & KeyChecker & KeyStorage & Key & Logger & PhotoSObsrv & DeviceCtrl.
Top-down integration testing: test Controller first; then Controller & KeyChecker; then Controller & KeyChecker & KeyStorage & Key; finally Controller & KeyChecker & KeyStorage & Key & Logger & PhotoSObsrv & DeviceCtrl.
Horizontal Integration Testing
Developing user stories: user story 1, user story 2, …, user story N together make up the whole system.
Each story is developed in a cycle that integrates unit tests in the inner feedback loop and the acceptance test in the outer feedback loop: write a failing acceptance test for the story, then repeatedly write a failing unit test, make the test pass, and refactor until the acceptance test passes.
Vertical Integration Testing
( Not necessarily how it’s actually done! )
Component code is exercised by a unit test, which ensures that each component works as specified.
The integrated modules are exercised by an integration test, which ensures that all components work together.
A function test verifies that the functional requirements are satisfied.
A quality test verifies the non-functional requirements.
An acceptance test lets the customer verify all requirements.
An installation test is testing in the user environment, after which the system is in use.
The function, quality, acceptance and installation tests together make up the system test.
Logical Organization of Testing
System Testing (1)
Objective: to ensure that the system does what the customer wants it to do.
System Testing (2)
System Testing (3)
Termination Problem
How do we decide when to stop testing?
Termination takes place when:
• resources (time and budget) are exhausted • some coverage target is reached
The main problem for managers!
Debugging
Debugging is the activity of locating and correcting errors.
It can start once a failure has been detected
Techniques used for debugging:
Brute Force
Debugging by Induction
Debugging by Deduction
Debugging by Backtracking
Debugging by Testing
Debugging By Brute Force
Requires little thought, but is inefficient and generally unsuccessful.
Can be partitioned into three categories:
Debugging with a storage dump (trace)
Debugging according to the common suggestion to “scatter print
statements throughout the program”
Debugging with automated debugging tools
Debugging by Induction
Thought process: start with the clues and look for the relationships among the clues.
[Figure: locate pertinent data → organize the data → study their relationships → devise a hypothesis (if you cannot, collect more data) → prove the hypothesis (if you cannot, devise another) → if you can, fix the error.]
Debugging by Induction
Locate the pertinent data: take account of all available data or symptoms
about the problem; enumerate all you know about the program.
Organize the data: induction implies proceeding from particulars to
generals; structure the pertinent data and search for contradictions.
Devise a hypothesis: study the clues and devise one or more
hypotheses. If multiple theories are possible, select the most
probable one first.
Prove the hypothesis: prove the reasonableness of the
hypothesis; it should completely explain the existence of the clues.
Debugging By Deduction
Start with a set of suspects and use a process of elimination
and refinement.
Steps:
Enumerate the possible causes or hypotheses.
Use the data to eliminate possible causes.
Refine the remaining hypothesis.
Prove the remaining hypothesis.
Debugging By Backtracking
Effective method for locating errors in small programs.
Backtrack the incorrect result through the logic of the
program until you find the point where the logic went astray.
Debugging By Testing
Test cases for debugging: purpose is to provide information
useful in locating a suspected error.
This method is used in conjunction with the induction
method
ANY QUESTIONS?
The weather station object interface
Weather station testing
Need to define test cases for reportWeather, calibrate,
test, startup and shutdown.
Using a state model, identify sequences of state
transitions to be tested and the event sequences that cause these transitions. For example:
Shutdown -> Running -> Shutdown
Configuring -> Running -> Testing -> Transmitting -> Running
Running -> Collecting -> Running -> Summarizing -> Transmitting
-> Running
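A hedged sketch of a test driving the first sequence; the WeatherStation class below is an invented stand-in so the sketch runs, using only the startup and shutdown operations named above and an assumed state attribute.

```python
class WeatherStation:
    """Invented stand-in for the weather station object under test."""
    def __init__(self):
        self.state = "Shutdown"
    def startup(self):
        self.state = "Running"
    def shutdown(self):
        self.state = "Shutdown"

def test_shutdown_running_shutdown():
    station = WeatherStation()
    station.startup()                  # Shutdown -> Running
    assert station.state == "Running"
    station.shutdown()                 # Running -> Shutdown
    assert station.state == "Shutdown"

test_shutdown_running_shutdown()
```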
[Figure: equivalence partitions]
Interface testing
Objectives are to detect faults due to interface errors or
invalid assumptions about interfaces.
Interface types:
Parameter interfaces: data passed from one method or
procedure to another.
Shared memory interfaces: a block of memory is shared between
procedures or functions.
Procedural interfaces: a sub-system encapsulates a set of
procedures to be called by other sub-systems.
Message passing interfaces: sub-systems request services from
other sub-systems.
Use-case testing
The use-cases developed to identify system interactions
can be used as a basis for system testing.
Each use case usually involves several system
components so testing the use case forces these interactions to occur.
The sequence diagrams associated with the use case document the components and interactions that are being tested.
[Figure: Collect weather data sequence chart]
Testing policies
Exhaustive system testing is impossible so testing
policies which define the required system test coverage may be developed.
Examples of testing policies:
All system functions that are accessed through menus should be
tested.
Combinations of functions (e.g. text formatting) that are
accessed through the same menu must be tested.
Where user input is provided, all functions must be tested with
both correct and incorrect input.
Test-driven development
Test-driven development (TDD) is an approach to
program development in which you inter-leave testing and code development.
Tests are written before code and ‘passing’ the tests is
the critical driver of development.
You develop code incrementally, along with a test for that increment. You don’t move on to the next increment until the code that you have developed passes its test.
TDD was introduced as part of agile methods such as
Extreme Programming. However, it can also be used in plan-driven development processes.
[Figure: the test-driven development process]
TDD process activities
Start by identifying the increment of functionality that is
required. This should normally be small and implementable in a few lines of code.
Write a test for this functionality and implement this as
an automated test.
Run the test, along with all other tests that have been implemented. Initially, you have not implemented the functionality so the new test will fail.
Implement the functionality and re-run the test.
Once all tests run successfully, you move on to implementing the next chunk of functionality.
A tiny illustration of one increment is sketched below.
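The word_count function and its test are invented for the example; the point is only the order of the steps.

```python
# Step 1: write the test first -- it fails because word_count does not exist yet.
def test_word_count_of_simple_sentence():
    assert word_count("to be or not to be") == 6

# Step 2: write just enough code to make the test pass, then re-run all tests.
def word_count(text):
    return len(text.split())

# Step 3: refactor if needed, keep the tests green, then pick the next increment.
test_word_count_of_simple_sentence()
```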
Benefits of test-driven development
Code coverage: every code segment that you write has at least one associated
test, so all code written is tested at least once.
Regression testing: a regression test suite is developed incrementally as the program
is developed.
Simplified debugging: when a test fails, it should be obvious where the problem lies; the newly written code needs to be checked and modified.
System documentation: the tests themselves are a form of documentation that describes
what the code should be doing.
Regression testing
Regression testing is testing the system to check that changes have not ‘broken’ previously working code.
In a manual testing process, regression testing is
expensive but, with automated testing, it is simple and straightforward. All tests are rerun every time a change is made to the program.
Tests must run ‘successfully’ before the change is
committed.
Release testing
Release testing is the process of testing a particular release
of a system that is intended for use outside of the development team.
The primary goal of the release testing process is to
convince the supplier of the system that it is good enough for use.
Release testing, therefore, has to show that the system delivers its specified functionality, performance and dependability, and that it does not fail during normal use.
Release testing is usually a black-box testing process
where tests are derived only from the system specification.
Release testing and system testing
Release testing is a form of system testing.
Important differences:
A separate team that has not been involved in the system development should be responsible for release testing.
System testing by the development team should focus on
discovering bugs in the system (defect testing). The objective of release testing is to check that the system meets its requirements and is good enough for external use (validation testing).
Requirements based testing
Requirements-based testing involves examining each
requirement and developing a test or tests for it.
MHC-PMS requirements:
If a patient is known to be allergic to any particular medication, then prescription of that medication shall result in a warning message being issued to the system user.
If a prescriber chooses to ignore an allergy warning, they shall
provide a reason why this has been ignored.
Requirements tests:
Set up a patient record with no known allergies. Prescribe medication for allergies that are known to exist. Check that a warning message is not issued by the system.
Set up a patient record with a known allergy. Prescribe the medication that the patient is allergic to, and check that the warning is issued by the system.
Set up a patient record in which allergies to two or more drugs are recorded. Prescribe both of these drugs separately and check that the correct warning for each drug is issued.
Prescribe two drugs that the patient is allergic to. Check that two warnings
are correctly issued.
Prescribe a drug that issues a warning and overrule that warning. Check
that the system requires the user to provide information explaining why the warning was overruled.
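A hedged sketch of the first two requirements tests in code. PatientRecord, PrescriptionResult and prescribe are invented stand-ins for illustration only, not the real MHC-PMS interface.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:                   # invented stand-in for a patient record
    allergies: list = field(default_factory=list)

@dataclass
class PrescriptionResult:              # invented stand-in for the system response
    warning_issued: bool

def prescribe(record, drug):
    """Stand-in prescription function: warn only for a known allergy."""
    return PrescriptionResult(warning_issued=drug in record.allergies)

def test_no_warning_without_known_allergy():
    assert not prescribe(PatientRecord([]), "penicillin").warning_issued

def test_warning_issued_for_known_allergy():
    assert prescribe(PatientRecord(["penicillin"]), "penicillin").warning_issued

test_no_warning_without_known_allergy()
test_warning_issued_for_known_allergy()
```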
Features tested by scenario
Authentication by logging on to the system.
Downloading and uploading of specified patient records
to a laptop.
Home visit scheduling.
Encryption and decryption of patient records on a mobile
device.
Record retrieval and modification.
Links with the drugs database that maintains side-effect
information.
The system for call prompting.
Performance testing
Part of release testing may involve testing the emergent
properties of a system, such as performance and reliability.
Tests should reflect the profile of use of the system.
Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.
Stress testing is a form of performance testing where the
system is deliberately overloaded to test its failure behaviour.
User testing
User or customer testing is a stage in the testing process in which users or customers provide input and advice on system testing.
User testing is essential, even when comprehensive system and release testing have been carried out.
The reason for this is that influences from the user’s working
environment have a major effect on the reliability, performance, usability and robustness of a system. These cannot be replicated in a testing environment.
Types of user testing
Alpha testing: users of the software work with the development team to test the
software at the developer’s site.
Beta testing: a release of the software is made available to users to allow
them to experiment and to raise problems that they discover with the system developers.
Acceptance testing: customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment. Primarily for custom systems.
[Figure: the acceptance testing process]
Stages in the acceptance testing process:
Define acceptance criteria
Plan acceptance testing
Derive acceptance tests
Run acceptance tests
Negotiate test results
Reject/accept system

