Software error detection is one of the most challenging problems in software engineering. Now, you can learn how to make the most of software testing by selecting test cases to maximize the probability of revealing latent errors. Software Error Detection through Testing and Analysis begins with a thorough discussion of test-case selection and a review of the concepts, notations, and principles used in the book.
This chapter presents some basic concepts in models of software and some families of models that are used in a wide variety of testing and analysis techniques. Understanding the fundamental concepts and trade-offs in the design of models is necessary for a full understanding of those test and analysis techniques, and provides a foundation for devising new techniques and models to solve domain-specific problems.
In this chapter (Chapter 10 of Software Testing and Analysis), you will: understand the rationale for systematic (nonrandom) selection of test cases; understand why functional test selection is a primary, baseline technique; and distinguish functional testing from other systematic testing techniques.
In this chapter, we begin with functional tests based on specification of intended behavior, add selected structural test cases based on the software structure, and work from unit testing and small-scale integration testing toward larger integration and then system testing.
In this chapter you will: understand how automated program analysis complements testing and manual inspection; understand the fundamental approaches of a few representative techniques (lockset analysis, pointer analysis, symbolic testing, dynamic model extraction); and recognize the same basic approaches and design trade-offs in other program analysis techniques.
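To give one of these techniques a concrete flavor, here is a minimal Eraser-style lockset analysis sketch (the trace format and the lock and variable names are invented for illustration): for each shared variable it intersects the sets of locks held at its accesses, and an empty intersection flags a potential data race.

```python
def lockset_analysis(accesses):
    """accesses: iterable of (variable, locks_held_at_access) pairs.
    Returns the variables whose candidate lockset becomes empty."""
    candidate = {}                       # var -> candidate lockset
    for var, held in accesses:
        if var in candidate:
            candidate[var] &= held       # intersect with locks held now
        else:
            candidate[var] = set(held)   # first access: start with all
    return {v for v, locks in candidate.items() if not locks}

trace = [("x", {"L1"}), ("x", {"L1", "L2"}),   # x always guarded by L1
         ("y", {"L1"}), ("y", {"L2"})]         # y has no common lock
print("potential races:", lockset_analysis(trace))
```

The refinement of the candidate lockset is monotone, so a single pass over the trace suffices; real lockset analyzers add state machines for initialization and read sharing on top of this core idea.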
Symbolic execution builds predicates that characterize the conditions under which execution paths can be taken and the effect of the execution on program state. Extracting predicates through symbolic execution is the essential bridge from the complexity of program behavior to the simpler and more orderly world of logic. This chapter presents symbolic execution and the proof of program properties.
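A minimal sketch of the idea, assuming a toy instruction format invented for this example (symbolic values are plain expression strings; a real symbolic executor would use a constraint solver instead):

```python
def symexec(program, env, pc=()):
    """Yield (path_condition, final_state) for every path through
    `program`, a list of ("assign", var, expr) and
    ("branch", cond, then_prog, else_prog) tuples (branch comes last)."""
    env, pc = dict(env), list(pc)
    for op in program:
        if op[0] == "assign":
            _, var, expr = op
            env[var] = expr.format(**env)   # substitute symbolic values
        else:                               # "branch": fork the path
            _, cond, then_p, else_p = op
            c = cond.format(**env)
            yield from symexec(then_p, env, pc + [c])
            yield from symexec(else_p, env, pc + [f"not ({c})"])
            return
    yield (pc, env)

# abs(x):  r = x; if x < 0: r = -x
prog = [("assign", "r", "{x}"),
        ("branch", "{x} < 0", [("assign", "r", "-({x})")], [])]
for cond, state in symexec(prog, {"x": "x"}):
    print(" and ".join(cond) or "true", "=>", "r =", state["r"])
```

Each yielded path condition is exactly the predicate the chapter describes: a formula over the symbolic inputs under which that path is taken, paired with the symbolic effect on program state.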
The objectives of this chapter are to introduce the range of software verification and validation (V&V) activities, provide a rationale for selecting and combining them within a software development process, and view the "big picture" of software quality in the context of a software development project and organization.
This chapter describes the nature of those trade-offs and some of their consequences, and thereby provides a conceptual framework for understanding and better integrating material from later chapters on individual techniques.
This chapter advocates six principles that characterize various approaches and techniques for analysis and testing: sensitivity, redundancy, restriction, partition, visibility, and feedback. Some of these principles, such as partition, visibility, and feedback, are quite general in engineering.
Learning objectives in this chapter: understand the basics of data-flow models and related concepts (def-use pairs, dominators…); understand some analyses that can be performed with the data-flow model of a program; understand basic trade-offs in modeling data flow.
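As an illustration of one such analysis, here is a small reaching-definitions fixpoint over a hand-built control-flow graph (the node numbering and the example program are invented), from which def-use pairs can be read off directly:

```python
def reaching_defs(cfg):
    """cfg: node -> {"defs": vars, "uses": vars, "succ": nodes}.
    Returns IN[n]: the (var, defining_node) pairs reaching node n."""
    IN = {n: set() for n in cfg}
    OUT = {n: set() for n in cfg}
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for n, node in cfg.items():
            new_in = set().union(*(OUT[p] for p in cfg
                                   if n in cfg[p]["succ"]))
            gen = {(v, n) for v in node["defs"]}
            new_out = gen | {(v, d) for (v, d) in new_in
                             if v not in node["defs"]}   # kill redefined
            if (new_in, new_out) != (IN[n], OUT[n]):
                IN[n], OUT[n], changed = new_in, new_out, True
    return IN

def du_pairs(cfg):
    """A def-use pair (v, d, u): definition of v at d reaches a use at u."""
    IN = reaching_defs(cfg)
    return sorted((v, d, n) for n in cfg for v in cfg[n]["uses"]
                  for (w, d) in IN[n] if w == v)

# 1: x = input(); 2: if x > 0: 3: y = x; else: 4: y = -x; 5: print(y)
cfg = {1: {"defs": {"x"}, "uses": set(), "succ": [2]},
       2: {"defs": set(), "uses": {"x"}, "succ": [3, 4]},
       3: {"defs": {"y"}, "uses": {"x"}, "succ": [5]},
       4: {"defs": {"y"}, "uses": {"x"}, "succ": [5]},
       5: {"defs": set(), "uses": {"y"}, "succ": []}}
print(du_pairs(cfg))
```

Note how the use of y at node 5 is paired with both definitions (nodes 3 and 4): that is precisely the information def-use testing criteria build on.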
In this chapter you will: understand the purpose and appropriate uses of finite-state verification (FSV); understand modeling for FSV as a balance between cost and precision; and distinguish explicit state enumeration from analysis of implicit models.
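To make explicit state enumeration concrete, here is a tiny reachability check over a two-process lock protocol (the model itself is invented for illustration): it exhaustively explores the reachable state space and reports any state violating a safety property, exactly the explicit-state strategy the chapter contrasts with implicit-model analysis.

```python
from collections import deque

# State: (pc0, pc1, lock_free).  Each process moves idle -> crit only
# when the lock is free, and releases the lock on leaving.
def successors(state):
    pcs, lock_free = list(state[:2]), state[2]
    for i in (0, 1):
        if pcs[i] == "idle" and lock_free:
            nxt = list(pcs); nxt[i] = "crit"
            yield (nxt[0], nxt[1], False)
        elif pcs[i] == "crit":
            nxt = list(pcs); nxt[i] = "idle"
            yield (nxt[0], nxt[1], True)

def check(init, safe):
    """Breadth-first state enumeration; returns a counterexample
    state violating `safe`, or None if the property holds."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            return s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

bad = check(("idle", "idle", True),
            safe=lambda s: not (s[0] == "crit" and s[1] == "crit"))
print("counterexample:", bad)
```

For this model the mutual-exclusion property holds, so the search terminates having visited every reachable state without finding a counterexample; the state-space explosion problem arises because `seen` grows exponentially with the number of processes.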
Learning objectives in this chapter: Understand the purpose of defining test adequacy criteria, and their limitations; understand basic terminology of test selection and adequacy; know some sources of information commonly used to define adequacy criteria; understand how test selection and adequacy criteria are used.
The objectives of this chapter are to: understand the rationale and basic approach of systematic combinatorial testing; learn how to apply some representative combinatorial approaches; and understand key differences and similarities among the approaches.
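One representative combinatorial approach is pairwise (all-pairs) testing. The greedy sketch below is illustrative only (the parameters and values are invented, and real covering-array tools use better heuristics): it repeatedly picks the candidate test case covering the most still-uncovered value pairs.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy pairwise test-suite generation over a dict of
    parameter -> list of values.  Not minimal, but covers all pairs."""
    names = sorted(params)
    candidates = list(product(*(params[n] for n in names)))

    def pairs_of(case):
        return {(i, case[i], j, case[j])
                for i, j in combinations(range(len(case)), 2)}

    uncovered = set().union(*(pairs_of(c) for c in candidates))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        suite.append(dict(zip(names, best)))
        uncovered -= pairs_of(best)
    return suite

params = {"browser": ["firefox", "chrome"],
          "network": ["wifi", "lte"],
          "os": ["linux", "macos"]}
suite = pairwise_suite(params)
print(len(suite), "test cases instead of", 2 * 2 * 2)
```

With three two-valued parameters the full cartesian product has 8 combinations, but 4 test cases suffice to exercise every pair of values; the savings grow dramatically as parameters and values are added.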
Learning objectives of this chapter: understand the rationale for structural testing; recognize and distinguish basic terms; recognize and distinguish characteristics of common structural criteria; understand practical uses and limitations of structural testing.
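A practical ingredient of structural testing is measuring which program elements a test suite actually executes. Below is a minimal statement-coverage monitor using Python's `sys.settrace` (the function under test is a made-up example); production tools such as coverage.py work on the same principle.

```python
import sys

def cappuccino_price(size):            # hypothetical unit under test
    price = 3
    if size == "large":
        price += 1
    return price

def covered_lines(fn, *args):
    """Run fn(*args) and record which of its lines execute,
    numbered relative to the def line."""
    lines = set()
    code = fn.__code__
    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            lines.add(frame.f_lineno - code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return lines

small = covered_lines(cappuccino_price, "small")
both = small | covered_lines(cappuccino_price, "large")
print("lines missed by the 'small' test alone:", both - small)
```

The uncovered line (the body of the `if`) is exactly the kind of gap a statement-coverage criterion exposes: the "small" test alone never exercises the large-size branch.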
In this chapter (Chapter 13), you will: understand why data flow criteria have been designed and used; recognize and distinguish basic data flow (DF) criteria; understand how the infeasibility problem impacts data flow testing; and appreciate the limits and potential practical uses of data flow testing.
Models are often used to express requirements, and embed both structure and fault information that can help generate test case specifications. Control flow and data flow testing are based on models extracted from program code. Models can also be extracted from specifications and design, allowing us to make use of additional information about intended behavior.
A model of potential program faults is a valuable source of information for evaluating and designing test suites. Some fault knowledge is commonly used in functional and structural testing, for example when identifying singleton and error values for parameter characteristics in category-partition testing or when populating catalogs with erroneous values, but a fault model can also be used more directly.
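One direct use of a fault model is mutation analysis: seed small, typical faults into the program and check whether the test suite detects ("kills") the resulting mutants. A toy sketch, in which the function, mutants, and tests are all invented for illustration:

```python
def original(a, b):                # unit under test
    return a + b

# Mutants: each replaces '+' with another operator, a typical
# fault-model-driven seeding of small syntactic changes.
mutants = {"plus_to_minus": lambda a, b: a - b,
           "plus_to_times": lambda a, b: a * b}

tests = [((2, 2), 4), ((0, 0), 0)]  # (args, expected-result) pairs

def kills(mutant):
    """A mutant is killed if some test observes a wrong result."""
    return any(mutant(*args) != expected for args, expected in tests)

for name, mutant in mutants.items():
    print(name, "killed" if kills(mutant) else "SURVIVED")
```

Here `plus_to_times` survives because every test happens to satisfy a + b == a * b; the surviving mutant points directly at a missing test case (for example, inputs 2 and 3), which is how mutation analysis evaluates and improves test suites.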
This chapter describes approaches for creating the run-time support for generating and managing test data, creating scaffolding for test execution, and automatically distinguishing between correct and incorrect test case executions.
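The three classic pieces of scaffolding (test driver, stub, and oracle) can be as small as the following sketch; every name in it is hypothetical, invented to illustrate the roles.

```python
def lookup_discount(customer_id, pricing_service):
    """Unit under test: computes a price using an external service."""
    rate = pricing_service.rate_for(customer_id)
    return round(100 * (1 - rate))

class StubPricingService:
    """Test stub: stands in for the real service with canned responses."""
    def rate_for(self, customer_id):
        return {"gold": 0.20}.get(customer_id, 0.0)

def run_tests():
    """Test driver: executes the unit on each case; the comparison
    against the expected value acts as the oracle."""
    cases = [("gold", 80), ("unknown", 100)]
    failures = []
    for customer_id, expected in cases:
        actual = lookup_discount(customer_id, StubPricingService())
        if actual != expected:          # oracle: judge pass/fail
            failures.append((customer_id, actual, expected))
    return failures

print("failures:", run_tests())
```

The stub isolates the unit from its real dependency, the driver automates execution, and the comparison-based oracle automatically distinguishes correct from incorrect executions, the three concerns this chapter develops.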
Any complex process requires planning and monitoring. The quality process requires coordination of many different activities over a period that spans a full development cycle and beyond. Planning is necessary to order, provision, and coordinate all the activities that support a quality goal, and monitoring of actual status against a plan is required to steer and adjust the process.
Problems arise in integration even of well-designed modules and components. Integration testing aims to uncover interaction and compatibility problems as early as possible. This chapter presents integration testing strategies, including the increasingly important problem of testing integration with commercial off-the-shelf (COTS) components, libraries, and frameworks.