Quantitative analysis of unit verification as predictor in large-scale software engineering
Author
Summary, in English
Unit verification, including software inspections and unit tests, is usually the first code verification phase in the software development process. However, the principles of unit verification are weakly explored, mostly due to a lack of data: unit verification data are rarely collected systematically, and only a few studies with such industrial data have been published. We therefore explore the theory of fault distributions, originating in the quantitative analysis by Fenton and Ohlsson, in the weakly explored context of unit verification in large-scale software development. We conduct a quantitative case study on a sequence of four development projects covering consecutive releases of the same complex software product line system for telecommunication exchanges. We replicate the operationalization of earlier studies, analyzing hypotheses related to the Pareto principle of fault distribution, the persistence of faults, the effects of module size, and quality in terms of fault densities, but now from the perspective of unit verification. The patterns in unit verification results resemble those of later verification phases, e.g., regarding the Pareto principle, and may thus be used for prediction and planning purposes. Using unit verification results as predictors may improve the quality and efficiency of software verification.
Department/s
Publishing year
2016-12
Language
English
Pages
967-995
Publication/Series
Software Quality Journal
Volume
24
Issue
4
Document type
Journal article
Publisher
Springer
Topic
- Computer Science
Keywords
- Software fault distributions
- Unit verification
- Software metrics
- Empirical research
- Replication
Status
Published
ISBN/ISSN/Other
- ISSN: 0963-9314