Justification for the position

The quality of a software product can be expressed as an ordered pair:

  1. The ``goodness'' of the results.

  2. The ``goodness'' of the product.

The first part is easy to understand and evaluate: a software product that generates the specified results is correct. To determine whether a software product is correct, it can be formally verified or tested.

The second part, however, is not as easy to quantify. The software engineering literature [Pressman, Sommerville] is replete with factors and metrics for measuring this aspect of quality - that is, for determining how good a software product is, or how well it has been engineered. Most published factors and metrics are useful only to the extent that they can be used to argue ``statistically'' that one product is better than another. Unfortunately, it is not at all clear how even a skilled software designer can use these factors to produce a well-engineered software product.

We take the position in this paper that the potential verification effort (PVE) required to evaluate part 1 (i.e., correctness) locally is a useful factor for evaluating part 2. We show that designing to minimize PVE improves software quality, and that minimizing PVE directly implies improved ratings on conventional software engineering metrics.
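The intuition behind this position can be illustrated with a toy sketch. The paper does not define PVE this way; here we assume, purely for illustration, a crude proxy for local verification effort - the number of branch points a verifier must reason about - and compare two functionally equivalent designs under it. All names below are ours, not the authors'.

```python
def abs_branchy(x):
    # Design A: explicit branching -- two paths for a verifier to check.
    if x < 0:
        return -x
    else:
        return x

def abs_straight(x):
    # Design B: branch-free arithmetic -- a single straight-line path.
    # (Works for floats; chosen only to show the branch-count contrast.)
    return (x * x) ** 0.5

def branch_count(fn):
    # Hypothetical PVE proxy: count conditional jumps in the bytecode.
    import dis
    return sum(1 for ins in dis.get_instructions(fn)
               if ins.opname.startswith("POP_JUMP"))

print(branch_count(abs_branchy), branch_count(abs_straight))
```

Under this proxy, Design B requires strictly less local verification effort than Design A, even though both satisfy the same specification - the kind of comparison a designer minimizing PVE would make.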


