Elements of Reuse and Regression Testing of Object-Oriented Software

Lee J. White

Department of Computer Engineering and Science
Case Western Reserve University
Cleveland, Ohio 44106-7071
Tel: (216) 368-3919
Fax: (216) 368-2801
Email: leew@alpha.ces.cwru.edu

Khalil Abdullah

Abstract:

The focus of this study is on the constructs of inheritance, polymorphism, parametrized types and object composition as elements of reuse in object-oriented software design. This study has several objectives:
1) Determine the effects of these elements of reuse on the cost of object-oriented software development and maintenance. This will require extracting empirical information from the development process of operational oo-software, and selecting appropriate metrics to measure and calculate the cost impact of these reuse elements.
2) A firewall model has been developed to provide for regression testing at the integration level for small changes in oo-software, but it does not yet account for polymorphism or the other elements of reuse mentioned above. With the information gained in 1), the firewall model can be extended to include the effects of polymorphism. The firewall is then to be applied to both classes and objects in the software system.
3) From the results of the study described in 1) and 2), determine ways to improve both the design and maintenance of oo-systems, including testing and regression testing.

Keywords: Reuse elements, polymorphism, metrics, firewall, regression testing, class hierarchy testing, integration testing.

Workshop Goals: Interact with other researchers and practitioners working on maintenance and metrics for reuse.

Working Groups: Reuse of the Earlier Life-Cycle Artifacts.

Background

Our previous expertise and interest has been in the area of software testing of both function-oriented and object-oriented systems. We need to study reuse elements such as polymorphism in order to determine their costs and advantages in both development and maintenance. Understanding the effects of these reuse elements will allow us to extend the applicability of the firewall testing model.

Position

Our position is that for the important constructs of polymorphism in inheritance, parametrized types and object composition, little is known concerning the quantitative cost of developing or maintaining these constructs, and little is known concerning quantitative measures of the reuse of these constructs during either development or maintenance. We believe that we can contribute to the state of the art of software reuse in two ways: one is a better understanding of the effects of the reuse of these constructs in development and maintenance, and the other is in the area of metrics and quantitative measures of these effects.

MAINTENANCE OF OBJECT-ORIENTED SOFTWARE

We want to investigate the potential advantage and cost of each variety of reuse element for both software development and maintenance. It is expected that some forms represent a tremendous advantage and reduction in cost during development. Other forms may represent a tradeoff or more costly alternative during development, but provide an increased payoff during maintenance. Other forms may be identified as being of questionable value during either life cycle period.

There have been several important papers which have considered polymorphism and the maintenance of oo-technology. Wilde and Huitt [1] identify the most serious problem with polymorphism and dynamic binding during maintenance in terms of the dependencies which are created; static analysis alone cannot identify all the dependencies produced, and the identification of dependencies is a key part of maintenance. They also observe that if functions are to be reused or modified for reuse during maintenance, these reuse benefits will be achieved only if the methods can be located fairly efficiently. If polymorphism makes the location of the code obscure, then the reuse advantages will be lost. A second paper on this maintenance theme is by Lejter et al. [2]. After identifying essentially the same issues of inheritance and dynamic binding, they describe methods and a tool to browse and identify dependencies, class hierarchies and clusters.
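The dependency problem described above can be illustrated with a small sketch. The class and method names here are invented for the example; the point is only that a static scan of the call site names one method, while the method actually executed is chosen at run time:

```python
# Dynamic binding: the call site below statically names only
# Document.save, yet at run time it may dispatch to a subclass
# override, creating a dependency that static analysis alone misses.

class Document:
    def save(self):
        return "generic save"

class EncryptedDocument(Document):
    def save(self):                       # overrides Document.save
        return "encrypted save"

def archive(doc):
    # Statically this looks like a dependency on Document.save only;
    # the dependency on EncryptedDocument.save appears at run time.
    return doc.save()

print(archive(Document()))               # -> generic save
print(archive(EncryptedDocument()))      # -> encrypted save
```

A maintainer who modifies EncryptedDocument.save must realize that archive is affected, even though archive's code never mentions that class.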

OBJECT-ORIENTED METRICS

In the last five years, significant progress has been made in the area of oo-metrics. The best example of this progress is the book by Lorenz and Kidd [3], which provides both design metrics and project metrics, as well as guidelines and recommendations for their use, with considerable case study observations and data. An excellent research paper on a metrics suite for oo-design is provided by Chidamber and Kemerer [4].

At this point, our study will include the following oo-metrics:

1) Number of Abstract Classes (possibly weighted by number of methods)
2) Number of Concrete Classes (possibly weighted by number of methods)
3) Class Hierarchy Nesting Level
4) Total Number of Methods
5) Number of Methods Overloaded
6) Number of Operators Overloaded
7) Number of Parametrized Classes
8) Number of Objects Composed
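Several of these metrics can be collected mechanically from source code. As a minimal sketch, the fragment below computes metrics 3), 4) and 5) for a small invented class hierarchy using Python introspection; real measurements would of course be taken over an industrial code base:

```python
# Sketch: collecting a few of the proposed oo-metrics by introspection.
# The Shape/Circle/Sprite hierarchy is invented for illustration.
import inspect

class Shape:                     # abstract in intent: draw is unimplemented
    def draw(self):
        raise NotImplementedError

class Circle(Shape):
    def draw(self):
        return "circle"

class Sprite(Circle):
    def draw(self):
        return "sprite"

classes = [Shape, Circle, Sprite]

# 4) Total number of methods defined in the hierarchy.
total_methods = sum(
    sum(1 for v in vars(c).values() if inspect.isfunction(v))
    for c in classes)                              # -> 3

# 3) Class hierarchy nesting level: inheritance edges to the root,
# excluding Python's implicit `object` base.
def depth(cls):
    return len(cls.__mro__) - 2

max_nesting = max(depth(c) for c in classes)       # Sprite -> 2

# 5) Methods overloaded: a method redefining a name inherited from a base.
def overridden(cls):
    inherited = set()
    for base in cls.__mro__[1:-1]:
        inherited |= {n for n in vars(base) if not n.startswith("_")}
    return {n for n in vars(cls) if not n.startswith("_")} & inherited

methods_overloaded = sum(len(overridden(c)) for c in classes)   # -> 2

print(total_methods, max_nesting, methods_overloaded)
```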

Metrics and data representing effort in man-months are more difficult to obtain. The guidelines from Lorenz and Kidd [3] will be helpful, together with their data and their interpretation of how to use them. Industry effort data for class and method development, as well as for the other metrics listed above, will provide more specific effort data in man-months. Further planning and research is necessary here in order to identify a specific methodology to empirically determine the cost of each polymorphic method as opposed to its alternatives.

Comparison

About seven years ago, there was essentially no research literature, or even systematic practical literature, on the subject of testing oo-software. Although many interesting problems still need to be addressed, this field has now matured, and many research and practice papers are available. A comprehensive survey by Binder [5] will soon appear, documenting both research and practice in this area, as well as related areas (such as testing abstract data types). There is general agreement that any systematic and comprehensive test plan for oo-software must address several levels of testing.

Our concern is primarily with class hierarchy testing and class integration testing, although we will also be interested in testing client-server relationships as well. Excellent work has been done on the testing and integration testing of classes and the inheritance hierarchy; for example, see Harrold et al. [6]. Yet these approaches concentrate on static dependencies; we wish to extend them so that dynamic binding effects can also be included. A new book by Siegel [7], the only book so far devoted to oo-testing, discusses system, integration, class and object testing procedures.

Next we need to consider the concept of a "firewall", primarily for maintenance and regression testing when small changes are made in either function-oriented or object-oriented systems. Leung and White introduced the firewall concept in order to deal with regression testing at the integration level for functional design [8], [9], with more recent insight provided by Abdullah et al. [10]. Given one or more modules which have been changed, the firewall encloses the set of modules that must be retested. The firewall is an imaginary boundary that limits the amount of retesting for modified software containing possible regression errors introduced during modification. This procedure also involves the selection or development of test data for regression testing of the change. A key concept is the categorization of the modification as a specification change or a code change, as this has a major effect on the construction and implications of the firewall. As long as unit and integration testing are reliable, it can be shown that the firewall regression tests are also reliable. The paper by Abdullah et al. [10] showed that even if integration testing were unreliable, the concept of the firewall could be retained with only a small increase in the firewall and the required regression testing.
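Under one simplified reading of the firewall construction (the papers [8], [9] give a more refined treatment, distinguishing specification from code changes), the firewall for a code change contains the changed modules plus the modules that can reach them through the call graph, since a regression fault could propagate to any of their direct or transitive callers. A minimal sketch, with invented module names:

```python
# Sketch of a simplified firewall computation over a static call graph.
# The module names and the call graph are invented for illustration.
from collections import deque

# calls[m] = list of modules that m calls
calls = {
    "ui":     ["logic"],
    "logic":  ["db", "util"],
    "report": ["util"],
    "db":     [],
    "util":   [],
}

def firewall(changed, calls):
    # Invert the call graph: callers[m] = modules that call m.
    callers = {m: set() for m in calls}
    for m, callees in calls.items():
        for c in callees:
            callers[c].add(m)
    # Walk upward from the changed modules, collecting all callers.
    wall, queue = set(changed), deque(changed)
    while queue:
        m = queue.popleft()
        for caller in callers[m]:
            if caller not in wall:
                wall.add(caller)
                queue.append(caller)
    return wall

# A change to "util" pulls its callers "logic" and "report", and
# transitively "ui", inside the firewall; "db" stays outside.
print(sorted(firewall({"util"}, calls)))
```

Only the modules inside the returned set need regression testing; everything outside the boundary is exempt, which is the source of the cost savings.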

This concept of a firewall was applied to oo-software by Kung et al. [11]. The effects of a small modification of the system could be seen in this firewall model, and the affected objects, classes and client-server relationships within the firewall could be regression tested to detect any regression errors resulting from the modification. Again the emphasis was upon static effects of the class inheritance hierarchy, and polymorphic effects were not taken into account. This firewall model also did not distinguish between code changes and specification changes; including this distinction could sharpen the results and insights obtained in using the firewall model.

References

1
N. Wilde and R. Huitt, ``Maintenance support for object-oriented programs,'' IEEE Trans. on Software Engineering, vol. 18, pp. 1038-1044, December 1992.

2
M. Lejter, S. Meyers, and S. Reiss, ``Support for maintaining object-oriented programs,'' IEEE Trans. on Software Engineering, vol. 18, pp. 1045-1052, December 1992.

3
M. Lorenz and J. Kidd, Object-Oriented Software Metrics. Englewood Cliffs, NJ: Prentice Hall, 1994.

4
S. Chidamber and C. Kemerer, ``A metrics suite for object-oriented design,'' IEEE Trans. on Software Engineering, vol. 20, pp. 476-493, June 1994.

5
R. Binder, ``Testing object-oriented software: a survey,'' Journal of Software Testing, Verification and Reliability, vol. 6, nos. 3-4, to appear December 1996.

6
M. J. Harrold, J. McGregor, and K. Fitzpatrick, ``Incremental testing of object-oriented class structures,'' in Proc. of 14th Int. Conf. on Software Engineering, (Melbourne, Australia), pp. 68-80, 1992.

7
S. Siegel, Object-Oriented Software Testing: A Hierarchical Approach. New York: John Wiley, 1996.

8
H. Leung and L. White, ``A study of integration testing and software regression at the integration level,'' in Proc. Int. Conf. on Software Maintenance, (San Diego), pp. 290-301, 1990.

9
L. White and H. Leung, ``A firewall concept for both control-flow and data flow in regression integration testing,'' in Proc. Int. Conf. on Software Maintenance, (Orlando, FL), pp. 262-271, 1992.

10
K. Abdullah, J. Kimble, and L. White, ``Correcting for unreliable integration testing,'' in Proc. Conf. on Software Maintenance, (Nice, France), pp. 232-241, 1995.

11
D. Kung, J. Gao, P. Hsia, F. Wen, Y. Toyoshima, and C. Chen, ``Firewall regression testing and software maintenance of object-oriented systems,'' Journal of Object-Oriented Programming, pp. 51-63, May 1995.

Biography

Lee J. White holds the Jennings Chair as Professor of Computer Engineering and Science at Case Western Reserve University. His areas of research are in software testing and maintenance. He previously served as Chairman of the computing departments at CWRU, the University of Alberta and the Ohio State University. He served as General Chair of the International Conference on Software Maintenance in 1994 held in Victoria, BC, Canada. He was invited to be a presenter in the IEEE Videoconference Software Testing and Reliability in 1991. He has served as a consultant for the General Electric R and D Center, IBM Corporation, Monsanto Research Laboratory and North American Rockwell Corporation. He received a PhD in Electrical Engineering from the University of Michigan in 1967.

Khalil Abdullah is currently a PhD student at Case Western Reserve University. His area of research is software testing and maintenance. He holds a position with Kuwait University as a Teaching Assistant in the Mathematics and Computer Science Department. He received an MS in Computer Science from the University of Miami, Florida, in 1990.