Reuse and Analysis

 

Douglas A. Stuart

Microelectronics and Computer Technology Corporation

3500 Balcones Center Drive, Austin, TX 78759

Tel: (512)338-3478, fax: (512)338-3818

Email: stuart@mcc.com

URL: http://www.mcc.com/

 

Abstract

The synergy between reuse and analysis has long been recognized by advocates of both. Reuse can amortize the cost of analysis across multiple appearances of the analyzed artifact. Analysis can ensure that what is reused is a working artifact, not a defect. Architecture-based product line development has emerged as a means of achieving large-scale planned reuse. It further reduces the cost of analysis by raising the level of abstraction from models and designs to components and architectures, while also increasing the consequences of reusing flawed artifacts, which reinforces the need for analysis. Even so, most of the work on combining reuse and analysis has focused on the reuse of analysis. At least as important are analysis for reuse and reuse for analysis.

  

Keywords: Product line development, software architecture, analysis.

Workshop Goals: Explore the meanings and roles of analysis in, of, and for reuse.

Working Groups: Product line architectures, Testing and verification for reuse.

 

1 Background

The synergy between analysis and reuse has long been apparent to advocates of both reuse and analysis. Reuse of artifacts can amortize the cost of their analysis (with respect to various functional and quality properties) over multiple uses, while analysis can ensure that working artifacts, not defects, are being reused. Unfortunately, other factors can inhibit this synergy. In particular, to reduce the cost of analysis, the goal of the analysis is focused as tightly as possible, so the reusability of the analysis for broader goals may be minimal. Reuse, on the other hand, requires that the applicability of the analysis be as wide as possible. Reconciling this conflict is crucial for effective reuse and effective analysis.

2 Position

Effective reuse and effective analysis require reuse of analysis. However, effective reuse of analysis requires that the analysis be tailored for reuse, and that the reuse be tailored for analysis. In particular, the goals of the analysis should be set with an awareness of the reuse context, and the reuse context should be defined with analytic tractability as an explicit goal. Product line development provides the environment for this context setting.

3 Approach

Reuse of analysis has long been a goal of both the reuse and analysis communities [1,2]. Much of this effort has focused on isolating the artifact being analyzed from its context. Such local analysis [1] can then be used to certify that the artifact in question has a given property regardless of its context of use. The results of the analysis, and the analysis itself, are reused without modification.

Unfortunately, such reuse relies on the reusability not just of the analysis, but of the goal of the analysis. That is, this type of reuse requires that the property satisfied by the artifact be reused along with the analysis. When this is acceptable, reuse of analysis can be very effective. When the property cannot be reused, however, much of the analysis may not be reusable either. Requiring that the property as well as the analysis be reused places a severe restriction on reusing analysis.

The reason is that, because analysis is expensive, most analysis techniques are highly goal directed. Given a property to be checked against an artifact, the analysis tends to be tailored to do exactly what is necessary to establish that property. Consider the following analysis techniques. SAAM [3], the Software Architecture Analysis Method, is an approach to qualitative, comparative evaluation of candidate software architectures. It is built around an architectural quality property of interest, for example modifiability, and a set of scenarios deemed significant for that property. If the set of scenarios changes, as is likely if the architecture is reused in a different context, the entire analysis may have to be redone.

Although in this case it could be argued that the reliance on scenarios breaks locality, this is not the case for other techniques. FLAVERS [4] is a formal verification tool for checking finite behaviors of source code modules. It uses a model-checking-like approach, and to combat the state explosion problem, the property being checked is used to filter the set of events represented in the model. Verifying a different property requires rebuilding the state space. Similar approaches are used in other behavioral verification techniques [5], and the idea is at the very heart of symbolic model checking [6]. In these cases the analyses are definitely local, and yet cannot be reused except to the degree that the underlying formalization of the artifact is expressive enough to accommodate a new property.
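The effect of such property-driven filtering on model size can be sketched with a toy example. The event sets and the size calculation below are hypothetical illustrations, not the actual FLAVERS algorithm: the model built for one property is small, the model for a different property must be rebuilt, and a model over all events is reusable across properties but far larger.

```python
from itertools import product

# Hypothetical components, each contributing a set of events.
components = [
    {"open", "close", "log"},
    {"read", "write", "log"},
    {"lock", "unlock", "log"},
]

def model_size(property_events: set) -> int:
    """Size of a model that tracks, per component, only the events the
    property mentions, plus one 'idle' state standing in for all others."""
    kept = [(comp & property_events) | {"idle"} for comp in components]
    return len(list(product(*kept)))  # cross product of per-component states

prop_a = {"open", "close", "read"}    # a property about the file protocol
prop_b = {"lock", "unlock", "write"}  # a different property: different events

print(model_size(prop_a))  # 6: small model, tailored to prop_a
print(model_size(prop_b))  # 6: but this is a rebuilt model, not a reused one
all_events = {e for c in components for e in c}
print(model_size(all_events))  # 64: reusable across properties, but far larger
```

The numbers are artificial, but the shape of the trade-off is the point: filtering makes each individual analysis cheap at the price of tying it to one property, while the property-independent model pays its full cost up front.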

To see the impact of being unable to reuse the analysis of an artifact apart from reusing the property that was the goal of the analysis, consider the following example. A real-time system is being built that performs a particular task. A reusable artifact is available that performs the task, but its requirement was to execute at a rate of 100 times per second, and analysis confirmed only that it met that requirement. The new system, however, requires the task to execute at a rate of 200 times per second. The analysis result associated with the artifact is useless for determining whether it can be reused in the new system.

The problem lies not with the artifact, which may well be capable of operating at the higher rate, but with the analysis. The analysis required for reuse is not analysis that confirms an artifact meets its designed requirements, but analysis that indicates exactly what the characteristics of the artifact as built are. Unfortunately, as discussed in the preceding paragraphs, this type of analysis is more expensive than checking an artifact against a stated requirement, and may be intractable or, in some cases, undecidable [7].
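The distinction between the two kinds of analysis result can be made concrete with a small sketch. The types and rates below are hypothetical: a result that only records "requirement met" cannot answer a stronger query, while a measured characteristic of the artifact as built can.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RequirementCheck:
    """Result of analysis that only confirms a stated requirement was met."""
    required_rate_hz: float
    met: bool

    def supports(self, needed_rate_hz: float) -> Optional[bool]:
        # The result transfers only when the new need is no stronger.
        if self.met and needed_rate_hz <= self.required_rate_hz:
            return True
        return None  # inconclusive: the analysis says nothing about higher rates

@dataclass
class MeasuredCharacteristic:
    """Result of analysis that records the artifact's actual capability."""
    max_rate_hz: float

    def supports(self, needed_rate_hz: float) -> bool:
        return needed_rate_hz <= self.max_rate_hz

old_result = RequirementCheck(required_rate_hz=100.0, met=True)
print(old_result.supports(200.0))  # None: useless for the new 200 Hz system
print(MeasuredCharacteristic(max_rate_hz=250.0).supports(200.0))  # True
```

The second form answers any rate query, which is exactly why it is the one reuse needs, and exactly why it costs more to establish.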

Product line development [8] offers a way out of this dilemma. Product line development is a form of strategic reuse that aims to achieve large scale reuse by designing an asset base to be used to develop a family of applications in a particular domain. The domain is chosen because the common features of the domain allow for a high degree of reuse, and the variability among products in the product line (and the domain) is likewise planned to maximize reuse. Assets are then built to span the planned dimensions of variability.

The planning for variability is the aspect of product line development that can lead to analysis for reuse, as opposed to reuse of analysis. Rather than performing analysis to determine whether a particular artifact meets its requirements, which may place unacceptable limits on the reusability of that analysis, or trying to determine the exact characteristics of the artifact, which will be expensive at best, analysis can be performed with respect to the variability of the product line domain. This is still goal-driven analysis, but with the goal determined by the variability of the product line rather than by the requirements of a single product. Returning to the previous example, if the systems were being built as a product line, and variability in the task processing rate were part of the planned variability of the domain, then the original analysis of the artifact would have indicated which part of that planned variability it spanned.
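As a sketch of this idea, continuing the rate example with hypothetical names and ranges: analysis is performed once against the planned variability of the domain, and its result records the portion of that variability the artifact spans, which any product in the line can then query.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RateVariability:
    """A planned dimension of variability: task rates (Hz) products may require."""
    low_hz: float
    high_hz: float

@dataclass(frozen=True)
class ArtifactRateAnalysis:
    """Analysis performed once against the variability, not a single product."""
    supported: RateVariability  # portion of the planned variability spanned

    def reusable_for(self, product_rate_hz: float) -> bool:
        return self.supported.low_hz <= product_rate_hz <= self.supported.high_hz

domain = RateVariability(50.0, 500.0)  # planned variability of the whole domain
analysis = ArtifactRateAnalysis(RateVariability(50.0, 250.0))  # artifact's span

print(analysis.reusable_for(100.0))  # True: the original product's rate
print(analysis.reusable_for(200.0))  # True: the same analysis answers the new product
print(analysis.reusable_for(400.0))  # False: outside the span the analysis established
```

The single range-scoped result replaces a separate requirement check per product, at the cost of the wider analysis discussed next.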

This type of analysis, driven by requirements that reflect variability across a planned product line domain, will be more expensive than analysis based on the requirements of a single application. For example, performing FLAVERS [4] analysis suitable for reuse across a product line may require building a state space that includes all of the events of interest across the product line, resulting in a larger model and longer analysis times. This type of analysis is appropriate when its cost is less than that of redoing the analysis for each individual application reusing the artifact, and also less than that of building the full state space for the artifact.
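The break-even condition just stated can be written down as a simple, admittedly idealized cost comparison. The function and the uniform per-product cost are hypothetical illustrations of the argument, not a real cost model.

```python
def product_line_analysis_pays_off(cost_product_line: float,
                                   cost_per_product: float,
                                   num_products: int,
                                   cost_full_state_space: float) -> bool:
    """Variability-scoped analysis is worthwhile when its one-time cost beats
    both re-analysis for every product and full characterization of the artifact."""
    return (cost_product_line < cost_per_product * num_products
            and cost_product_line < cost_full_state_space)

# One 40-unit analysis vs. six 10-unit re-analyses vs. a 100-unit full model.
print(product_line_analysis_pays_off(40.0, 10.0, 6, 100.0))  # True
# With only two products, redoing the cheap analysis twice wins instead.
print(product_line_analysis_pays_off(40.0, 10.0, 2, 100.0))  # False
```

The comparison makes explicit that the case for product-line-scoped analysis strengthens with the number of products that will reuse the artifact.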

This increased cost of analysis, when it is performed with an eye to reuse across the variability of a product line domain, leads to the final observation: reuse for analysis is necessary to achieve reuse of analysis. Just as analysis must be done with reuse in mind to facilitate reuse, reuse must be done with analysis in mind. Again, a product line provides the context in which this can be done. The boundaries of the product line domain, which establish the variability used to circumscribe the analysis, should be chosen so that the analysis deemed necessary can actually be performed. This also affects the artifacts chosen to populate the asset base of the product line; for example, the portion of the domain spanned by an individual artifact determines the variability that its analysis must support.

 

4 Comparison

This completes the circle. Reuse of analysis, which as indicated has been the focus of most efforts to date, is necessary for the success of both reuse and analysis. Analysis for reuse is necessary to broaden the reusability of analysis. Reuse for analysis is necessary for analysis for reuse to be practical. Achieving these goals will require more interaction, cooperation, and understanding between the reuse and analysis communities.

References

[1] Jeff Poulin and Will Tracz. WISR'93: 6th Annual Workshop on Software Reuse summary and working group reports. In WISR'93, Owego, NY, November 1993.

[2] Stephen H. Edwards and Bruce W. Weide. WISR8: 8th Annual Workshop on Software Reuse summary and working group reports. In WISR8, Columbus, OH, March 1997.

[3] Rick Kazman, Len Bass, Gregory Abowd, and Mike Webb. SAAM: A method for analyzing the properties of software architectures. In Proceedings of the 16th International Conference on Software Engineering, pages 81-90, Sorrento, Italy, May 1994.

[4] Matthew B. Dwyer. Data Flow Analysis for Verifying Correctness Properties of Concurrent Programs. PhD thesis, University of Massachusetts Amherst, 1995.

[5] Jin Yang, Aloysius K. Mok, and Douglas A. Stuart. A new generation modechart verifier. In RTAS'95, 1995.

[6] Rajeev Alur, Thomas A. Henzinger, and P.-H. Ho. Automatic symbolic verification of embedded systems. In Proc. IEEE Real-Time Systems Symposium, pages 2-11. IEEE Computer Society Press, December 1993.

[7] Rajeev Alur and Thomas A. Henzinger. Real-time logics: Complexity and expressiveness. In Proceedings of the 5th Annual IEEE Symposium on Logic in Computer Science, March 1990.

[8] Paul Clements, Linda M. Northrop, et al. A framework for software product line practice, version 1.0. Technical report, Software Engineering Institute, September 1998.

Biography

Douglas Stuart (stuart@mcc.com), http://www.mcc.com/

Douglas Stuart is currently a member of the technical staff at the Microelectronics and Computer Technology Corporation in Austin, Texas, working on architecture-based product line development, focusing on architecture description and analysis, and software testing. He received his Ph.D. and M.S. in Computer Science from the University of Texas in 1996 and 1989, respectively, researching specification languages for real-time systems, and a B.S. in Mathematics and Computer Science from the Ohio State University in 1983.