Position

When a user enters his software reuse library of choice, he follows a typical sequence of commands that permits him to identify the parameters of his problem, search the library, browse candidate assets, and extract those that meet his criteria. By tracking the user's actions through this process, one can infer the user's objective and determine the degree of success in achieving it. Two indicators are sought: first, the effectiveness of the library in meeting the user's requirements; second, the efficiency with which the library does so. Define an extraction ratio, ER = number of user extractions per search; this is an indicator of the overall effectiveness of the library. Next, define an extraction index, EI = ratio of the number of user extractions to the number of search candidates found by the library mechanism; this is an indicator of the efficiency with which the mechanism finds candidates. Other intermediate parameters can add insight to the process, notably the ratio of browses to searches or extractions.
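The two definitions above can be sketched directly; this is a minimal illustration, with function names and the sample counts invented for the example (they are not ASSET data):

```python
# Minimal sketch of the two metrics defined above.
# Function names and sample figures are illustrative only.

def extraction_ratio(extractions, searches):
    """ER: user extractions per search -- overall library effectiveness."""
    return extractions / searches

def extraction_index(extractions, candidates):
    """EI: extractions per candidate returned -- search-mechanism efficiency."""
    return extractions / candidates

# Hypothetical tallies: 65 extractions over 50 searches that
# together returned 400 candidate assets.
er = extraction_ratio(65, 50)    # 1.3 components extracted per search
ei = extraction_index(65, 400)   # 0.1625 extractions per candidate shown
print(f"ER = {er:.2f}, EI = {ei:.4f}")
```

Note that ER can exceed 1 (a single search may yield several extractions), while EI is bounded above by 1 only when every extraction corresponds to a distinct candidate.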

When this approach was applied to ASSET's user activity over a four-month period, certain patterns became apparent. The typical user search is an iterative process; the user may be unsuccessful on the first try in matching his needs with the search mechanism's schema, but typically converges on a matching path within two or three tries. The same user learns from the experience and converges more quickly on subsequent searches. An often-chosen alternative is to scan the catalog headings and call out specific candidates by their unique library identification numbers. The extraction index and extraction ratio are certainly functions of the classification scheme, ease of use, search-mechanism peculiarities, extent of the library's holdings, and so forth. Nevertheless, this simple metric technique has merit for the reasons stated.

Analysis of the cumulative statistics of ASSET usage since inception of this approach shows some expected and some unexpected results. First, ER, the number of extractions per search on a monthly basis, has been quite consistently within a band of 0.9 to 1.5, with a cumulative average of 1.3; users have thus been finding somewhat more than one component per search. Second, EI, the ratio of extractions to candidates turned up by a search, varies over a wide range; the results are erratic and not statistically significant. This is because, at one extreme, users may invoke the entire catalog (hundreds of candidates) just to browse through it, while at the other extreme a user extracts a component each time he calls for a known component by identification number. Third, we have found that the ratio of searches to browses to extractions is stable from month to month at roughly 1:3:1.
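The monthly aggregation described above can be sketched as follows; the tallies are invented for illustration (they are not the actual ASSET figures), but they show how the ER band check and the searches:browses:extractions ratio fall out of simple counts:

```python
# Hypothetical monthly activity tallies -- figures invented for
# illustration, not actual ASSET data.
months = [
    {"searches": 40, "browses": 118, "extractions": 42},
    {"searches": 55, "browses": 170, "extractions": 60},
    {"searches": 48, "browses": 150, "extractions": 70},
]

# Check that each month's ER falls inside the reported 0.9-1.5 band.
for m in months:
    er = m["extractions"] / m["searches"]
    assert 0.9 <= er <= 1.5

# Cumulative ER and the searches:browses:extractions ratio,
# normalized so that searches = 1.
total = {k: sum(m[k] for m in months) for k in months[0]}
cum_er = total["extractions"] / total["searches"]
b_ratio = total["browses"] / total["searches"]
x_ratio = total["extractions"] / total["searches"]
print(f"cumulative ER = {cum_er:.2f}, "
      f"ratio ~ 1 : {b_ratio:.1f} : {x_ratio:.1f}")
```

With these invented tallies the cumulative ER lands near 1.2 and the ratio near 1:3:1, consistent in shape with the behavior reported above.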