Newsgroups: comp.windows.x.apps
Path: sparky!uunet!vicorp!ron
From: ron@vicorp.com (Ron Peterson)
Subject: Automated testing
Message-ID: <1992Aug31.193740.3729@vicorp.com>
Organization: V. I. Corporation, Northampton, Massachusetts
Date: Mon, 31 Aug 1992 19:37:40 GMT
Lines: 781

Here's an addendum to Vonn Marsch's summary of responses about automated
testing of X applications.  I asked a similar question and received somewhat
different replies.  In particular, it was quite interesting to learn a little
about how others go about testing.  (Some use college students, some use
regression tests, some develop their own tools, some purchase tools.)

If you have experience testing medium to large graphic applications, please
send me a note describing what you worked on and how you approached it.  If I
get enough of them I'll post another summary.  Here's a description of my
experience to get things started:

VI Corp. markets a graphic interface prototyper called DataViews.  DataViews
is built around 'views', which are 2D collections of graphical objects (such
as lines, rectangles, graphs, input widgets, images, etc.).  The product
consists primarily of DVdraw (a layout and drawing editor which also lets you
define dynamics for graphical objects and define programming rules for the
way objects and views change and behave) and DVtools (a library of graphics
routines which users can use to control and modify views, objects,
event-handlers, etc.).  Other related products are also starting to come on
line, such as DVXDesigner, an X widget layout editor and code generator that
allows DataViews views to be displayed in a drawing-area widget.

I've been with the company for about three years in the testing department
and am the primary designer and coder of new tests.  When I first arrived
here the testing philosophy was "the developers are done with it, now you
test it before we ship it in a few weeks."  As the company has grown, things
have changed to where we currently view testing as part of the design process
for the products.  Hence, I am investigating ways to automate some of our
testing.

Towards that end I've tried writing a few things in-house, such as an event
recorder/player and a view tester.  The event recorder/player was a failure
because it took longer to record the tests than to do them manually, and
since the user interface kept changing I had to keep starting over to record
the tests.  Eventually we ran out of time and reverted to manual testing.
The view tester is currently being used and, with a little work, looks like
it will be a valuable tool for finding memory leaks, run-time errors, and
crashes.

My current thinking is that an X-based event recorder/playback tool combined
with a bitmap comparer (one that uses compressed bitmaps) might be made
independent of the user interface as well as easy to port to the 30+ Unix and
VMS machines we support.  By using X and being careful to control window size
and fonts, a single automated test suite might be devised that could run on
most or all platforms.

Currently all of our testing tools have been developed in-house, and many
rely on the prototype-building ability of DataViews, which allows us to
create interactive tests without writing code.  Some code has been written to
test drivers on a function-by-function basis, which has proved valuable, but
there has never been enough time to write test code for every function in the
library (and I see less benefit in such tests than I used to).
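To give a feel for the window-compare half of that recorder/comparer plan,
here is a rough sketch (nothing we actually have running) that grabs two
windows with plain Xlib calls and counts the pixels that differ.  The program
name wincmp, the helper names, and the assumption that both windows share the
same size, depth, and visual are mine; a real tool would also mask out
regions that legitimately differ and store its reference images in compressed
form.

    /* cc wincmp.c -lX11 */
    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    /* Grab the contents of a window as a client-side image. */
    static XImage *grab(Display *dpy, Window w)
    {
        XWindowAttributes a;
        XGetWindowAttributes(dpy, w, &a);
        return XGetImage(dpy, w, 0, 0, a.width, a.height, AllPlanes, ZPixmap);
    }

    /* Count differing pixels; assumes equal size, depth, and visual. */
    static long diff_pixels(XImage *a, XImage *b)
    {
        long bad = 0;
        int x, y;

        if (a->width != b->width || a->height != b->height)
            return -1;                        /* sizes differ: fail outright */
        for (y = 0; y < a->height; y++)
            for (x = 0; x < a->width; x++)
                if (XGetPixel(a, x, y) != XGetPixel(b, x, y))
                    bad++;
        return bad;
    }

    int main(int argc, char **argv)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window w1, w2;
        XImage *i1, *i2;
        long bad;

        if (dpy == NULL || argc != 3) {
            fprintf(stderr, "usage: wincmp window-id window-id\n");
            return 2;
        }
        /* Window ids as printed by xwininfo, e.g. 0x2c00007. */
        w1 = (Window) strtoul(argv[1], NULL, 0);
        w2 = (Window) strtoul(argv[2], NULL, 0);
        i1 = grab(dpy, w1);
        i2 = grab(dpy, w2);
        if (i1 == NULL || i2 == NULL) {
            fprintf(stderr, "couldn't grab one of the windows\n");
            return 2;
        }
        bad = diff_pixels(i1, i2);
        printf("%ld differing pixels\n", bad);
        XDestroyImage(i1);
        XDestroyImage(i2);
        XCloseDisplay(dpy);
        return bad != 0;
    }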
Demo and example code also form a critical part of our testing process, since
they are designed to be similar to applications our customers would write.
Still, much of testing involves sitting and staring at the screen while
clicking and moving a mouse.  This is the most time-consuming and dull part
of testing, and it is the part we'd most like to automate.

ron@vicorp.com or uunet!vicorp!ron
-----------------------------------------------------------------------------
>From: uunet!cd.amdahl.com!wdc50 (Winthrop D Chan)
>
>For PD, take a look at TCL, GCT, and EXPECT.  The last two are based on TCL.
>I've heard good things; they are all supposed to be able to handle keyboard
>& mouse event recording/playback on multiple windows.

I downloaded a copy of TCL.  So far I haven't seen anything about it being
able to handle keyboard and mouse events.  If it does handle them, then I get
the impression you have to link some TCL code into your executable; it sounds
a lot like REXX.  GCT has nothing to do with TCL (an extensive note from the
creator of GCT is included below).  Here's an excerpt from the TCL README:

>Tcl
>
>by John Ousterhout
>University of California at Berkeley
>
>This directory contains the sources for Tcl, an embeddable tool command
>language.  For an introduction to the facilities provided by Tcl, see
>the paper ``Tcl: An Embeddable Command Language'', in the Proceedings
>of the 1990 Winter USENIX Conference.

I also found a copy of EXPECT, which is indeed based on TCL:

>This is the README file from "expect", a program that performs
>programmed dialogue with other interactive programs.  It is briefly
>described by its man page, expect(1).  More examples and further
>discussion about implementation, philosophy, and design are in
>"expect: Curing Those Uncontrollable Fits of Interaction" by Don
>Libes, Proceedings of the Summer 1990 USENIX Conference, Anaheim,
>California, June 11-15, 1990.
>
>expect was designed and written by Don Libes, January - April, 1990.
>
>NAME
>     expect - programmed dialogue with interactive programs
>
>SYNOPSIS
>     expect [ -d ] [ -c cmds ] [[ -f ] cmdfile ] [ args ]
>
>INTRODUCTION
>     expect is a program that "talks" to other interactive programs
>     according to a script.  Following the script, expect knows what can
>     be expected from a program and what the correct response should be.
>     An interpreted language provides branching and high-level control
>     structures to direct the dialogue.  In addition, the user can take
>     control and interact directly when desired, afterward returning
>     control to the script.
>
>     The name "expect" comes from the idea of send/expect sequences
>     popularized by uucp, kermit and other modem control programs.
>     However, unlike uucp, expect is generalized so that it can be run as
>     a user-level command with any program and task in mind.  (expect can
>     actually talk to several programs at the same time.)
>
>     For example, here are some things expect can do:
>
>     +  Cause your computer to dial you back, so that you can login
>        without paying for the call.
>
>     +  Start a game (e.g., rogue) and if the optimal configuration
>        doesn't appear, restart it (again and again) until it does, then
>        hand over control to you.
>
>     +  Run fsck, and in response to its questions, answer "yes", "no" or
>        give control back to you, based on predetermined criteria.
>
>     +  Connect to another network or BBS (e.g., MCI Mail, CompuServe)
>        and automatically retrieve your mail so that it appears as if it
>        was originally sent to your local system.
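The heart of expect is the send/expect loop itself.  Just to illustrate the
idea (this is not how expect itself is implemented; expect is built on Tcl
and allocates pseudo-terminals so that the child line-buffers its output),
here is a toy C program that drives a child over pipes, sends it a line, and
waits for the text it expects back.  I use cat as a stand-in for a real
interactive program, and the helper name expect_str is made up.

    /* cc sendexpect.c */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Read from fd until `pattern' shows up in the accumulated output,
       or until EOF. */
    static int expect_str(int fd, const char *pattern)
    {
        char buf[4096];
        size_t len = 0;
        ssize_t n;

        while ((n = read(fd, buf + len, sizeof buf - len - 1)) > 0) {
            len += (size_t) n;
            buf[len] = '\0';
            if (strstr(buf, pattern) != NULL)
                return 0;                 /* saw what we expected */
        }
        return -1;                        /* EOF (or error) without a match */
    }

    int main(void)
    {
        int to_child[2], from_child[2];
        pid_t pid;

        pipe(to_child);
        pipe(from_child);
        pid = fork();
        if (pid == 0) {                   /* child: the "interactive" program */
            dup2(to_child[0], 0);         /* its stdin comes from us          */
            dup2(from_child[1], 1);       /* its stdout goes back to us       */
            close(to_child[1]);
            close(from_child[0]);
            execlp("cat", "cat", (char *) NULL);  /* cat just echoes input    */
            _exit(127);
        }
        close(to_child[0]);
        close(from_child[1]);

        /* send ... expect ... */
        write(to_child[1], "hello tester\n", 13);
        if (expect_str(from_child[0], "hello") == 0)
            printf("child said what we expected\n");
        else
            printf("no match before EOF\n");

        close(to_child[1]);               /* EOF makes cat exit */
        waitpid(pid, NULL, 0);
        return 0;
    }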
-----------------------------------------------------------------------------
From: uunet!euclid.JPL.NASA.GOV!pjs (Peter J. Scott)

If you have Motif 1.2, then it has a QA scripting language for this purpose.
It even compares windows.  Only Motif apps, though.
-----------------------------------------------------------------------------
From: uunet!cbnea.att.com!tr (Aaron L Hoffmeyer)

I encourage you to contact Dr. Bill Wolters at w.j.wolters@att.com.  Bill is
working on a front-end to a commercial application called CAPBAK, which in
raw form is, to a limited degree, the kind of tool you are looking for.
However, as Bill will be able to tell you, one must spend a great deal of
effort in making it a solid, usable tool.  Bill can give you specs, but,
considering that he has worked with this tool for a year and it will probably
take another year to make it the end-all, be-all tool that he would like, I
feel I must warn you that purchasing such a tool and putting it to use are
not one and the same.  Bill is really on the leading edge with this
technology; he has presented at some conference, maybe USENIX, quite
recently, so he knows where the market is.

[I investigated CAPBAK about two years ago.  We decided not to use it because
it was more than we could afford at the time (being a small company) and it
wasn't portable to the 20+ Unix and VMS platforms our product ran on.  It is
a tool for capturing mouse and keyboard events and then replaying them.  It
was designed to be used in conjunction with some other testing tools written
by the same company, which, I think, is Software Research, Inc.]
-----------------------------------------------------------------------------
From: GAMBLE <uunet!heifetz!image!dos!gamble>

We have looked here for automated tools, and I can pass on what little we
have learned.

  * We do not like the 'total solution' packages.  Instead we use a variety
    of small tools and utilities.
  * Software Management News magazine is a good source for tools.
    SMN (415) 969-5522
  * We use tools from Software Research (Unix) - expensive but OK; has
    graphic compare capabilities.
  * C-Cover for code coverage analysis - not good for huge programs.
  * Microsoft Test for Windows applications - nice, easy to use.

We have an extensive list of test cases and usually hire college co-ops to
run the tests.  It provides them with valuable experience, and we get the
testing done.

For automation, I have had better luck using keyboard cursor capture rather
than mouse capture.  Our graphic screens change resolution across different
graphics boards, so mouse movement is not always accurate.  We have so far
been able to save our graphic images as files and use a file-compare program
with more success.  We use ExDiff from Software Research, which allows us to
use byte offsets to eliminate time and date tags or version numbers embedded
in our graphic files.

Mostly, be cautious of software that claims to be the total solution.  Use
many small, useful utilities, and keep track of things by using a bugs
database.  (We wrote our own using DBase IV for Unix.)
-----------------------------------------------------------------------------
From: Jim Curran <uunet!cis.ohio-state.edu!jim>

XTest and XTrap are extensions to the X server.  Extensions provide
additional capabilities over and above what is provided by a "vanilla"
server.  It is unfortunate, but in order to use these extensions (and others)
you must rebuild the X server.  This may not be a hassle for your X
developer, but it does take a little extra time and patience.
Once the extension is installed (you can list your extensions by running
xdpyinfo) you can use calls to the extension library (libXext.a) and it will
talk to the server's extensions for you.

XTest and XTrap are (as I recall) copyrighted by HP and DEC, respectively.
However, the copyright is one of those "It belongs to us but you can do
anything you want with it" types of copyrights.  In other words, you can have
it.  As I mentioned, XTrap is on export.lcs.mit.edu.

Here's the confusing part to me: XTest seems to have come out around the time
that R4 appeared and seems to have been part of the official MIT dist (I
could be wrong).  I have not encountered the XTest extension by itself EXCEPT
when I snagged the package called xrecplay from export.  It contains the
extension and some special stuff for Suns.  I don't really feel like grabbing
the R4 dist and looking around for XTest, but my guess is that it is there.
If it is not, then you can get it from xrecplay and beat it into usefulness.

I've spent the last few weeks beating XTest into submission for my purposes
(a program which does demos - moves the mouse, keyboard hits, voice, etc.)
and I can say that it more or less works.  I looked at XTrap, and it was very
elaborate and came with a better manual, but it added *lots* to our X server,
which was something I couldn't have.  (XTest ~20K, XTrap ~200K.)
-----------------------------------------------------------------------------
From: Brian Marick <uunet!cs.uiuc.edu!marick>

Capture/playback tools and graphics image "differencers": there are several
out there.  Two I have in my tools file are from Software Research
(800-942-7638) and Mercury (408-982-0100).  I haven't used either.  DEC also
has one, which may be of interest for VMS.  fullerton@clt.enet.dec.com may be
the person to talk to.  You may have problems with portability, not to
mention price, if you're wanting to use these on several machines.  I don't
believe any of the commercial tools do anything more than bitmap comparisons,
which will cause problems with spurious differences and storing huge bitmaps
(Mercury compresses the bitmaps, I think, but that's still a lot of data per
test).

---

Automatic test generation does not work well, except in specialized
circumstances.  If you have a large testing problem like this, don't worry
about path coverage.  Worry about writing good tests at the system or
subsystem level, based on what the program is supposed to do, what the users
are likely to do, and potential implementation errors.  Measure coverage
later, to point out problems with your testing.  (I distribute a free
coverage tool that runs on most of the machines you mentioned, but not on
VMS - details below.)

I can send you a paper on "three ways to improve your testing", either
postscript or hardcopy.  It talks about some sensible tactics in situations
like yours.  If you want lots of detail about testing, I teach a course in
testing subsystems and I make my living consulting in testing.
(217) 351-7228

--

One upcoming conference is the Pacific Northwest Software Quality Conference,
Oct 19-21, in Portland.  Call Pacific Agenda, (503) 223-8633, for details.
It was pretty good last year.  IEEE Software has had some good
practitioner-oriented articles about testing in recent years.  I run a
mailing list on testing research (testing-research-request@cs.uiuc.edu).
It's pretty dead, unfortunately.

Brian Marick, marick@cs.uiuc.edu, uiucdcs!marick, marick@testing.com (pending)
Testing Foundations:  Consulting, Training, Tools.
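[Stepping back to Jim Curran's XTest note for a moment, here is a minimal
sketch of the kind of event playback the extension makes possible.  The
header and library names below (X11/extensions/XTest.h, -lXtst) and the
XTestFake* calls are the ones shipped with later X distributions; older
servers packaged the client side differently, so treat the build details as
assumptions rather than gospel.  A recorder would log real events and replay
them through calls like these, with window size and fonts pinned down as
discussed earlier.]

    /* cc playback.c -lX11 -lXtst */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/keysym.h>
    #include <X11/extensions/XTest.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int ev, er, maj, min;
        KeyCode a;

        if (dpy == NULL || !XTestQueryExtension(dpy, &ev, &er, &maj, &min)) {
            fprintf(stderr, "XTEST extension not present in this server\n");
            return 1;
        }

        /* Move the pointer to (100,100) on screen 0 and click button 1. */
        XTestFakeMotionEvent(dpy, 0, 100, 100, 0);
        XTestFakeButtonEvent(dpy, 1, True, 0);       /* press   */
        XTestFakeButtonEvent(dpy, 1, False, 0);      /* release */

        /* Type the letter 'a' into whatever window has the focus. */
        a = XKeysymToKeycode(dpy, XK_a);
        XTestFakeKeyEvent(dpy, a, True, 0);
        XTestFakeKeyEvent(dpy, a, False, 0);

        XFlush(dpy);
        XCloseDisplay(dpy);
        return 0;
    }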
Freeware test coverage tool: see cs.uiuc.edu:pub/testing/GCT.README

====

GCT is a freely-distributable coverage tool for C programs, based on the GNU
C compiler.  Coverage tools measure how thoroughly a test suite exercises a
program.  I tested without a coverage tool for years, but I've used GCT and
its predecessor for about three years now.  I'll never go back to testing
without coverage.

GCT provides coverage measures of interest to both practitioners and testing
researchers:

(1) branch coverage.  Do the tests force every branch in both directions?
    Has every case in a switch been taken?
(2) multiple-condition coverage.  In a statement like 'if (A || B)', has A
    been both true and false?  Has B?  Multiple-condition coverage is
    stronger than branch coverage.
(3) loop coverage.  Has every loop been skipped?  Has every loop been
    iterated exactly once?  More than once?  Tests to satisfy loop coverage
    can force the detection of some bugs that branch coverage might miss.
(4) relational operator coverage.  Do tests probe for off-by-one errors?
(5) routine and call coverage.  Has every routine been entered?  Has every
    function call in a routine been exercised?  These two measures are weaker
    than the above, but they are useful for getting a general picture of the
    coverage of system testing.
(6) race coverage.  This special-purpose measure is useful for evaluating
    stress tests of multi-threaded systems, tests designed to find locking
    and concurrency problems.
(7) weak mutation coverage.  This is a type of coverage of most interest to
    researchers.

GCT has been used in large C development projects.  It is
"industrial-strength".

Platforms

GCT runs on many versions of UNIX; porting it to new ones is usually simple.
An earlier version ran on HP/Apollo's Aegis, so the current version should be
easy to port.

Mechanism

GCT takes C source code and adds instrumentation.  This new C source is then
compiled as usual.  This allows you to use your favorite C compiler.  GCT is
designed to work with existing makefiles (or other system build mechanisms).
For example, instrumenting the entire System V Release 4 UNIX kernel requires
changes to only the top-level makefile.

As the compiled program runs, it updates an internal log.  The log is written
on exit (or, in the case of an unending program like the UNIX kernel,
extracted from the running program).  Other tools produce coverage reports
(detailed or summary) from the log.  Instrumentation is time-efficient, a few
instructions per instrumentation point.

Retrieving GCT

GCT is available via anonymous FTP from cs.uiuc.edu.  Log in as user
"anonymous", and give your email address as the password.  The file
/pub/testing/GCT.README describes the GCT distribution.  GCT and its
documentation are also available on 8mm and QIC tapes for $150.  After
retrieving and installing GCT, be sure to run the tutorial.  See the document
'A Tutorial Introduction to GCT'.

Support and Training

While GCT is free, the services you expect for a commercial product are
available for a price.  Contact me for information about support and
training, or for information about other services I offer.  Informal GCT
support is available through the GCT mailing list; send mail to
gct-request@ernie.cs.uiuc.edu to join it.

Brian Marick
Testing Foundations
809 Balboa, Champaign, IL 61820
(217) 351-7228
marick@cs.uiuc.edu
----------------------------------------------------------------------------
From: Brian Marick <uunet!cs.uiuc.edu!marick>

This may also be of interest.
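[To make the coverage kinds in the GCT blurb concrete, here is a small,
made-up C routine annotated with the test cases each measure would demand.
The function, the annotations, and the test values are my own illustration,
not anything taken from the GCT documentation.]

    /* A made-up routine, annotated with what each coverage kind asks for. */
    int classify(int a, int b, int n)
    {
        int i, hits = 0;

        if (a > 0 && b > 0)       /* branch: take the if both ways.          */
            hits++;               /* multi-condition: a>0 and b>0 must each  */
                                  /* be seen true and false.                 */
                                  /* relational: probe a==0 and b==0 to      */
                                  /* catch ">" written where ">=" was meant. */

        for (i = 0; i < n; i++)   /* loop: need runs with n==0 (skipped),    */
            hits++;               /* n==1 (once), and n>1 (more than once).  */

        switch (hits) {           /* branch: every case, including default,  */
        case 0:  return -1;       /* must be taken.                          */
        case 1:  return 0;
        default: return 1;
        }
    }

    int main(void)
    {
        /* One test set aimed at the measures above (signs varied; n=0,1,2). */
        classify( 1,  1, 0);      /* both conditions true; loop skipped      */
        classify(-1, -1, 0);      /* if false; reaches case 0                */
        classify( 1, -1, 1);      /* b>0 seen false; loop runs once          */
        classify(-1,  1, 2);      /* a>0 seen false; loop runs twice         */
        classify( 0,  1, 2);      /* relational probe: a exactly 0           */
        classify( 1,  0, 2);      /* relational probe: b exactly 0           */
        return 0;
    }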
The case study below describes what I do when I test my own software - a
smallish addition, maybe 10,000 lines, embedded inside a much larger C
compiler.

A CASE STUDY IN COVERAGE TESTING
Brian Marick
Testing Foundations

Abstract

I used a C coverage tool to measure the quality of its own test suite.  I
wrote new tests to improve the coverage of a 2600-line segment of the tool.
I then reused and extended those tests for the next release, which included a
complete rewrite of that segment.  The experience reinforced my beliefs about
coverage-based testing:

1. A thorough test suite should achieve nearly 100% feasible coverage.
2. Adding tests for additional coverage can be cheap and effective.
3. To be effective, testing should not be a blind attempt to achieve
   coverage.  Instead, use coverage as a signal that points to weakly-tested
   parts of the specification.
4. In addition to suggesting new tests, coverage also tells you when existing
   tests aren't doing what you think, a common problem.
5. Coverage beyond branch coverage is worthwhile.
6. Even with thorough testing, expect documentation, directed inspections,
   beta testing, and customers to find bugs, especially design and
   specification bugs.

The Generic Coverage Tool

GCT is a freeware coverage tool for C programs, based on the GNU C compiler.
It measures these kinds of coverage:

- branch coverage (every branch must be taken in both directions)
- multi-condition coverage (in 'if (a && b)', both subexpressions must
  evaluate to true and false)
- loop coverage (require the loop not to be taken, to be traversed exactly
  once, and to be traversed more than once)
- relational coverage (require tests for off-by-one errors)
- routine entry and call point coverage
- race coverage (an extension to routine coverage for multiprocessing)
- weak mutation coverage (a research technique)

(For more, see [Marick92].)

The tool comes with a large regression test suite, developed in parallel with
the code, using a "design a little, test a little, code a little" approach,
much like that described in [Rettig91].  About half the original development
time was spent in test construction (with, I believe, a corresponding
reduction in the amount of frantic debugging when problems were found by
users - though of course there was some of that).

Most of the tests are targeted at particular subsystems, but they are not
unit tests.  That is, the tests invoke GCT and deduce subsystem correctness
by examining GCT's output.  Only a few routines are tested in isolation using
stubs - that's usually too expensive.  When needed, test support code was
built into GCT to expose its internal state.

In early releases, I had not measured the coverage of GCT's own test suite.
However, in planning the 1.3 release, I decided to replace the
instrumentation module with two parallel versions.  The original module was
to be retained for researchers; commercial users would use a different module
that wouldn't provide weak mutation coverage but would be superior in other
ways.  Before redoing the implementation, I wanted the test suite to be
solid, because I knew a good test suite would save implementation time.

Measuring Coverage

I used branch, loop, multi-condition, and relational coverage.  I'm not
convinced weak mutation coverage is cost-effective.  Here were my initial
results for the 2617 lines of code I planned to replace.  (The count excludes
comments, blank lines, and lines including only braces.)

BINARY BRANCH INSTRUMENTATION (402 conditions total)
    47 (11.69%) not satisfied.
    355 (88.31%) fully satisfied.
SWITCH INSTRUMENTATION (90 conditions total)
    14 (15.56%) not satisfied.
    76 (84.44%) fully satisfied.

LOOP INSTRUMENTATION (24 conditions total)
    5 (20.83%) not satisfied.
    19 (79.17%) fully satisfied.

MULTIPLE CONDITION INSTRUMENTATION (390 conditions total)
    56 (14.36%) not satisfied.
    334 (85.64%) fully satisfied.

OPERATOR INSTRUMENTATION (45 conditions total)   ;; this is relational coverage
    7 (15.56%) not satisfied.
    38 (84.44%) fully satisfied.

SUMMARY OF ALL CONDITION TYPES (951 total)
    129 (13.56%) not satisfied.
    822 (86.44%) fully satisfied.

These coverage numbers are consistent with what I've seen using black box
unit testing combined with judicious peeks into the code.  (See [Marick91].)
I do not target coverage in my test design; it's more important to
concentrate on the specification, since many important faults will be due to
omitted code [Glass81].

When the uncovered conditions were examined more closely (which took less
than an hour), it was clear that the tests were more thorough than appears
from the above.  The 129 uncovered conditions broke down as follows:

  28 were impossible to satisfy (sanity checks, loops with fixed bounds that
     can't be executed 0 times, and so on).
  46 were support code for a feature that was never implemented (because it
     turned out not to be worthwhile); these were also impossible to
     exercise.
  17 were from temporary code, inserted to work around excessive stack growth
     on embedded systems.  It was always expected to be removed, so it was
     not tested.
  24 were due to a major feature, added late, that had never had regression
     tests written for it.
  14 conditions corresponded to 10 untested minor features.

All in all, the test suite had been pleasingly thorough.

New Tests Prior to the Rewrite

I spent 4 hours adding tests for the untested major feature.  I was careful
not to concentrate on merely achieving coverage, but rather on designing
tests based on what the program was supposed to do.  Coverage is seductive -
like all metrics, it is only an approximation of what's important.  When
"making the numbers" becomes the prime focus, they're often achieved at the
expense of what they're supposed to measure.

This strategy paid off.  I found a bug in handling switches within macros.  A
test designed solely to achieve coverage would likely have missed the bug.
(That is, the uncovered conditions could have been satisfied by an easy - but
inadequate - test.)

There was another benefit.  Experience writing these tests clarified design
changes I'd planned to make anyway.  Writing tests often has this effect.
That's why it's good to design tests (and write user documentation) as early
as possible.

I spent two more hours testing the minor features.  I did not write tests for
features that were only relevant to weak mutation.

Branch coverage discovered one pseudo-bug: dead code.  A particular special
case check was incorrect.  It was testing a variable against the wrong
constant.  This check could never be true, so the special case code was never
executed.  However, the special case code turned out to have the same effect
as the normal code, so it was removed.  (This fault was most likely
introduced during earlier maintenance.)

At this point, tests written because of multi-condition, loop, and relational
coverage revealed no bugs.  My intuitive feel was that the tests were not
useless - they checked situations that might well have led to failures, but
didn't.

I reran the complete test suite overnight and rechecked coverage the next
day.
One test error was discovered; a typo caused the test to miss checking what
it was designed to test.  Rechecking took 1/2 hour.

Reusing the Test Suite

The rewrite of the instrumentation module was primarily a re-implementation
of the same specification.  All of the test suite could be reused, and there
were few new features that required new tests.  (I did go ahead and write
tests for the weak mutation features I'd earlier ignored.)  About 20% of the
development time was spent on the test suite (including new tests, revisions
to existing tests, and a major reorganization of the suite's directory
structure and controlling scripts).

The regression test suite found minor coding errors; they always do, in a
major revision like this.  It found no design flaws.  Rewriting the internal
documentation (code headers) did.  (After I finish code, I go back and revise
all the internal documentation.  The shift in focus from producing code to
explaining it to an imaginary audience invariably suggests improvements,
usually simplifications.  Since I'm a one-man company, I don't have the
luxury of team code reads.)

The revised test suite achieved 96.47% coverage.  Of 37 unsatisfied
conditions:

  27 were impossible to satisfy.
   2 were impossible to test portably (GNU C extensions).
   2 were real (though minor) omissions.
   1 was due to a test that had been misplaced in the reorganization.
   5 were IF tests that had been made redundant by the rewrite.  They were
     removed.

It took an hour to analyse the coverage results and write the needed tests.
They found no bugs.  Measuring the coverage for the augmented test suite
revealed that I'd neglected to add one test file to the test suite's
controlling script.

Other Tests

The 1.3 release also had other features, which were duly tested.  For one
feature, relational operator coverage forced the discovery of a bug.  A
coverage condition was impossible to satisfy because the code was wrong.
I've found that loop, multi-condition, and relational operator coverage are
cheap to satisfy, once you've satisfied branch coverage.  This bug was severe
enough that it alone justified the time I spent on coverage beyond branch.

Impossible conditions due to bugs happen often enough that I believe goals
like "85% coverage" are a mistake.  The problem with such goals is that you
don't look at the remaining 15%, deciding, without evidence, that they're
either impossible or not worth satisfying.  It's better - and not much more
expensive - to decide each case on its merits.

What Testing Missed

Three bugs were discovered during beta testing, one after release (so far).
I'll go into some detail, because they nicely illustrate the types of bugs
that testing tends to miss.

The first bug was a high-level design omission.  No testing technique would
force its discovery.  ("Make sure you instrument routines with a variable
number of arguments, compile them with the GNU C compiler, and do that on a
machine where GCC uses its own copy of <varargs.h>.")  This is exactly the
sort of bug that beta testing is all about.

Fixing the bug required moderately extensive changes and additions, always
dangerous just before a release.  Sure enough, the fix contained two bugs of
its own (perhaps because I rushed to meet a self-imposed deadline).

- The first was a minor design omission.  Some helpful code was added to warn
  GCC users iff they need to worry about <varargs.h>.  This code made an
  assumption that was violated in one case.
  Coverage would not force a test to detect this bug, which is of the sort
  that's fixed by changing

        if (A && B)
  to
        if (A && B && C)

  It would have been nice if GCT had told me that "condition C, which you
  should have but don't, was never false", but this is more than a coverage
  tool can reasonably be expected to do.  I found the bug by augmenting the
  acceptance test suite, which consists of instrumenting and running several
  large "real" programs.  (GCT's test suite contains mostly small programs.)
  Instrumenting a new real program did the trick.

- As part of the original fix, a particular manifest constant had to be
  replaced by another in some cases.  I missed one of the cases.  The result
  was that a few too few bytes of memory were allocated for a buffer, and
  later code could write past the end.  Existing tests did indeed force the
  bytes to be written past the end; however, this didn't cause a failure on
  my development machine (because the memory allocator rounds up).  It did
  cause a failure on a different machine.  Memory allocation bugs, like this
  one and the next, often slip past testing.

The final bug was a classic: freeing memory that was not supposed to be
freed.  None of the tests caused the memory to be reused after freeing, but a
real program did.  I can envision an implementable type of coverage that
would force detection of this bug, but it seems as though a code-read
checklist ought to be better.  I use such a checklist, but I still missed the
bug.

References

[Rettig91]  Marc Rettig, "Practical Programmer: Testing Made Palatable",
            CACM, May 1991.
[Marick91]  Brian Marick, "Experience with the Cost of Different Coverage
            Goals for Testing", Pacific Northwest Software Quality
            Conference, 1991.
[Marick92]  Brian Marick, "A Tutorial Introduction to GCT", "Generic Coverage
            Tool (GCT) User's Guide", "GCT Troubleshooting", "Using Race
            Coverage with GCT", and "Using Weak Mutation Coverage with GCT",
            Testing Foundations, 1992.
[Glass81]   Robert L. Glass, "Persistent Software Errors", IEEE Transactions
            on Software Engineering, vol. SE-7, no. 2, pp. 162-168, March
            1981.

Brian Marick, marick@cs.uiuc.edu, uiucdcs!marick
Testing Foundations:  Consulting, Training, Tools.
-------------------------------------------------------------------------
From: uunet!veritas.com!joshua (Joshua Levy)

You may be interested in ViSTA, a product made by my company.  You can send
email to our sales people at vsales@veritas.com, or call our east coast sales
person, Jerry DeBaun, at 508-624-7758.

ViSTA is a tool for gathering coverage metrics on your code as you test it.
It can tell you how much of your code is being tested by your test suite.
This can be valuable in determining how much testing still needs to be done.
ViSTA can also tell you exactly which pieces of code have been tested, so you
can write code to specifically test those parts which are not currently
tested.

We have found that using ViSTA significantly reduces the amount of time it
takes to test code.  Without coverage data, most test suites repeatedly test
the same code over and over again, wasting most of their time (both
development and execution).  The data ViSTA provides can direct testing in a
way which greatly reduces this waste.

ViSTA works on C code (application level or kernel level) on these platforms:
SunOS 4.1.x, AIX 3.x, HP 700s and 800s.  We also support several "custom"
ports, including SVR3, SVR4, SCO UNIX, Pyramid, etc.  Availability for these
ports varies.
Please give me a call or send email if you have any questions, or if I have
not clearly explained what ViSTA does.  Thanks.

Joshua Levy (joshua@veritas.com)    408-727-1222 x253 or 800-if-vista
-------------------------------------------------------------------------
From: uunet!veritas.com!paula (Paula Phelan)

Thanks for your interest in ViSTA.  I will send along marketing literature
this afternoon.  If you like what you see, give Jerry DeBaun a call at
508-624-7758; he is the Regional Sales Manager in your area.

Paula

HIGHLIGHTS

  Code coverage testing tool for developers and testers
  Static and Dynamic Analysis of new or existing code
  McCabe and Halstead Metrics
  Error Seeding (to test test-case quality)
  Selective Instrumentation
  Works on multi-threaded code
  Multi-processor development support
  Application simulation of: OS calls, I/O errors, hardware states
  Kernel simulation of: Process credentials, DKI/DDI, I/O errors
  Ability to customize simulations

OVERVIEW DESCRIPTION of VERITAS ViSTA

VERITAS ViSTA provides metric-based testing tools for application and kernel
code.  The tools inform developers, testers, and managers about test coverage
and code maintainability.  ViSTA reports on source code static metrics: it
estimates the number of test cases needed, determines the complexity of the
code, and estimates the number of errors.  ViSTA dynamic metrics identify
code segments missed by existing test cases and highlight redundant coverage.
All metric data can be displayed graphically for at-a-glance comprehension.

Error seeding is available to test the quality of test cases by verifying
whether they detect the errors they're intended to find.  ViSTA utilities can
simulate event-driven conditions to improve code coverage.  This technology
allows for testing hard-to-reach code, or code where replicating the physical
I/O resource would be prohibitively expensive.

DESCRIPTION of VERITAS ViSTA Modules

VistaTEST

The foundation of VERITAS ViSTA is VistaTEST.  This facility provides the
basic capabilities for code coverage, complexity metrics, error seeding, and
report generation.  VistaTEST supplies development teams with the basic tools
to immediately improve the detection and elimination of latent defects,
increase insight into and understanding of existing code, and reduce the cost
of software testing.  VistaTEST defines what needs to be tested and estimates
the cost of testing and the level of effort required.  An advanced
preprocessor that works with ANSI C or K&R C is provided, and it is platform
independent.

VistaTEST also provides coverage measurements by determining what sections of
the code are actually exercised by testing programs.  It provides coverage of
function entries/exits, segments, conditionals, basic blocks, paths, and
interface behaviors.  Metrics are collected to determine the complexity of
the code, the level of coverage with existing tests, and the number of tests
needed for adequate coverage.  Run-time monitors support coverage statistics
for library, application, daemon, and kernel code.

VistaSIM

VistaSIM offers state-of-the-art software validation for application and
kernel developers by providing error-handling logic, simulating behaviors at
interface points, and creating a more complete test scenario.  It simulates
all external behaviors at function interface points, including function
return values, argument return values, and global variable values.  The UNIX
operating system calls are included, and other simulation libraries may be
extended by the user.
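[VERITAS would have to say exactly how VistaSIM hooks an interface; the
fragment below is only my own illustration of the general idea of simulating
an OS call to force an error path.  The name xread, the "fail the Nth call"
knob, and the build arrangement are all hypothetical.]

    /* Illustration of the interface-simulation idea only (not VistaSIM
     * itself).  The code under test calls xread() wherever it would call
     * read(); in the production build xread() is a one-line wrapper around
     * read(), and in the test build it is this fault-injecting stand-in. */
    #include <errno.h>
    #include <unistd.h>

    int xread_fail_after = 3;           /* test knob: make the 3rd call fail */

    ssize_t xread(int fd, void *buf, size_t count)
    {
        if (--xread_fail_after <= 0) {
            errno = EIO;                /* simulate a device error            */
            return -1;                  /* drive the caller's error handling  */
        }
        return read(fd, buf, count);    /* otherwise, behave normally         */
    }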
ORDERING INFORMATION

Contact VERITAS sales at 408-727-1222 or vsales@veritas.com.  The VistaTEST
list price is $1800; quantity pricing is available on request.

TRADEMARK ATTRIBUTES

VERITAS ViSTA, VistaTEST, VistaSIM and VistaKERNEL are trademarks of VERITAS
Software Corporation.

--
Paula Phelan
VERITAS Software
4800 Great America Parkway #420
Santa Clara, CA 95054
+1 408-727-1222 x241
paula@veritas.com
...{apple,pyramid}!veritas!paula
---------------------------------------------------------------------------