- Newsgroups: comp.benchmarks
- Path: sparky!uunet!sun-barr!ames!data.nas.nasa.gov!amelia!eugene
- From: eugene@amelia.nas.nasa.gov (Eugene N. Miya)
- Subject: [l/m 7/24/92] SPEC info sources (16/28) c.be. FAQ
- Keywords: who, what, where, when, why, how
- Sender: news@nas.nasa.gov (News Administrator)
- Organization: NAS Program, NASA Ames Research Center, Moffett Field, CA
- Date: Sun, 16 Aug 92 11:25:08 GMT
- Message-ID: <1992Aug16.112508.27772@nas.nasa.gov>
- Reply-To: eugene@amelia.nas.nasa.gov (Eugene N. Miya)
- Lines: 279
-
- 16 SPEC <This panel>
- 17 Benchmark invalidation methods
- 18
- 19 WPI Benchmark
- 20 Equivalence
- 21 TPC
- 22
- 23
- 24
- 25 Ridiculously short benchmarks
- 26 Other miscellaneous benchmarks
- 27
- 28 References
- 1 Introduction to the FAQ chain and netiquette
- 2
- 3 PERFECT Club
- 4
- 5 Performance Metrics
- 6
- 7 Music to benchmark by
- 8 Benchmark types
- 9 Linpack
- 10
- 11
- 12 Measurement Environments
- 13
- 14
- 15 12 Ways to Fool the Mass with Benchmarks
-
- Standard Performance Evaluation Corporation
- [SPEC]
- c/o NCGA
- 2722 Merrilee Drive
- Suite 200
- Fairfax, VA, 22031
-
- Ph: 703-698-9600 x318
- FAX: 703-560-2752
- spec-ncga@cup.portal.com (NOT FREE)
- SPEC is a trademark of the Standard Performance Evaluation Corporation.
-
- SPEC, the Standard Performance Evaluation Corporation, is a non-profit
- corporation formed to "establish, maintain and endorse a standardized
- set of relevant benchmarks that can be applied to the newest generation
- of high-performance computers".
- The founders of this organization believe that the user community
- will benefit greatly from an objective series of applications-oriented
- tests, which can serve as common reference points and be considered
- during the evaluation process. While no one benchmark can fully
- characterize overall system performance, the results of a variety
- of realistic benchmarks can give valuable insight into expected
- real performance.
- The members of SPEC (at current annual dues of $5000) are
- AT&T/NCR, Bull S.A., Compaq, CDC, DG, DEC, Fujitsu, HP, IBM,
- Intel, Intergraph, MIPS, Motorola, Prime, Siemens, Silicon Graphics,
- Solbourne, Sun, Unisys.
-
- SPEC basically does 2 things:
-
- 1) Puts together suites of benchmarks that are generally available
- in source form.
- These benchmarks are intended to measure something meaningful and
- are extensively tested for portability before release.
- There are strict rules on how these benchmarks must be run
- and how results must be reported for the trademarked results
- (SPECmark, SPECint, and SPECfp for the current CPU benchmarks).
- (SPECmark89, SPECint89 and SPECfp89 are the metrics from the SPEC Release 1
- benchmark suite. Two suites, CINT92 and CFP92, with metrics SPECint92 and
- SPECfp92, were designed to replace SPEC Release 1.)
-
- 2) Publishes SPEC Benchmark results in a quarterly newsletter.
-
- There are currently four suites of benchmarks.
-
- 1) CINT92
-
- This is a suite of 6 compute intensive (CPU, memory system, compiler) integer
- benchmarks intended to measure "engine horsepower" on applications of
- realistic size and content.
- The results are expressed as the ratio of the wall clock time to
- execute the benchmark compared to a (fixed) "SPEC reference time"
- (which was chosen early-on as the execution time on a VAX 11/780).
- The results are reported as the geometric mean of the individual ratios:
-
- SPECint92 -- the geometric mean of all 6 integer benchmarks.
-
- A separate measure known as SPECrate_int92 will be defined by the end of
- June 92 to use these benchmarks to measure the processing capacity of a given
- system (i.e., not just how fast one compute-intensive job can be done but how
- many tasks can be accomplished).
-
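The SPECratio arithmetic above can be sketched in a few lines of Python. Only the rule itself (ratio = reference time / measured time, composite = geometric mean) comes from the text; all the timings below are invented for illustration.

```python
import math

def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    return math.prod(values) ** (1.0 / len(values))

# Hypothetical wall-clock times in seconds.  The real SPEC reference
# times are the fixed execution times on a VAX 11/780; these numbers
# are made up for illustration only.
ref_times = [2270.0, 6210.0, 1100.0, 2770.0, 4530.0, 5460.0]
run_times = [61.0, 151.0, 26.0, 73.0, 111.0, 135.0]

# Each SPECratio = reference time / measured time (bigger is faster).
ratios = [ref / run for ref, run in zip(ref_times, run_times)]

# SPECint92 is the geometric mean of the 6 individual ratios.
specint92 = geomean(ratios)
```

One property of the geometric mean worth noting: doubling any single ratio multiplies the composite by the same factor regardless of that ratio's magnitude, so no one benchmark can dominate the result.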
- 2) CFP92
-
- This is a suite of 14 compute intensive (CPU, memory system, compiler)
- floating point benchmarks (12 Fortran, 2 C) intended to measure
- "engine horsepower" on applications of realistic size and content.
- The results are expressed as the ratio of the wall clock time to
- execute the benchmark compared to a (fixed) "SPEC reference time"
- (which was chosen early-on as the execution time on a VAX 11/780).
- The results are reported as the geometric mean of the individual ratios:
-
- SPECfp92 -- the geometric mean of all 14 floating point benchmarks.
-
- A separate measure known as SPECrate_fp92 will be defined by the end of
- June 92 to use these benchmarks to measure the processing capacity of a given
- system (i.e., not just how fast one compute-intensive job can be done but how
- many tasks can be accomplished).
-
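The speed-versus-capacity distinction can be made concrete with a small sketch. The exact SPECrate formula was still being defined when this was written, so the function below is only one plausible reading of "how many tasks can be accomplished": run several copies concurrently and count completed work, normalized so that one unit of work equals one run on the reference machine. All numbers are invented.

```python
def rate(copies, ref_time, elapsed):
    """Normalized throughput: one unit of work is one run of the
    benchmark on the reference machine, so the result counts how many
    'reference machines' worth of work get done per unit time."""
    return copies * ref_time / elapsed

# Hypothetical: 4 concurrent copies of a job whose reference time is
# 2270 s, all finishing within 100 s of wall-clock time.
capacity = rate(4, 2270.0, 100.0)
```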
- 3) SPEC Benchmark suite 1.2.
- This is a suite of 10 CPU (and memory system) intensive benchmarks
- intended to measure "engine horsepower" on some applications of
- realistic size and content.
- This release is in the process of being replaced by CINT92 and CFP92.
- The results are expressed as the ratio of the wall clock time to
- execute the benchmark compared to a (fixed) "SPEC reference time"
- (which was chosen early-on as the execution time on a VAX 11/780).
- The results are reported as the geometric mean of the individual ratios:
- SPECmark89 -- all 10
- SPECint89 -- the 4 benchmarks that are integer-operation intensive
- SPECfp89 -- the 6 benchmarks that are floating-point intensive
- (in this case that means > 1% of executed instr are FP instr).
- A separate measure known as SPECthruput may be reported for
- multiprocessor systems.
- This attempts to measure the actual CPU/memory performance
- available from the multiple processors.
- In effect, it shows up any benchmark slowdown due to resource
- contention in the MP system.
-
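SPECthruput's purpose, exposing slowdown from resource contention, comes down to one line of arithmetic: compare the aggregate throughput of the MP system against N times what a single processor achieves alone. The sketch below uses invented numbers; only the comparison itself is from the text.

```python
def mp_efficiency(n_cpus, uni_rate, mp_rate):
    """Fraction of ideal linear scaling actually achieved: 1.0 means no
    slowdown; anything lower is lost to resource contention."""
    return mp_rate / (n_cpus * uni_rate)

# Hypothetical: 4 processors that each do 20 jobs/hour alone, but only
# 64 jobs/hour together, have lost 20% of their ideal capacity.
eff = mp_efficiency(4, 20.0, 64.0)
```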
- 4) SPEC SDM 1.0 (Systems Development Multitasking) (announced 14-May-91).
- This consists of two benchmarks that present a multiprocess
- load to the system. The results are more graphical than the
- CPU benchmarks and are not as easily reduced to single numbers,
- but the marketing numbers used will be the two (different)
- peak throughput numbers, measured in scripts/hour.
-
- SPEC is currently working on an NFS benchmark (LADDIS) expected in the second
- half of 1992.
-
- The benchmark sources are generally available -- but not free.
- SPEC is charging separately for the two benchmark suites.
- The cost of the source tapes is intended to support the administrative
- costs of the corporation -- making tapes, answering questions about
- the benchmarks, and so on.
-
- SPEC membership costs:
-
- Initiation $1,000
- In order to encourage new memberships, SPEC has lowered the
- initiation fee to $ 1000 (full members) or $ 500 (associates).
- Annual Dues $ 5,000
-
- SPEC Associate:
-
- Initiation $500
- Annual Dues $1,000
-
- To qualify as a SPEC associate, you must be an accredited educational
- institution or a non-profit organization. An associate has no voting
- privileges. An associate will receive the newsletter and the benchmark
- tapes as they are available. In addition, an associate will have early
- access to benchmarks under development so that an associate may act in an
- advisory capacity to SPEC.
-
- SPEC Benchmark Suite Tape/Newsletter Subscriptions
-
- Pricing
- CINT92 $425 (until 8/1/92,$300 if you have a SPEC 1 license)
- CFP92 $575 (until 8/1/92,$400 if you have a SPEC 1 license)
- CINT92&CFP92 $900 (until 8/1/92,$600 if you have a SPEC 1 license)
- Release1.2b $300 (QIC 24 tape format)
- SDM $1450
- Newsletter $550 (1 year subscription, 4 issues)
-
- Q. Where can I get SPEC Results?
-
- A. Some SPEC results are tabulated (very unofficially) and maintained
- for anonymous ftp access on perelandra.cms.udel.edu (128.175.74.1)
- in the directory bench/spec/ .
- The file spec.sc contains the raw data and some derived quantities.
- This is an `sc' format spreadsheet file, and is not easy to read
- without using the `sc' spreadsheet calculator.
- The file spec.print contains a formatted subset of the data.
-
- If you need to get `sc' as well, it is available in pub/Lang/sc614.tar.Z
-
- REQUEST: I have no official source for these results. Your help
- in keeping them up to date is requested. Just send e-mail to
- "mccalpin@perelandra.cms.udel.edu" with the following info:
- (1) Exact model of machine
- (2) SPECIFIC citation of where you got the data
- (3) Either the 10 SPEC ratios or the 10 SPEC times
- Unless you request otherwise, I will cite your personal
- communication in the notes for the new results.
-
-
- ^ A
- s / \ r
- m / \ c
- h / \ h
- t / \ i
- i / \ t
- r / \ e
- o / \ c
- g / \ t
- l / \ u
- A / \ r
- <_____________________> e
- Language
-
-
- You might want to mention LADDIS in your SPEC listing even
- though LADDIS is not yet available from SPEC.
- [See the 11/25/91 issue of "Unix Today!" for details. :-))) ]
-
- Might you still want to write an editorial for UT! ?
-
- ---jason jason@cs.utexas.edu
-
- Article 2117 of comp.benchmarks:
- From: weicker@sinix.UUCP (Reinhold Weicker)
- Newsgroups: comp.benchmarks
- Subject: Update on SPEC
- Message-ID: <1992Jul22.095039.23362@sinix.UUCP>
- Date: 22 Jul 92 09:50:39 GMT
- Organization: SNI AG Muenchen, STO XS
- Lines: 58
-
- This is an update (mostly minor corrections) to the recent (July 16)
- posting of Answers to Frequently Asked Questions about SPEC.
-
-
- 2) Phase-Out of the SPEC Benchmark Suite 1.2 (CPU-Benchmarks of 1989):
-
- Since SPEC now has, with CINT92 and CFP92, better and larger
- benchmark suites than the old suite 1.2 for measuring
- CPU/Memory/Compiler performance, it strongly encourages use of the
- new suites (with the averages SPECint92 and SPECfp92 for speed, and
- SPECrate_int92 and SPECrate_fp92 for compute-intensive throughput).
-
- Therefore, at the last Steering Committee meeting, SPEC has
- adopted a plan for the phase-out of the old benchmarks:
- - Result pages labeled with "Benchmark obsolete" from Jan. 1993 on,
- - No more tape sales of Rel. 1.2b after Jan. 1993,
- - No more result publications in SPEC Newsletter after June 1993.
-
- 3) SPEC seeks new members:
-
- - You can get first-hand experience in an area that is widely
- neglected in academia but nevertheless very important in the "real
- world"; and there *are* interesting research questions in this
- area.
- - SPEC can perhaps overcome the perception that may exist: "These are
- just a bunch of computer vendors, and who knows how meaningful
- their results are." Presently, we are only vendors, but membership
- is not restricted to vendors. Others can bring in new aspects, are
- (hopefully) unbiased (and perceived as unbiased), can accompany
- SPEC's practical work with systematic research, etc.
-
- - Reinhold Weicker
-
- I have been involved in benchmarking since I wrote Dhrystone; for
- the last 1 1/2 years I have been representing my company (Siemens
- Nixdorf) within SPEC.
-
- Opinions expressed are my own and do not represent SPEC or my company.
-
- P.S. I post this message from a different machine than I use normally.
- Therefore, if you want to send mail to me, do not use the "r"
- command, use the explicit address "weicker@ztivax.zfe.siemens.de".
- --
- Reinhold P. Weicker, Siemens Nixdorf Information Systems, STM OS 32
- Address: Otto-Hahn-Ring 6, W-8000 Muenchen 83, Germany
- E-Mail: weicker@ztivax.zfe.siemens.de
- Phone: +49-89-636-42436 (8-17 Central European Time)
-
-
-