Newsgroups: comp.benchmarks
Path: sparky!uunet!stanford.edu!leland.Stanford.EDU!dhinds
From: dhinds@leland.Stanford.EDU (David Hinds)
Subject: Re: Geometric Mean or Median
Message-ID: <1992Aug14.165247.6558@leland.Stanford.EDU>
Sender: news@leland.Stanford.EDU (Mr News)
Organization: DSG, Stanford University, CA 94305, USA
References: <1992Aug12.172209.3108@nas.nasa.gov> <Aug14.142126.38458@yuma.ACNS.ColoState.EDU> <1992Aug14.155857.6561@riacs.edu>
Distribution: comp.benchmarks
Date: Fri, 14 Aug 92 16:52:47 GMT
Lines: 23

In article <1992Aug14.155857.6561@riacs.edu> lamaster@pioneer.arc.nasa.gov (Hugh LaMaster) writes:
>In article <Aug14.142126.38458@yuma.ACNS.ColoState.EDU>, shafer@CS.ColoState.EDU (spencer shafer) writes:
>|>
>|>
>|> Having joined this in midstream, I risk repeating some information. If
>|> so, my apologies in advance. A discussion of this, and an offered proof
>|> of the geometric mean as preferred method is in the March 1986 issue of
>|> Communications of the ACM, "How Not to Lie With Statistics: The Correct
>|> Way to Summarize Benchmark Results," by Fleming and Wallace.

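For what it's worth, the consistency property Fleming and Wallace prove
is easy to see with a toy example.  The timings below are made up, not
real benchmark data: with the arithmetic mean, which machine looks
faster can depend on which machine you normalize to, while the
geometric mean gives the same answer regardless of the reference.

    # Made-up timings (seconds per test) for a reference machine and
    # two others; none of this is real benchmark data.
    times = {
        "ref": [10.0, 100.0],
        "A":   [ 5.0, 200.0],
        "B":   [20.0,  50.0],
    }

    def ratios(machine, base):
        # Speedup of `machine` relative to `base` on each test.
        return [tb / tm for tm, tb in zip(times[machine], times[base])]

    def amean(xs):
        return sum(xs) / len(xs)

    def gmean(xs):
        prod = 1.0
        for x in xs:
            prod *= x
        return prod ** (1.0 / len(xs))

    for base in ("ref", "A", "B"):
        print("normalized to", base)
        for m in ("A", "B"):
            r = ratios(m, base)
            print("  %s: arithmetic %.3f  geometric %.3f"
                  % (m, amean(r), gmean(r)))

With these numbers the arithmetic mean calls A and B a tie relative to
the reference, makes B look faster when normalized to A, and makes A
look faster when normalized to B; the geometric mean calls them even in
every case.
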
Has anyone tried to take all the available SPEC numbers and do a factor
analysis, to see if there is a statistically meaningful small set of
numbers that can be used to predict the performance on all the tests?
One would hope that the factors that fell out would correspond naturally
to different architectural parameters -- scalar integer speed, scalar
floating point speed, vector performance, etc.  You would also get the
weight of each factor for each SPEC test, and users could then estimate
the performance of their own codes on new machines by running those
codes on a few old machines and using the published SPEC factors for
those machines to calculate the weights for their codes.
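
As a rough sketch of what that might look like (the numbers are
synthetic, and the machine count, test count, factor count, and the
choice of a stock scikit-learn factor-analysis routine are stand-ins of
my own, not anything SPEC publishes):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Synthetic machines-by-tests matrix of SPEC-like ratios.  Each
    # machine gets three hidden capabilities (think scalar integer,
    # scalar FP, memory/vector) and each test loads on them with
    # fixed weights plus a little noise.
    rng = np.random.default_rng(0)
    n_machines, n_tests, n_factors = 20, 10, 3
    hidden = rng.lognormal(sigma=0.4, size=(n_machines, n_factors))
    true_loadings = rng.uniform(0.2, 1.0, size=(n_factors, n_tests))
    noise = rng.normal(scale=0.05, size=(n_machines, n_tests))
    spec_ratios = hidden @ true_loadings + noise

    # Factor-analyze in log space, since benchmark ratios combine
    # multiplicatively rather than additively.
    X = np.log(spec_ratios)
    fa = FactorAnalysis(n_components=n_factors)
    scores = fa.fit_transform(X)      # per-machine factor scores
    loadings = fa.components_         # per-test weight on each factor

    # "Run your code on a few old machines": regress the measured
    # times onto those machines' factor scores to get weights for the
    # code, then predict the remaining machines from their scores.
    my_code = X[:5] @ rng.uniform(size=n_tests)   # stand-in measurements
    design = np.column_stack([scores[:5], np.ones(5)])
    coef, _, _, _ = np.linalg.lstsq(design, my_code, rcond=None)
    unseen = np.column_stack([scores[5:], np.ones(n_machines - 5)])
    print("code weights:", coef[:-1])
    print("predicted log-performance on unseen machines:", unseen @ coef)

The regression at the end is the "run on a few old machines" step: the
fitted weights say how much the code leans on each factor, and the
prediction for an untried machine needs only that machine's published
factor scores.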

- David Hinds
dhinds@allegro.stanford.edu
