Path: sparky!uunet!gatech!purdue!mentor.cc.purdue.edu!pop.stat.purdue.edu!hrubin
From: hrubin@pop.stat.purdue.edu (Herman Rubin)
Newsgroups: sci.math.stat
Subject: Re: Testing for Normality
Message-ID: <BuJ7v8.EKM@mentor.cc.purdue.edu>
Date: 13 Sep 92 19:37:55 GMT
References: <1992Sep10.124312.4391@cognos.com> <11SEP199206534334@amarna.gsfc.nasa.gov> <WVENABLE.92Sep12234508@algona.stats.adelaide.edu.au>
Sender: news@mentor.cc.purdue.edu (USENET News)
Organization: Purdue University Statistics Department
Lines: 45

In article <WVENABLE.92Sep12234508@algona.stats.adelaide.edu.au> wvenable@algona.stats.adelaide.edu.au (Bill Venables) writes:
>>>>>> "Charles" == Charles Packer <packer@amarna.gsfc.nasa.gov> writes:

>Charles> Why not use a traditional chi-square test?

>I can think of two possible reasons:

>1. It requires an arbitrary partition of the range into panels, *before*
>   the sample comes to hand. (In fact it really does not test normality as
>   such, but rather that the grouped sample distribution agrees with a
>   similarly grouped normal.) Arbitrariness always comes at some cost.

>2. In seeking to get some power against a very wide class of alternatives
>   it manages only to achieve low power against any subclass, including the
>   subclass of practically important alternatives. In this sense it is not
>   well focused enough.

>[BTW I would be interested in a Bayesian reaction to this question. It always
>seemed to me that tests of fit could be rather an embarrassment to a Bayesian.]

Even from the classical standpoint, the traditional chi-squared test is
wrong. When the parameters are estimated from the ungrouped data, the
distribution of the chi-squared statistic does not get the full reduction
in degrees of freedom for the estimated parameters. This was proved by
Chernoff and Lehmann in 1953 for fixed partitions, and by A. R. Roy in his
dissertation under me in 1954 for data-defined partitions; the theory,
using "good" estimators under the null hypothesis, shows that it does not
matter how the partitions are defined. Some of the papers of D. H. Moore
from around 1970 provide a readable summary.

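For anyone who wants to see the effect rather than take it on faith, here
is a minimal Monte Carlo sketch (the sample size, the ten equiprobable
cells, and the use of normal maximum-likelihood estimates from the raw
data are purely illustrative choices): when the parameters are fitted to
the ungrouped data, the Pearson statistic over k cells is distributed
between chi-squared with k-3 and with k-1 degrees of freedom, so referring
it to the usual k-3 distribution rejects a true null too often.

# Monte Carlo sketch of the Chernoff-Lehmann effect (illustrative setup).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, reps = 200, 10, 2000
cuts = stats.norm.ppf(np.arange(1, k) / k)      # equiprobable cells under N(0,1)

stat = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)                      # the null hypothesis is true
    mu, sd = x.mean(), x.std()                  # MLEs from the UNGROUPED data
    edges = np.concatenate(([-np.inf], cuts, [np.inf]))
    p = np.diff(stats.norm.cdf(edges, mu, sd))  # fitted cell probabilities
    obs = np.bincount(np.searchsorted(cuts, x), minlength=k)
    stat[r] = np.sum((obs - n * p) ** 2 / (n * p))

for df in (k - 3, k - 1):                       # full vs. no reduction in df
    rate = np.mean(stat > stats.chi2.ppf(0.95, df))
    print("nominal 5% test with", df, "df rejects at rate", round(rate, 3))
# The k-3 rate typically comes out above 0.05: the full reduction in
# degrees of freedom is not appropriate when the parameters are fitted
# to the raw data.
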
The power is VERY low against any reasonable alternative. The reason is
that the test ignores the fact that adjacent intervals are likely to
deviate from the null in the same direction. Tests such as the
Kolmogorov-Smirnov or Kuiper statistics do a far better job.

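A rough power comparison in the same spirit (the shifted-normal
alternative, the simple null with known parameters, and the twenty
equiprobable cells are again purely illustrative choices): a location
shift pushes neighbouring cells away from the null in the same direction,
and the Kolmogorov-Smirnov statistic, which accumulates those signed
deviations through the empirical CDF, can exploit this, while the Pearson
statistic squares each cell deviation separately.

# Rough Monte Carlo power comparison: Pearson chi-squared vs. the K-S test
# for a simple null N(0,1) against the shifted alternative N(0.3, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k, reps = 100, 20, 2000
cuts = stats.norm.ppf(np.arange(1, k) / k)      # equiprobable cells under N(0,1)
crit = stats.chi2.ppf(0.95, k - 1)              # simple null, so k-1 df is correct

rej_chi2 = rej_ks = 0
for r in range(reps):
    x = rng.normal(loc=0.3, size=n)             # smooth location-shift alternative
    obs = np.bincount(np.searchsorted(cuts, x), minlength=k)
    rej_chi2 += np.sum((obs - n / k) ** 2 / (n / k)) > crit
    rej_ks += stats.kstest(x, "norm").pvalue < 0.05

# Empirical rejection rates (power) of the two nominal 5% tests.
print("chi-squared power:       ", rej_chi2 / reps)
print("Kolmogorov-Smirnov power:", rej_ks / reps)
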
For a discussion of testing from a parametric robust Bayesian viewpoint,
readers may wish to look at my paper with Sethuraman in Sankhya, 1965, on
the subject. For at least one "real" problem, that of looking for
concentrated contamination of a distribution, this can be carried out
approximately in the infinite-dimensional version. But one thing to keep
in mind: the higher the dimension, the more important the assumptions.
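
As a toy finite-dimensional illustration of what a concentrated
contamination alternative looks like (the mixture form, the fixed
contamination fraction, the clump width, and the grid prior below are
purely illustrative and are not the construction in that paper), one can
compare the marginal likelihood of a pure normal against a normal with a
narrow clump at an unknown location:

# Toy Bayes factor: null N(0,1) versus (1-eps)*N(0,1) + eps*N(theta, tau^2),
# a narrow clump at an unknown location theta with a uniform grid prior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, eps, tau = 200, 0.10, 0.10
grid = np.linspace(-3.0, 3.0, 121)                  # prior support for theta
log_prior = np.full(grid.size, -np.log(grid.size))  # uniform prior weights

# simulate data that really do contain a clump near theta = 1.5
clumped = rng.random(n) < eps
x = np.where(clumped, rng.normal(1.5, tau, n), rng.normal(0.0, 1.0, n))

log_null = stats.norm.logpdf(x).sum()

# log marginal likelihood of the alternative, summing theta over the grid
loglik = np.array([
    np.log((1 - eps) * stats.norm.pdf(x) + eps * stats.norm.pdf(x, th, tau)).sum()
    for th in grid
])
m = (log_prior + loglik).max()
log_alt = m + np.log(np.exp(log_prior + loglik - m).sum())

print("log Bayes factor, contamination vs. pure normal:", log_alt - log_null)
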
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907-1399
Phone: (317) 494-6054
hrubin@pop.stat.purdue.edu (Internet, bitnet)
{purdue,pur-ee}!pop.stat!hrubin (UUCP)