- Path: sparky!uunet!stanford.edu!lll-winken!news.larc.nasa.gov!darwin.sura.net!paladin.american.edu!auvm!FRANKSTON.COM!MEREDITH_WARSHAW
- From: Meredith_Warshaw@FRANKSTON.COM
- Newsgroups: bit.listserv.stat-l
- Subject: Re: "proving" no difference
- Message-ID: <199210131532.AA16080@world.std.com>
- Date: 13 Oct 92 15:29:00 GMT
- Sender: "STATISTICAL CONSULTING" <STAT-L@MCGILL1.BITNET>
- Lines: 22
- Comments: Gated by NETNEWS@AUVM.AMERICAN.EDU
-
- Andy Taylor brings up two important questions:
- "How best can we use statistical inference to "prove"
- (give strong evidence for) the _absence_ of a difference or effect?
-
- As Meredith says, clients/authors often want to use a nonsignificant test
- (or better, a P-value >> 0.05) as evidence that two groups don't differ
- (or the regression slope is 0, or whatever). What can/should they do
- instead?"
-
-
- If there is a nonsignificant p-value, it should be reported as just that.
- Then, if we believe there would have been adequate power to find a
- meaningful difference, and any differences found were trivial, we can
- state just that. If the differences found are large enough that they
- would have been of interest had they passed the "p-value test", and we
- believe the problem is lack of power, then we can state that as well.
- This is where you then go and look for funding to pursue the result more
- properly!
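-
- For what it's worth, the "adequate power" judgment above can be put in
- numbers. Here is a minimal sketch of a power calculation for a two-sample
- comparison using a normal approximation (the function name, parameters,
- and use of Python are my own illustration, not anything from the original
- discussion):
-
- ```python
- from statistics import NormalDist
-
- def two_sample_power(delta, sd, n_per_group, alpha=0.05):
-     """Approximate power of a two-sided two-sample z-test to detect a
-     true mean difference `delta`, given a common within-group standard
-     deviation `sd` and equal group sizes (normal approximation)."""
-     z = NormalDist()
-     se = sd * (2.0 / n_per_group) ** 0.5       # SE of the difference in means
-     z_crit = z.inv_cdf(1 - alpha / 2)          # two-sided critical value
-     shift = abs(delta) / se                    # standardized true difference
-     # P(reject) under the alternative: both rejection tails
-     return z.cdf(shift - z_crit) + z.cdf(-shift - z_crit)
-
- # E.g., with sd = 1 and n = 64 per group, a difference of 0.5 is
- # detected with roughly 80% power, so a nonsignificant result with only
- # trivial observed differences carries real evidential weight there.
- print(two_sample_power(delta=0.5, sd=1.0, n_per_group=64))
- ```
-
- If this comes out low for the smallest difference that would matter, a
- null result says little either way, which is the "go look for funding"
- case.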
-