- Newsgroups: sci.math.stat
- Path: sparky!uunet!zaphod.mps.ohio-state.edu!magnus.acs.ohio-state.edu!regeorge
- From: regeorge@magnus.acs.ohio-state.edu (Robert E George)
- Subject: Re: Fwd: Standard Deviation.
- Message-ID: <1992Aug17.125306.9509@magnus.acs.ohio-state.edu>
- Sender: news@magnus.acs.ohio-state.edu
- Nntp-Posting-Host: bottom.magnus.acs.ohio-state.edu
- Organization: The Ohio State University
- References: <seX2yRq00Uh785H2EB@andre <1992Aug14.231916.23479@magnus.acs.ohio-s
- Date: Mon, 17 Aug 1992 12:53:06 GMT
- Lines: 40
-
- In article <1992Aug16.212245.27577@mailhost.ocs.mq.edu.au>
- wskelly@laurel.ocs.mq.edu.au (William Skelly) writes:
- [deletions]
- >
- >This and other postings indicate that there is a relationship between
- >sample size and the estimated variance (of the population), which is
- >positive and always an underestimate. What is the limit, or the point
- >at which an increasing sample size no longer improves the estimate
- >of the population's variance?
- (1) The statistic T = sum_{i=1}^{n} (X_i - Xbar)^2 has expectation
- (n-1)*sigma^2, where sigma^2 denotes the population variance, *for any
- value of n*. Therefore T/(n-1) is always unbiased for sigma^2, and T/n
- will always have bias -sigma^2/n regardless of the value of n.
-
- Expectation is a linear operator, so E[T/c] = E[T]/c for any constant c.
- Note that the bias --> 0 as n gets large, but T/n is biased for all
- finite n.
-
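- (A quick numerical check, not in the original post: a small Python
- simulation of point (1), assuming numpy is available, showing T/(n-1)
- coming out unbiased while T/n underestimates sigma^2 by about sigma^2/n.)
- 
-     import numpy as np
- 
-     rng = np.random.default_rng(0)
-     sigma2 = 4.0        # true population variance
-     n = 10              # sample size
-     reps = 200000       # number of simulated samples
- 
-     # each row is one sample of size n; T is the sum of squared deviations
-     x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
-     T = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
- 
-     print("mean of T/(n-1):", T.mean() / (n - 1))  # ~ 4.0, unbiased
-     print("mean of T/n    :", T.mean() / n)        # ~ 3.6, bias ~ -sigma2/n
- 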
- (2) The variance of the unbiased estimator T/(n-1) is O(1/n), so taking
- a large sample will be beneficial in the sense of leading to an estimator
- with smaller variance.
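- 
- (Likewise a sketch of mine for point (2), again assuming Python/numpy:
- the spread of T/(n-1) across repeated samples falls off roughly like
- 1/n; for normal data it is exactly 2*sigma^4/(n-1).)
- 
-     import numpy as np
- 
-     rng = np.random.default_rng(1)
-     sigma2 = 4.0
-     reps = 100000
- 
-     for n in (10, 40, 160):
-         x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
-         s2 = x.var(axis=1, ddof=1)   # T/(n-1) for each simulated sample
-         print(n, s2.var())           # drops by roughly 4x as n grows 4x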
-
- >Can this be tested by taking samples of a sample (the latter sample
- >being elevated to the status of population)?
-
- (3) I'm not clear on what you mean here; this sounds something like
- bootstrapping / jackknifing (well-known words for well-known techniques
- which are not well understood. . . )
-
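- (For what it's worth, a bare-bones bootstrap sketch of mine, assuming
- Python/numpy and a made-up stand-in sample: resample the one sample in
- hand with replacement and see how much the variance estimate moves
- around.)
- 
-     import numpy as np
- 
-     rng = np.random.default_rng(2)
-     sample = rng.normal(0.0, 2.0, size=30)   # stand-in for observed data
- 
-     # recompute the unbiased variance estimate on 2000 resamples
-     boot = np.array([
-         rng.choice(sample, size=sample.size, replace=True).var(ddof=1)
-         for _ in range(2000)
-     ])
-     print("bootstrap std. error of the variance estimate:", boot.std())
- 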
- Robert George
- (speaking only for myself)
-