Newsgroups: sci.math.stat
Path: sparky!uunet!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!spool.mu.edu!umn.edu!thompson
From: thompson@atlas.socsci.umn.edu (T. Scott Thompson)
Subject: Re: Least Square Errors
Message-ID: <thompson.716095751@daphne.socsci.umn.edu>
Sender: news@news2.cis.umn.edu (Usenet News Administration)
Nntp-Posting-Host: daphne.socsci.umn.edu
Reply-To: thompson@atlas.socsci.umn.edu
Organization: Economics Department, University of Minnesota
References: <1992Sep9.150541.15735@cbfsb.cb.att.com>
Date: Thu, 10 Sep 1992 03:29:11 GMT
Lines: 61

rizzo@cbnewsf.cb.att.com (anthony.r.rizzo) writes:

>This is probably something that's a piece of cake for all you
>stat-pros out there, but for a mere engineer the question requires
>some thought and, possibly, some help.

Such problems always require thought, even for stat "pros".

>I have experimental data, a calibration curve of sorts, for the
>thermal output of a strain gauge. I've fitted a 4th degree polynomial
>to the data, by the method of least square errors, and I'm using
>the polynomial to correct strain gauge readings taken at various
>temperatures. My quandary is that the polynomial does not pass
>through the one point of which I'm dead certain, (20,0). The instrument
>with which the data were collected was zeroed at 20 C. So, the
>curve, ideally, should pass through (20,0).

To answer a question like this you must think carefully about why
there are errors in the data to begin with. You have already done some
of this, since you tell us that you are "dead certain" that (20,0) is
on the curve. But least squares (or constrained least squares)
implicitly assumes that all of the other errors have equal variance.
Given your description of the problem, I suspect it is more reasonable
to assume that points close to the "zeroing" point have smaller errors
than points far away from it.

If that is the case, then you should consider weighting observations
taken at temperatures close to 20 C more heavily than observations
elsewhere in whatever statistical procedure you employ. (However,
going overboard in this respect exposes you to excessive dependence on
just a few of the observations.)
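For concreteness, here is one way such a weighting might look. The
calibration data, the "true" curve, and the noise model below are all
invented for illustration; the point is only that numpy.polyfit accepts
per-observation weights, each of which multiplies that observation's
residual before squaring, so weights proportional to 1/(error standard
deviation) down-weight the far-from-20-C points:

```python
import numpy as np

# Hypothetical calibration data: thermal output vs. temperature (deg C).
# The "true" curve and the noise model are invented for illustration.
rng = np.random.default_rng(0)
temps = np.linspace(-20.0, 120.0, 30)
true_output = 1e-6 * (temps - 20.0) ** 3 - 0.01 * (temps - 20.0)
noise_sd = 0.02 + 0.002 * np.abs(temps - 20.0)  # errors grow away from 20 C
readings = true_output + rng.normal(scale=noise_sd)

# Down-weight observations far from the 20 C zeroing point.  polyfit
# multiplies each residual by w[i] before squaring, so w should be
# proportional to 1 / (error standard deviation).
weights = 1.0 / noise_sd

coeffs = np.polyfit(temps, readings, deg=4, w=weights)
poly = np.poly1d(coeffs)
print(poly(20.0))  # near, but not exactly, zero
```

Note that this still does not force the curve through (20,0); it only
lets the low-noise points near 20 C dominate the fit there.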

On the other hand, you should ask yourself why you are dead certain
that the true curve goes through (20,0). You seem to be implicitly
assuming that there were no measurement errors, either in the strain
gauge reading or in the calibration instrument, when you did the
zeroing.

>Two options are available to me. First, I can simply change the
>value of the constant term in my polynomial, so as to shift
>the curve up or down by the required amount. But this will give
>me a new curve that DOES NOT minimize the squares of the errors.
>Second, I can re-derive the equations such that the fitted curve
>is CONSTRAINED to pass through (20,0). (This would not be unlike
>the application of boundary conditions by the theoretical method
>in finite element problems.) Doing so should insure that
>the curve pass through (20,0), while still giving me coefficients
>that minimize the square of the errors. Now my questions:

Certainly there are other possibilities. Option (1) is equivalent to
assuming that (a) there is no measurement error at (20,0) and (b)
there is a constant bias in the errors for all of the other
observations. Option (2) implicitly assumes that (a) there is no
measurement error at (20,0) and (c) all of the other observations have
errors that are unbiased and of equal variance. It is not clear to me
that any of assumptions (a), (b), or (c) is reasonable.
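If you do decide option (2)'s assumptions are acceptable, the
constrained fit needs no special machinery: shift the temperature axis
so the constraint point becomes the origin, then omit the constant
column from the design matrix. A sketch, again with invented data:

```python
import numpy as np

# Hypothetical calibration data, invented for illustration.
rng = np.random.default_rng(1)
temps = np.linspace(-20.0, 120.0, 30)
true_output = 1e-6 * (temps - 20.0) ** 3 - 0.01 * (temps - 20.0)
readings = true_output + rng.normal(scale=0.05, size=temps.size)

# Shift so the constraint point (20, 0) becomes the origin, then fit
# y = b1*t + b2*t^2 + b3*t^3 + b4*t^4 with NO constant term.  The
# fitted curve is forced through (20, 0) by construction.
t = temps - 20.0
A = np.column_stack([t, t**2, t**3, t**4])
b, *_ = np.linalg.lstsq(A, readings, rcond=None)

def corrected(temp_c):
    """Evaluate the constrained polynomial at a temperature in deg C."""
    s = temp_c - 20.0
    return s * (b[0] + s * (b[1] + s * (b[2] + s * b[3])))

print(corrected(20.0))  # exactly 0.0: the constraint holds by construction
```

The least-squares problem is solved over the remaining four
coefficients, so among all quartics through (20,0) this one minimizes
the sum of squared errors; it just embeds assumptions (a) and (c)
above.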
--
T. Scott Thompson          email: thompson@atlas.socsci.umn.edu
Department of Economics    phone: (612) 625-0119
University of Minnesota    fax:   (612) 624-0209