Newsgroups: sci.math.stat
Path: sparky!uunet!usc!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!att!cbnewsc!cbfsb!cbnewsf.cb.att.com!rizzo
From: rizzo@cbnewsf.cb.att.com (anthony.r.rizzo)
Subject: Re: Least Square Errors
Message-ID: <1992Sep10.151722.14213@cbfsb.cb.att.com>
Sender: news@cbfsb.cb.att.com
Organization: AT&T
References: <1992Sep9.150541.15735@cbfsb.cb.att.com> <thompson.716095751@daphne.socsci.umn.edu>
Date: Thu, 10 Sep 1992 15:17:22 GMT
Lines: 101

In article <thompson.716095751@daphne.socsci.umn.edu> thompson@atlas.socsci.umn.edu writes:
>rizzo@cbnewsf.cb.att.com (anthony.r.rizzo) writes:
>
>>I have experimental data, a calibration curve of sorts, for the
>>thermal output of a strain gauge. I've fitted a 4th degree polynomial
>>to the data, by the method of least square errors, and I'm using
>>the polynomial to correct strain gauge readings taken at various
>>temperatures. My quandary is that the polynomial does not pass
>>through the one point of which I'm dead certain, (20,0). The instrument
>>with which the data were collected was zeroed at 20 C. So, the
>>curve, ideally, should pass through (20,0).
>
>To answer a question like this you must really think about why you
>think there are errors in the data to begin with. You have already
>done some of this since you tell us that you are "dead certain" that
>(20,0) is on the curve. But least squares (or constrained least
>squares) implicitly assumes that all of the other errors have equal
>variance. Given your description of the problem I suspect that it
>might be more reasonable to assume that points that are close to the
>"zeroing" point would have smaller errors than points that are far
>away.

The points that are closer to the zeroing point have smaller
errors in an absolute sense. As a percentage of the readings,
the errors aren't necessarily smaller.

>If this is the case, then you should consider weighting observations
>taken at temperatures close to 20 C more heavily than observations
>elsewhere in whatever statistical procedure you employ. (However,
>going overboard in this respect exposes you to excessive dependence on
>just a few of the observations.)

I wasn't aware of "constrained" least squares. And I hadn't
considered "weighted" least squares. The latter sounds like
a rather painless way to go. It also seems to make more sense
in my case, since I'm more interested in accuracy near 20 C.
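
For what it's worth, here's a sketch of how such a weighted fit might
look in Python with numpy. The temperatures and readings below are
made-up placeholders, and the error model is only one assumption among
many; np.polyfit expects weights proportional to 1/sigma:

    import numpy as np

    # Placeholder calibration data: temperature (deg C) vs. thermal output.
    temps   = np.array([-20.0, 0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
    outputs = np.array([-410.0, -180.0, 0.5, 150.0, 340.0, 560.0, 810.0])

    # np.polyfit minimizes sum((w_i * (y_i - p(x_i)))**2), so w_i should
    # be proportional to 1/sigma_i.  Assumed error model: the standard
    # deviation grows with distance from the 20 C zeroing point.
    sigma = 1.0 + 0.05 * np.abs(temps - 20.0)
    coeffs = np.polyfit(temps, outputs, deg=4, w=1.0/sigma)

    print(np.polyval(coeffs, 20.0))   # pulled toward 0, though not exactly 0

If the errors instead scale with the size of the reading, as I said
above, weights proportional to 1/|reading| would be the analogous
choice (taking care near the zero reading itself).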

>On the other hand, you should ask yourself why you are dead certain
>that the true curve goes through (20,0). You seem to be implicitly
>assuming that there were no measurement errors present either in the
>strain gauge reading or in the calibration instrument when you did the
>zeroing.

The bridge circuit was balanced at 20 C. This is comparable to
saying that the thermal output of the gauge-sample system is
zero by definition at 20 C. The actual strain measurements,
after the thermal output curve is generated, are then taken
at temperatures other than 20 C, with the sample subjected to
structural loads. But the bridge circuit is balanced again
at 20 C before the structure is loaded. Of course, there is
some variation here, but it is much smaller than the variation
during the test.

>>Two options are available to me. First, I can simply change the
>>value of the constant term in my polynomial, so as to shift
>>the curve up or down by the required amount. But this will give
>>me a new curve that DOES NOT minimize the squares of the errors.
>>Second, I can re-derive the equations such that the fitted curve
>>is CONSTRAINED to pass through (20,0). (This would not be unlike
>>the application of boundary conditions by the theoretical method
>>in finite element problems.) Doing so should ensure that
>>the curve passes through (20,0), while still giving me coefficients
>>that minimize the square of the errors. Now my questions:
>
>Certainly there are other possibilities. Option (1) is equivalent to
>assuming that (a) there is no measurement error at (20,0) and (b)
>there is a constant bias in the errors for all of the other
>observations.

You've very eloquently stated my reasons for not using option (1).
I have no reason to believe that there is any bias in the
errors for all the other points.
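
Concretely, option (1) amounts to nothing more than this, reusing the
coefficients (coeffs, highest power first) from a fit like the sketch
above:

    shifted = coeffs.copy()
    shifted[-1] -= np.polyval(coeffs, 20.0)   # constant term absorbs p(20)
    # Every fitted value moves by the same amount, which is exactly
    # assumption (b): a uniform bias in all the other observations.

So unless a constant bias is plausible, the shift is hard to justify.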

>Option (2) implicitly assumes that (a) there is no
>measurement error at (20,0) and (c) all of the other observations have
>error that is unbiased and of equal variance. It is not clear to me
>that any of the assumptions (a), (b) or (c) is reasonable.

Assumption (a) comes close to being true. The measurement error
at the zeroing point is much smaller than the errors in the other
observations. Before zeroing the instrument, I had the luxury of
letting everything come to thermal equilibrium. I could only
come close to thermal equilibrium during the actual observations.
This alone is reason to assume that the measurement error
at (20,0) is very small in comparison to the other observations.

Assumption (c) also comes close. I have no reason to suspect a
bias in the errors of the other observations. If there were some
systematic error, I'm not aware of it. I can make no statement
about the variance, other than to say that the points appear to
lie on a curve.
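
If I do go the constrained route, a painless way to impose the
constraint is to fit in powers of (T - 20) with no constant term, so
the curve passes through (20,0) by construction while the remaining
coefficients still minimize the squared errors. A sketch, reusing the
placeholder data from above:

    # Design matrix in the shifted variable t = T - 20; omitting the
    # constant column forces the fitted curve through (20, 0).
    t = temps - 20.0
    A = np.column_stack([t, t**2, t**3, t**4])
    c, *_ = np.linalg.lstsq(A, outputs, rcond=None)

    def corrected(T):
        tt = T - 20.0
        return c[0]*tt + c[1]*tt**2 + c[2]*tt**3 + c[3]*tt**4

    print(corrected(20.0))   # exactly 0 by construction

Weighting can be combined with this by scaling each row of A and the
corresponding entry of outputs by the same 1/sigma_i.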

>--
>T. Scott Thompson          email: thompson@atlas.socsci.umn.edu
>Department of Economics    phone: (612) 625-0119
>University of Minnesota    fax:   (612) 624-0209

Thanks!

Tony