- Path: sparky!uunet!usc!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!news.sei.cmu.edu!drycas.club.cc.cmu.edu!pitt.edu!pitt!icarus.lis.pitt.edu!ml
- Newsgroups: sci.math.stat
- Subject: Re: likelihood ratios from detection data..
- Message-ID: <16350@pitt.UUCP>
- From: ml@icarus.lis.pitt.edu (Michael Lewis)
- Date: 7 Sep 92 22:07:08 GMT
- Sender: news@cs.pitt.edu
- Distribution: sci
- Organization: University of Pittsburgh
- Summary: contrasting ideal observers & subjects using likelihood ratio functions
- Lines: 42
-
-
- I have some data to analyze & thought someone in this group might have
- a good suggestion. The data comes from an experiment comparing two displays.
- When the system is unfailed its output is z_i = .5x_i + .5y_i + n_i; when it
- fails, the displayed output becomes z_i = z_{i-1} + n_i. x and y are displayed
- inputs driven in a random walk by a zero-mean Gaussian disturbance, and the
- noise n is also Gaussian. Failures are introduced randomly with a fixed
- probability over each half-second interval. The subject's task is to monitor
- the display and press a button when she believes the process has "failed".
- The problem is a typical signal detection one.
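- As a sanity check, the process is easy to simulate. The sketch below assumes
- discrete half-second steps; the disturbance/noise standard deviations and the
- per-interval failure probability are illustrative guesses, not the
- experiment's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_steps=600, p_fail=0.01, sigma_d=1.0, sigma_n=0.5):
    """Simulate the displayed output z over n_steps intervals.

    x and y are random walks driven by zero-mean Gaussian disturbances.
    While unfailed, z_i = .5*x_i + .5*y_i + n_i; from the (random)
    failure step onward, z_i = z_{i-1} + n_i.  sigma_d, sigma_n and
    p_fail are illustrative values only.
    """
    x = np.cumsum(rng.normal(0.0, sigma_d, n_steps))
    y = np.cumsum(rng.normal(0.0, sigma_d, n_steps))
    z = np.empty(n_steps)
    fail_step = None  # step at which the failure occurs, if any
    z[0] = 0.5 * x[0] + 0.5 * y[0] + rng.normal(0.0, sigma_n)
    for i in range(1, n_steps):
        # fixed failure probability on each interval, until a failure occurs
        if fail_step is None and rng.random() < p_fail:
            fail_step = i
        if fail_step is None:
            z[i] = 0.5 * x[i] + 0.5 * y[i] + rng.normal(0.0, sigma_n)
        else:
            z[i] = z[i - 1] + rng.normal(0.0, sigma_n)
    return z, fail_step
```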
- I want to reference subjects' performance to various ideal observers which
- attend to particular features of the display. The "ideal" ideal observer,
- for example, would be a Wald observer with memory. My problem involves dealing
- with subjects' response biases. I would like to find some measure of agreement
- between a subject & observer which does not "overly" penalize the subject
- for choice of cutoff. One way of looking at the situation is that over the
- course of the experiment the subject's responses estimate the
- likelihood ratio function being used:
-
-          ^
-          p(F|X)              p(F|X)
-          ------     vs.      ------
-          ^ _                   _
-          p(F|X)              p(F|X)
-
- The subject's "error" wrt observer is then something like the integral
- of the difference between these functions from the first to last cutoff.
- Although this measure seems intuitively to do the right sort of thing, I can't
- figure out the cover story that justifies it (distortion in apprehending X
- with a fixed criterion, and/or accurate apprehension of X with a shifting
- criterion). It really seems to be a measure of scaling departures due to the
- distortion or criterion shifts.
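- A minimal sketch of that measure, assuming the evidence X has been reduced to
- a scalar statistic and the subject's and observer's judgments are recorded as
- binary "failed" responses (the binning and all names here are hypothetical
- choices, not part of the experiment):

```python
import numpy as np

def empirical_odds(x, responded, bin_edges):
    """Estimate p(F|X)/p(~F|X) by binning the evidence statistic X and
    taking the empirical odds of a 'failed' response in each bin.
    'responded' holds binary failure judgments (subject's or observer's)."""
    idx = np.digitize(x, bin_edges)
    odds = np.full(len(bin_edges) + 1, np.nan)
    for b in range(len(bin_edges) + 1):
        sel = idx == b
        if sel.any():
            p = responded[sel].mean()
            odds[b] = np.inf if p == 1.0 else p / (1.0 - p)
    return odds

def integrated_discrepancy(grid, subj_odds, obs_odds):
    """Trapezoid-rule integral of |subject odds - observer odds| over a
    grid spanning the first to last cutoff."""
    f = np.abs(np.asarray(subj_odds) - np.asarray(obs_odds))
    dx = np.diff(grid)
    return float(np.sum(dx * (f[:-1] + f[1:]) / 2.0))
```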
-
- My alternative is to pick fixed cutoffs so that I get a reference either to a
- Neyman-Pearson observer with the same alpha (which punishes the subject's
- variability in cutoff) or to a Max-P(C) observer (which punishes distance from
- the saddlepoint). Neither quite captures what I want to measure, which is:
- does one display provide subjects with more accurate evidence of failures
- than the other?
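- For the Neyman-Pearson reference, one way to match alpha is to set the ideal
- observer's cutoff so its false-alarm rate equals the subject's, then compare
- hit rates. The decision-statistic samples below are hypothetical stand-ins:

```python
import numpy as np

def matched_alpha_cutoff(stat_unfailed, alpha):
    """Cutoff giving the ideal observer false-alarm rate alpha: the
    (1 - alpha) quantile of its decision statistic under the unfailed
    condition."""
    return np.quantile(stat_unfailed, 1.0 - alpha)

def hit_rate(stat_failed, cutoff):
    """Fraction of failed-condition statistics exceeding the cutoff."""
    return float((np.asarray(stat_failed) > cutoff).mean())
```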
-
- If any of this sounds familiar, particularly estimating likelihood ratio
- functions from detection data, please point me in the right direction.
-
- -Mike Lewis ml@icarus.lis.pitt.edu
-