Xref: sparky sci.math.stat:1568 comp.ai.neural-nets:3027
Path: sparky!uunet!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!ames!agate!boulder!boulder!batra
From: batra@boulder.Colorado.EDU (sajeev batra)
Newsgroups: sci.math.stat,comp.ai.neural-nets
Subject: question on scoring prediction accuracy of a classifier
Message-ID: <batra.712560431@beagle>
Date: 31 Jul 92 05:27:11 GMT
Sender: news@colorado.edu (The Daily Planet)
Organization: University of Colorado, Boulder
Lines: 34
Nntp-Posting-Host: beagle.colorado.edu
Suppose I have a classifier that assigns objects to one of n classes
(populations). I train the classifier on a "training set," and to test
its prediction accuracy I predict on a "prediction set." I am wondering:
what are some of the better ways to score the prediction accuracy of my
classifier? All classifications and misclassifications have equal cost.

Here are the ways I'm aware of:

1) (c1 + c2 + ... + cn) / (total number of predictions made),
   where ck is the number of correct predictions made in class k.

2) The sample correlation coefficient, which gives the correlation
   between the prediction and the actual class. The coefficient is
   always between -1 and +1, and there is one coefficient per class.
   It takes both type I and type II errors into account.
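Under equal costs, the two scores above can be sketched in Python. For
score 2, "sample correlation between prediction and actual" is interpreted
here, per class, as the correlation between the 0/1 one-vs-rest indicators,
which reduces to the phi (Matthews) coefficient; that is one common reading,
not the only possible one:

```python
import math

def accuracy(actual, predicted):
    # Score 1: pooled fraction of correct predictions,
    # (c1 + ... + cn) / total number of predictions.
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return correct / len(actual)

def per_class_phi(actual, predicted, cls):
    # Score 2 (one interpretation): sample correlation between the
    # 0/1 indicators "actual == cls" and "predicted == cls".  This is
    # the phi (Matthews) coefficient: it lies in [-1, +1] and is pulled
    # down by both false positives (type I errors) and false negatives
    # (type II errors) for the class.
    tp = fp = fn = tn = 0
    for a, p in zip(actual, predicted):
        if p == cls and a == cls:
            tp += 1
        elif p == cls:
            fp += 1
        elif a == cls:
            fn += 1
        else:
            tn += 1
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Degenerate margins (e.g. a class never predicted) give denom == 0;
    # report 0 correlation in that case.
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

For example, with actual = ['a', 'a', 'b', 'b', 'c'] and
predicted = ['a', 'b', 'b', 'b', 'c'], accuracy is 4/5 and the phi
coefficient for class 'b' is 4/6 (tp=2, fp=1, fn=0, tn=2).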

What other popular ways are there to score prediction accuracy?
I think I saw someone using the mutual information function between the
predicted and the actual labels to score prediction accuracy. Would this
work?
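Mutual information can at least be estimated directly from the empirical
joint frequencies of (actual, predicted) pairs. A minimal sketch, with one
caveat noted in the comments (MI measures association, not agreement, so it
is blind to a systematic relabelling of the predictions):

```python
import math
from collections import Counter

def mutual_information(actual, predicted):
    # Empirical mutual information (in bits) between actual and
    # predicted labels:
    #   I = sum over (a, p) of P(a, p) * log2( P(a, p) / (P(a) * P(p)) ).
    # I = 0 when the prediction carries no information about the class.
    # Caveat: MI is invariant under relabelling of the predictions, so a
    # classifier that consistently swaps two classes still scores as if
    # it were perfect.
    n = len(actual)
    joint = Counter(zip(actual, predicted))
    marg_a = Counter(actual)
    marg_p = Counter(predicted)
    mi = 0.0
    for (a, p), count in joint.items():
        p_joint = count / n
        # P(a,p) / (P(a) P(p)) = (count/n) * n*n / (count_a * count_p)
        mi += p_joint * math.log2(p_joint * n * n / (marg_a[a] * marg_p[p]))
    return mi
```

On ['a', 'a', 'b', 'b'] a perfect prediction gives 1 bit (the full entropy
of the labels), a constant prediction gives 0, and a prediction that swaps
'a' and 'b' everywhere also gives 1 bit, which illustrates the caveat.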

I would like to know about other methods and the problems associated with
them, including the ones I've listed above. Also, when can I assume a
particular distribution for my n populations (for example, a normal
distribution)?

---sb