What is CRV's Accuracy?
ANSWER:
The media want an easy bottom-line answer because they don't want the public to have to think too hard. One thing they always gloss over is that intelligence data, no matter what form it takes, is judged for "accuracy" at least twice:
First, is that data accurate?
Second, regardless of how accurate it is, is it something we care about and didn't already know?
A lot of CRV data which approaches total accuracy is still stuff that's already known, or not important to the person doing the tasking. Everyone might be amazed at its accuracy - it may bring on an "8-martini lunch" - but it can still be completely underwhelming in its usefulness. When that happens, even totally exact information doesn't get included in the final "accuracy" evaluation. In fact, it gets ignored.
Another thing which has been glossed over, so as not to make people think too much, is that, like anything else, evaluations are made by people. Dr. Edwin C. May says 15% of CRV data is "startling in its exactness". Ed Dames of PsiTech says that 100% of a viewer's work is absolutely "accurate"*. During my time working the CRV database, I was asked to turn up numbers for accuracy, reliability, exactness, usefulness, and 10 or 20 other criteria. The figures - according to the database numbers - came up different every time, because each time, the criteria the person was looking for were different, and therefore the figures had different meanings and included different datasets.
One time, we were to make a presentation of CRV's accuracy to Congress, so the Director told me to go over all the sessions for the previous 6 months and pull out every REPORTED perception (not every perception in the session transcript, but only those which got into the final reports) and evaluate each one against the feedback for the session in which it appeared.
Then enters the human factor. Of the impressions which 1) were included in the FINAL reports of the sessions and 2) could be judged from the feedback, we had a 72% accuracy rate. The Director looked at the results, which had taken us over 2 weeks to compile, and said that he didn't want to report anything that high to Congress, because they would then expect us to always perform at that level, so he ordered me to sit there and randomly change numbers on the spreadsheet until the total line said 24%. That is what got reported to Congress.
Even so, the accuracy rate of 72% held only for targets for which there was immediate feedback in the form of a picture or written article. To claim that it implied the same accuracy rate for "real world" targets - some of which would never have feedback, some of which were in future time, and some of which were only targeted as "what if" suppositions - would be foolish and unscientific.
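For readers who want the arithmetic behind a figure like that spelled out, here is a minimal sketch, in Python, of the kind of tally involved: keep only the perceptions that made it into the final reports, drop the ones the feedback can't judge, and divide correct by judgeable. The data layout, field names, and sample numbers below are mine, invented purely for illustration; they are not the actual database format or records we used.

# Illustrative sketch only: tallying reported perceptions against session
# feedback. Field names and layout are hypothetical, not the real database.

from dataclasses import dataclass

@dataclass
class ReportedPerception:
    session_id: str
    text: str
    judgement: str   # "correct", "incorrect", or "no_feedback"

def accuracy_rate(perceptions):
    """Percent correct among the perceptions that could be judged from feedback."""
    judgeable = [p for p in perceptions if p.judgement != "no_feedback"]
    if not judgeable:
        return None
    correct = sum(1 for p in judgeable if p.judgement == "correct")
    return 100.0 * correct / len(judgeable)

# Example: 18 of 25 judgeable perceptions judged correct -> 72.0 (percent)
sample = ([ReportedPerception("s01", "...", "correct")] * 18
          + [ReportedPerception("s01", "...", "incorrect")] * 7
          + [ReportedPerception("s01", "...", "no_feedback")] * 5)
print(accuracy_rate(sample))   # 72.0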
And that is only considering the numbers devised using THAT judging criteria. Dr. Ed May uses another set of judging criteria altogether.** In fact, his group, the Cognitive Sciences Laboratory, uses three.*** In one, he has a viewer do a session on a picture-feedback target, then takes the session transcript or report and shows it to several people. Along with it, he shows them five pictures and has them pick out which picture is the target, judging by the viewer's description. The CHANCE rate of any one impartial judge picking the correct picture is 20%. A viewer's ability rating is then judged not by the accuracy of any set of impressions, but by the percentage above or below 20%.

Am I saying that this is a bad rating method? Absolutely not. It is a VERY GOOD rating method, and scientific in its approach. I am just saying that it measures CRV with a different yardstick than my rating system, than SRI's rating system, than Ingo Swann's rating system, etc. It is especially different from the Intelligence community's rating system, where "accuracy" is not nearly as much a consideration as usefulness.

Each rating system will report that it is rating CRV's "accuracy", "reliability", etc., and each will turn up a completely different set of numbers. In fact, the three different methods used by the CSL, all of which are scientifically valid, will turn up different accuracy ratings for the same set of sessions.
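As a rough illustration of the arithmetic in that five-picture judging method - under my own grossly oversimplified description above, not CSL's actual procedure or code - here is a small Python sketch. The judge choices and picture numbers are made up for the example.

# Illustrative sketch only: judges pick the target out of five pictures,
# so chance is 20%; the rating is how far the judges' hit rate runs above
# or below that chance level.

def rating_above_chance(judge_choices, target_picture, chance=0.20):
    """Judges' hit rate minus the 20% chance rate (positive = above chance)."""
    hits = sum(1 for choice in judge_choices if choice == target_picture)
    return hits / len(judge_choices) - chance

# Example: 3 of 5 independent judges pick picture #2, which is the target.
print(round(rating_above_chance([2, 2, 4, 2, 1], target_picture=2), 2))
# 0.4 -- forty percentage points above chance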
In short, until you find out what's behind the numbers, they don't mean a thing.
My method of "scoring" is database-oriented and probably unnecessarily complex. But to answer your question -- if I oversimplify it as grossly as I oversimplified Dr. May's and Major Dames' -- my students normally have an INDIVIDUAL AVERAGE PURITY score of between 70%-95% "accuracy" on a consistent basis, depending on the individual student. The OVERALL AVERAGE of all the students is running slightly over 80%. Please click on my method and read about it, to see what is behind that number, or it will have no meaning, either.
** - The above explanation of Dr. May's scoring method is greatly oversimplified - my apologies are in order up front. I stand ready to print his corrections/clarifications if he wants.
*** - For a full explanation of the three scoring methodologies used by CSL, called "Rank Order", "Fuzzy Sets", and "Assessment Rating", click here.