What is CRV's Accuracy?


QUESTION FROM READER:
>The interviewees on Nightline said that 15% of the experiences were
>startling in their exactness but that for the most part the rest of the
>experiments were not especially helpful - at least not enough for the
>government to continue with the programs. What kind of success rate do
>your students achieve? - P.H.

ANSWER:
The media seeks an easy bottom-line answer because it doesn't want the public to have to think too hard. One thing it always glosses over is that intelligence data, no matter what form it takes, is judged for "accuracy" at least twice:

First, is that data accurate?

Second, regardless of how accurate it is, is it something we care about or didn't already know?

A lot of CRV data which approaches total accuracy is still stuff that's already known, or not important to the person doing the tasking. Everyone might be amazed at its accuracy - it may bring on an "8 martini lunch". But it might still be completely underwhelming in its usefulness. When that happens, even totally exact information doesn't get included in the final "accuracy" evaluation. In fact, it gets ignored.

Another thing which has been glossed over, so as to not make people think too much, is that, like anything else, evaluations are made by people. Dr. Edwin C. May says 15% of CRV data is "startling in its exactness". Ed Dames of PsiTech says that 100% of a viewer's work is absolutely "accurate"*. During my time working the CRV database, I was asked to turn up numbers for accuracy, reliability, exactness, usefulness, and 10 or 20 other criteria. The figures - according to the database numbers - came up different every time, because each time, the criteria the person was looking for were different, and therefore the figures had different meanings and included different datasets.
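To illustrate that point, here is a minimal sketch, with invented record fields and counts (not the real database): the same pool of session records turns up a different figure for each set of selection criteria.

    # Hypothetical session records - fields and counts are invented.
    sessions = [
        {"practice": True,  "feedback": True,  "correct": 14, "incorrect": 4},
        {"practice": True,  "feedback": False, "correct": 0,  "incorrect": 0},   # no feedback, nothing judgeable
        {"practice": False, "feedback": True,  "correct": 6,  "incorrect": 10},  # operational tasking
    ]

    def accuracy(records):
        hits = sum(r["correct"] for r in records)
        misses = sum(r["incorrect"] for r in records)
        return hits / (hits + misses)

    print(accuracy([s for s in sessions if s["feedback"]]))                    # ~59%: all feedback sessions
    print(accuracy([s for s in sessions if s["practice"] and s["feedback"]]))  # ~78%: practice-with-feedback only

Same records, two defensible "accuracy" numbers, depending on which criteria the person asking happened to care about.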

One time, we were to make a presentation of CRV's accuracy to Congress, so the Director told me to go over all the sessions for the previous 6 months, pull out every REPORTED perception (not every perception in the session transcript, but only those which got into the final reports), and evaluate them against the feedback for the session in which they appeared.

  1. First, we went through every session and pulled out everything which was not a practice target. Many of those non-practice sessions were still waiting for feedback, and even those which did have feedback were classified and could not be presented, should anyone ask for hard-copy evidence of our figures.
  2. Then, because there had been someone in the office who had, on his own, tasked us with a lot of practice targets for which there was no feedback (unrecorded events in history, emotional profiling of certain people, UFOs, etc.), those target sessions were pulled.
  3. We then pulled all those sessions which had been merely for practice in some minor aspect of the CRV structure, and so did not require a final report.
  4. The remaining practice sessions were ones for which there was
    • A session transcript,
    • A session report, and
    • A copy of the magazine picture or news article which had been used as feedback.
  5. We then painstakingly went through every word of the summary reports and evaluated them very strictly against the feedback picture or article, using three result categories: correct, incorrect, or "can't tell from the feedback".
  6. We then threw out all of the "can't tells" and came up with a "score" on what was left (a minimal sketch of this arithmetic follows the list).
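Here is that sketch, assuming each reported perception has already been hand-labeled against its feedback in step 5; the labels below are invented examples, not real session data.

    # Step 5 output: one hand-judged label per reported perception.
    judged = ["correct", "correct", "cant_tell", "incorrect", "correct",
              "cant_tell", "correct", "incorrect", "correct", "correct"]

    # Step 6: throw out the "can't tells", then score what is left.
    scored = [j for j in judged if j != "cant_tell"]
    score = scored.count("correct") / len(scored)
    print(f"{score:.0%}")   # 75% of the judgeable perceptions were correct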

Then enters the human factor. Of the impressions which 1) were included in the FINAL reports of the sessions and 2) could be judged from the feedback, we had a 72% accuracy rate. The Director looked at the results, which had taken us over 2 weeks to compile, and said that he didn't want to report anything that high to Congress, because they would then expect us to always perform at that level. He ordered me to sit there and randomly change numbers on the spreadsheet until the total line said 24%. That is what got reported to Congress.

Even so, the accuracy rate of 72% was true only for targets for which there was immediate feedback in the form of a picture or written article. To claim that it implied the same accuracy rate for "real world" targets (some of which would never have feedback, some of which were in future time, and some of which were only targeted as "what if" suppositions) would be foolish and unscientific.

And that is only considering the numbers devised using THAT set of judging criteria. Dr. Ed May uses an altogether different set of judging criteria**. In fact, his group, the Cognitive Sciences Laboratory, uses three.*** In one, he has a viewer do a session on a picture-feedback target, then takes the session transcript or report and shows it to several people. Along with it, he shows them five pictures, and has them pick out which picture is the target, judging by the viewer's description. The CHANCE rate of any one impartial judge picking the correct picture is 20%. A viewer's ability rating is then judged not by the accuracy of any set of impressions, but by the percentage above or below 20%.

Am I saying that this is a bad rating method? Absolutely not. It is a VERY GOOD rating method, and scientific in its approach. I am just saying that it measures CRV using a different yardstick than my rating system, than SRI's rating system, than Ingo Swann's rating system, etc. It is especially different from the Intelligence community's rating system, where "accuracy" is not nearly as much a consideration as usefulness.

Each rating system will report that it is rating CRV's "accuracy", "reliability", etc., and each will turn up a completely different set of numbers. In fact, the three different methods used by the CSL, all of which are scientifically valid, will turn up different accuracy ratings for the same set of sessions.
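For contrast, here is a minimal sketch of that kind of forced-choice judging, assuming one judge and one pick out of five pictures per session. The picks are invented, and the real CSL methods (footnoted below) are considerably more involved.

    # Whether each judge's pick matched the actual target picture.
    picks_correct = [True, False, True, False, False,
                     True, False, False, True, False]   # 10 judged sessions

    hit_rate = sum(picks_correct) / len(picks_correct)  # 4/10 = 40%
    chance = 1 / 5                                      # 1 picture out of 5 = 20%
    print(f"hit rate {hit_rate:.0%}, {hit_rate - chance:+.0%} relative to chance")

Note that this number says nothing about how many individual impressions were right or wrong; a session could be 30% correct by my strict method and still let every judge pick the right picture.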

In short, until you find out what's behind the numbers, they don't mean a thing.

My method of "scoring" is database-oriented and probably unnecessarily complex. But to answer your question -- if I oversimplify it as grossly as I oversimplified Dr. May's and Major Dames' -- my students normally have an INDIVIDUAL AVERAGE PURITY score of between 70% and 95% "accuracy" on a consistent basis, depending on the individual student. The OVERALL AVERAGE of all the students is running slightly over 80%. Please click on my method and read about it, to see what is behind that number, or it will have no meaning, either.
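As a hypothetical sketch of how those two figures relate (the names and scores below are invented): each student's purity scores are averaged first, and those per-student averages are then averaged again.

    # Invented per-session purity scores for three students.
    purity = {
        "student_a": [0.92, 0.95, 0.90],
        "student_b": [0.72, 0.70, 0.74],
        "student_c": [0.81, 0.79, 0.86],
    }

    individual = {s: sum(v) / len(v) for s, v in purity.items()}   # each between 70% and 95%
    overall = sum(individual.values()) / len(individual)
    print(f"{overall:.0%}")   # overall average slightly over 80%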


* - Certain paths of personal logic have led Mr. Dames to that conclusion, rather than hard data-analytic means. Also, my explanation of his scoring method is greatly simplified - my apologies are in order for doing so. I stand ready to print his corrections/clarifications if he wants. For a more complete understanding of his scoring, the reader would have to contact him personally.

** - The above explanation of Dr. May's scoring method is greatly oversimplified - my apologies are in order up front. I stand ready to print his corrections/clarifications if he wants.

*** - For a full explanation of the three scoring methodologies used by CSL, called "Rank Order", "Fuzzy Sets", and "Assessment Rating", click here.

