How Does P>S>I Score Its Viewers' Work?


P>S>I keeps a database on the work of all its associated viewers, and tracks that database to establish VIEWER PROFILES, not overall "accuracy" scores. The following is excerpted from the "TAME CAT TRIBUNE", P>S>I's remote viewing newsletter.

Several people have asked me about the "viewer profiles" that I keep. First, a word of explanation about what they are and how they are used, then a lengthy and boring explanation of how they are figured.

It dawned on me many years back that different viewers have different strengths and weaknesses in the remote viewing arena. One, for example, always seems to get the color right, while another will be an ace when it comes to shapes and sizes. It seems logical, then, to look at the tasking which comes in and assign to that task the most proficient viewer for that task. In actual practice, tasking comes in and I break it up into its component questions, then I task each question to the viewer who is most proficient in the area which will answer that type of question.
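
For readers who think in code, the routing step can be pictured in a few lines of Python. This is only a sketch of the idea described above; the viewers, categories, and proficiency numbers are invented for illustration and are not P>S>I's actual figures or software.

    # A minimal sketch of routing tasking to viewers by profile strength.
    # The viewers, categories, and proficiency numbers are hypothetical.

    viewer_profiles = {
        "Viewer A": {"colors": 0.82, "shapes": 0.55, "motion": 0.60},
        "Viewer B": {"colors": 0.48, "shapes": 0.91, "motion": 0.71},
        "Viewer C": {"colors": 0.65, "shapes": 0.62, "motion": 0.88},
    }

    # Incoming tasking, already broken into component questions,
    # each labeled with the profile category it depends on.
    component_questions = [
        ("What color is the object?", "colors"),
        ("What is the object's overall shape?", "shapes"),
        ("Is the object moving?", "motion"),
    ]

    for question, category in component_questions:
        # Pick the viewer whose profile is strongest in this category.
        best = max(viewer_profiles, key=lambda v: viewer_profiles[v][category])
        print(f"{question!r} -> {best}")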

Also, if you keep track of a viewer's strengths and weaknesses, you know the areas in which more/less training is needed, and can customize training and proficiency practice sessions to suit the needs of the individual viewer.

However, to do this requires a lot more than just having a monitor or analyst think back on a viewer's past results and say, "You know, Joe Smith is really good at that - let's give him the task." That is nothing more than a personal value judgement. In order to know exactly what a viewer's strengths and weaknesses are, you have to collect a LOT of data, organize and keep it properly, and then do a LOT of analytic work on it. What develops is then no longer a personal value judgement, but an exact VIEWER PROFILE.

There are certain requirements:

First, you must have feedback in order to judge each perception correctly. I agree with Ingo Swann that, if you don't have feedback, you may be doing a lot of amazing stuff, but you aren't doing Controlled Remote Viewing.

Second, you must have a "non-waffled" scoring system. If, for example, the viewer says,

"Object #1 is a red moving vehicle against a plain, single-colored background."
and the feedback shows that Object #1 is actually a green vehicle against a plain red background (with no way to tell whether it is moving or not), then you cannot say, "Well, he got the red right, just in the wrong place, and most vehicles move, so let's give him credit for those two perceptions" (which is what usually happens when a viewer's session is scored by most people's methods). In order to facilitate a "non-waffled" scoring environment, I devised the following "outlined summary" method for restructuring the viewer's perceptions into a more judgable format for evaluation against feedback. The viewer's statement is changed to:
There is an object: Object #1 __Y__
...which is red __N__
...which is moving __?__
It is against a background __Y__
...which is plain __Y__
...which is single-colored __Y__
Within the confines of this structure, there can be no "waffling". It is the object and only the object which is perceived (and judged) either red or not red, moving or not moving.
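
For those who like to see such things concretely, here is one possible way to hold an outlined summary in code so that each perception is judged by itself. The field layout and the counting at the end are my own illustration, not P>S>I's actual scoring software.

    # A sketch of the "outlined summary" scoring structure.
    # Each perception is a single, separately judgable statement,
    # scored "Y" (confirmed), "N" (contradicted), or "?" (no feedback).

    perceptions = [
        ("There is an object: Object #1", "Y"),
        ("...which is red",               "N"),
        ("...which is moving",            "?"),
        ("It is against a background",    "Y"),
        ("...which is plain",             "Y"),
        ("...which is single-colored",    "Y"),
    ]

    scorable = [score for _, score in perceptions if score != "?"]
    correct = scorable.count("Y")

    print(f"Perceptions:      {len(perceptions)}")
    print(f"Scorable:         {len(scorable)}")
    print(f"Correct (purity): {correct / len(scorable):.0%}")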

Third, you must decide which strengths and weaknesses you want to track. This is generally decided according to what strengths will be required by the viewer's work situation. In other words, what problems will the viewer be tasked against when he/she works in the real world? I track all perceptions by categories and sub-categories:

Sensories, Tangibles, Intangibles, and so on...

By analyzing every session a viewer does against feedback, I am able to develop a "profile" concerning each viewer, in order to know certain facts about him/her. Those facts include the following:

PRODUCTIVITY: The average quantity of perceptions he/she normally produces.
SCORABILITY: The percentage of a viewer's perceptions for which feedback is usually available.
PURITY: The percentage of scorable perceptions a viewer gets which are normally correct.
RELIABILITY: The predictable percentage of all impressions which can be expected to be valid for a target WHEN NO FEEDBACK IS AVAILABLE.
PROFICIENCY: The percent variance from chance. Here is the tricky one. How much better or worse are this viewer's results than if we just threw darts at a dartboard to get the answers?
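
A rough sketch of how the first four of these numbers could be computed from session records is shown below. The session counts are invented, and the formula used for RELIABILITY (purity times scorability) is my own reading of the definitions above rather than a formula stated by P>S>I.

    # A sketch of the first four profile numbers, computed from session
    # records. The sessions and counts are invented for illustration, and
    # the RELIABILITY formula (purity x scorability) is an assumption.

    sessions = [
        # (total perceptions, scorable against feedback, scored correct)
        (42, 30, 21),
        (35, 28, 18),
        (50, 33, 26),
    ]

    total    = sum(t for t, _, _ in sessions)
    scorable = sum(s for _, s, _ in sessions)
    correct  = sum(c for _, _, c in sessions)

    productivity = total / len(sessions)   # average perceptions per session
    scorability  = scorable / total        # share of perceptions with feedback
    purity       = correct / scorable      # share of scorable perceptions correct
    reliability  = purity * scorability    # expected valid share with no feedback

    print(f"PRODUCTIVITY: {productivity:.1f} perceptions/session")
    print(f"SCORABILITY:  {scorability:.0%}")
    print(f"PURITY:       {purity:.0%}")
    print(f"RELIABILITY:  {reliability:.0%}")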

Using that analogy, let's say the dartboard is sectioned into two sides, one for "the person is dead" and the other for "the person is alive". The chance of getting the correct answer is 50/50, or a 50% chance. If we were judging a viewer in the category "dead or alive", and the viewer got the right answer 500 times out of 1000, that viewer would be said to be "AT CHANCE", and his/her PROFICIENCY(dead or alive) would be no better than using the darts. However, if he/she got the right answer 900 times out of 1000, his/her PROFICIENCY(dead or alive) would be "ABOVE CHANCE", and you would be much better off using him/her to find the answer. A score of 200 out of 1000 would be "BELOW CHANCE", and you would be better off with the dartboard.
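
The dartboard comparison can be written out directly. This sketch simply restates the three cases from the paragraph above (500, 900, and 200 correct out of 1000) as percent variance from a 50% chance level.

    # A sketch of the two-section "dead or alive" dartboard: chance is 50%,
    # and proficiency is how far the viewer's hit rate sits above or below it.

    def proficiency(hits: int, trials: int, sections: int) -> float:
        """Percent variance from chance for a dartboard with the given
        number of equally likely sections."""
        chance = 1 / sections
        return hits / trials - chance

    # The three cases used in the text: at chance, above chance, below chance.
    for hits in (500, 900, 200):
        p = proficiency(hits, 1000, sections=2)
        label = "AT CHANCE" if p == 0 else ("ABOVE CHANCE" if p > 0 else "BELOW CHANCE")
        print(f"{hits}/1000 correct -> {p:+.0%} vs. chance ({label})")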

However, if you want to find out a specific color, you would have to divide the dartboard into many more sections; one for red, one for heliotrope, etc. How many sections? That depends on your own personal color vocabulary. You would only have sections for those colors the viewer can name, simply because he/she will probably not give as a perception any color for which he/she doesn't have a name. Therefore, in order to know how many sections this dartboard has, you have to know the viewer's own personal vocabulary of color words. There are MUCH better ways to do this, such as using chi-square tables to get a probability, or p-score. However, for the lay person (which most viewers are), this is usually the clearest and most understandable way. Since it is the viewing student's understanding I am after, I use this for the students. If you are a statistician, rolllll your eyes back into your head and utter a sigh of disgust, then be patient and forgiving. The bottom line is that this works.
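
For readers who do want the statistician's route, here is a sketch of the chi-square approach mentioned above. The 12-word color vocabulary and the hit counts are hypothetical, and the example assumes the SciPy library is available.

    # A sketch of testing color-naming hits against chance with a
    # chi-square goodness-of-fit test. The chance level comes from the
    # size of this particular viewer's color vocabulary.

    from scipy.stats import chisquare

    vocab_size = 12          # number of color words this viewer uses
    trials     = 200         # color perceptions judged against feedback
    hits       = 45          # perceptions where the color was correct

    chance   = 1 / vocab_size   # probability of naming the right color by luck
    expected = [trials * chance, trials * (1 - chance)]
    observed = [hits, trials - hits]

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chance level: {chance:.1%}, observed hit rate: {hits / trials:.1%}")
    print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")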

RELIABLE PROFICIENCY: The predictable percentage variance which can be expected for a target WHEN NO FEEDBACK IS AVAILABLE.

Now, back to the question of how to get a viewer's actual vocabulary. There are basically two ways to do it. First, you can give a viewer a vocabulary test. The trouble with this is that, generally, the test itself is only an indicator of the vocabulary of the person making up the test. The second way is to keep a list of all the words a viewer uses - again, for example, color words. As the viewer performs more sessions, each color word in the new session is compared to the growing list. If it is not on the list, it is added - and all the viewer's profile numbers change. The ideal way is to use a combination of the two methods. Unfortunately, that is also a very cumbersome, time-consuming feat. What I generally do is to take all the basic colors: red, yellow, green, etc., and assume that the viewer will know those words. I then add to that list as the viewer turns up with new color-word vocabulary not already on his/her list. But remember, a separate list of each category of words must be kept for each viewer.
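
The running-list method can be sketched as follows. The starting set of "basic" color words and the session words are only examples; the point is that the list grows, and the chance level for the color "dartboard" shrinks, as new words appear.

    # A sketch of the per-viewer vocabulary list: start from basic color
    # words assumed known, then add new words as they show up in sessions.
    # Each addition changes the chance level (1 / vocabulary size).

    basic_colors = {"red", "orange", "yellow", "green", "blue", "purple",
                    "brown", "black", "white", "gray"}

    viewer_vocab = set(basic_colors)

    def record_session_colors(color_words):
        """Add any new color words from a session to the viewer's list and
        report the updated chance level."""
        new_words = set(color_words) - viewer_vocab
        viewer_vocab.update(new_words)
        if new_words:
            print(f"new color words: {sorted(new_words)}")
        print(f"vocabulary size: {len(viewer_vocab)}, "
              f"chance level now {1 / len(viewer_vocab):.1%}")

    record_session_colors(["red", "heliotrope"])   # "heliotrope" is new
    record_session_colors(["green", "teal"])       # "teal" is new
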
But that's for color perceptions only. Now, in order to get the viewer's OVERALL profile, all you have to do is figure the same calculations for tastes, smells, shapes, sizes, concepts, emotionals, textures, motions, ... And then all you have to do to get the average profile for American viewers is to do the same for hundreds of viewers from America. Then, for viewers above age 35... Then, for left handed viewers... Then...
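
Rolling the per-category numbers up into an overall figure might look something like the sketch below. The categories and purity figures are invented, and the simple unweighted average is my own choice for illustration; a real profile would carry the full set of metrics for every category tracked.

    # A sketch of combining per-category results into one overall figure.
    # Numbers are hypothetical; the unweighted average is an assumption.

    category_purity = {
        "colors":   0.58,
        "shapes":   0.71,
        "textures": 0.64,
        "motions":  0.49,
        "smells":   0.33,
    }

    overall = sum(category_purity.values()) / len(category_purity)

    for category, purity in sorted(category_purity.items(), key=lambda kv: -kv[1]):
        print(f"{category:10s} {purity:.0%}")
    print(f"{'OVERALL':10s} {overall:.0%}")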

Then add to that the fact that the VIEWER PROFILE contains other aspects not shown here: such things as charts showing a viewer's track record of each of these aspects for sessions worked in the morning hours vs. sessions worked in the afternoon or evening. The result is that there is no one simple number which can be given for any viewer or group of viewers to say that their "accuracy" is such and such.

Parapsychology may be hell to prove with logic or evidence, but for bean-counting statisticians, it is a true unending paradise.

This, then, is P>S>I's way of measuring the viewers' results. It is totally APPLICATIONS-oriented. It sees "accuracy" as only a minor part of the equation. The result is that, when answering incoming tasking, the resulting "scoring" for accuracy, precision, reliability, etc. for the group effort can be significantly higher than the "scoring" would be for any one remote viewer alone.

And therein lies the fallacy of what you are hearing in the newscasts today: the military and CIA are interested in the numbers which result from CRV APPLICATIONS in the field, and the newscasts are reporting the results of CRV RESEARCH in the lab.

I feel strongly that this type of scoring system would work for all types of parapsychological efforts: dowsing, PK studies, etc. The thing to remember is that it is strictly applications-oriented in nature, not research-oriented.