YOUR MODERN MICROCOMPUTERS' STATISTICS LICENSE
Even though the program is shareware, and can be freely copied, there are
still some limitations to protect the quality of the distribution of the
program and to support future development.
Users of MODERN MICROCOMPUTERS' STATISTICAL software may make copies of
this program for trial use by others on a PRIVATE NON-COMMERCIAL BASIS.
By accepting and using this software, you acknowledge that this software
may not suit your particular requirements or be completely trouble-free.
With proper application, this software will perform as described. However,
MODERN MICROCOMPUTERS is not responsible for your specific application or
any problems resulting from use of this software.
If the software does not perform as described, our liability to you is
limited to replacing the software or refunding the purchase price (if
purchased and registered). We have no liability to you or any other person
or entity for any damage or loss, including special, incidental, or
consequential damages, caused by this software, directly or indirectly.
Some states do not allow the limitation or exclusion of liability for
incidental or consequential damages, so the above limitation or exclusion
may not apply to you.
This Agreement is governed by the laws of the State of New York. Should any
part of this agreement be held invalid, the remainder of the Agreement will
still be in effect. This Agreement can only be modified by written
statement signed by MODERN MICROCOMPUTERS.
Under this license, you may NOT:
1) Distribute the program in connection with any other product or service,
or as part of a corporate or institutionally sponsored distribution.
Site licenses and bundling agreements are available upon request.
2) Charge anything for these MODERN MICROCOMPUTERS' programs. An excep-
tion is made for registered user groups who may charge a cost-based
fee (not to exceed $10) to cover their own costs.
3) Distribute the program in modified form.
4) Copy or reproduce the printed documentation in any form.
THE SHAREWARE CONCEPT
MODERN MICROCOMPUTERS' STATISTICS is distributed through a unique marketing
approach called shareware. The diskette with programs on it can be freely
copied and shared. It is also available from us, MODERN MICROCOMPUTERS, for
$15.00. We ask you to help us distribute MODERN MICROCOMPUTERS' MODSTAT2
by sharing unmodified copies of the diskette with others. We also encourage
you and those to whom you have distributed the program to register your copy.
Registration has a number of benefits to you:
1) You will receive the latest copy of the programs, including all recent
updates, and additions.
2) Automatic notice of future updates and additions.
3) You will be supporting the concept that allowed you to receive MODERN
MICROCOMPUTERS' MODSTAT2 in the first place. Only through user support
can we continue to improve MODERN MICROCOMPUTERS' MODSTAT2 and perhaps
develop other products.
Only by supporting the program's authors who release valuable programs as
shareware can you encourage others to do the same.
NO-QUIBBLE LICENSE AGREEMENT
YOU ARE IMPORTANT TO US AS A CUSTOMER AND WE RESPECT YOUR NEEDS
We know that it's extremely important to you that our software be easy to
use, and so our disks contain only standard unprotected files with no
'fingerprints' and require no special installation routines. This means
that you can install our files on hard disks with no fear of future
problems.
Our only concession to copy protection is that our program code is encrypted
with a serial number and/or the user's name, which appears on the screen and
makes each disk traceable.
Our software license restriction is essentially that each licensed copy of
the software must be used on ONLY ONE COMPUTER AT A TIME.
BY USING THIS SOFTWARE YOU INDICATE YOUR ACCEPTANCE OF THIS SOFTWARE
LICENSING AGREEMENT.
We hope you will find the program useful and will support the shareware
concept by registering your copy with us. Please use the registration form
on the next page.
In the event that the REGISTRATION FORM is missing, you can receive one from
us by writing to:
MODERN MICROCOMPUTERS
63 Sudbury Lane
Westbury, New York 11590
or by calling: (516) 333-9178
There is no charge for the registration form. Even if you do not decide to
send in the $15.00 purchase cost, you should register your copy with us.
We only want those who find the program useful to purchase it. We
appreciate your passing the programs on to friends and fellow workers.
SYSTEM REQUIREMENTS:
IBM PC/XT/AT OR IBM COMPATIBLE computer with hard disk.
MS-DOS 2.1 (PC-DOS 2.1) or higher.
256K RANDOM ACCESS MEMORY
REGISTRATION FORM
-----------------
If you purchased MODERN MICROCOMPUTERS' MODSTAT2 directly from MODERN
MICROCOMPUTERS in your own name, then your copy is already registered and
you will receive all the benefits of registration. You do not need to send
in a registration form.
If you received MODERN MICROCOMPUTERS' MODSTAT2 some other way, you may
register your copy by filling out the following form and mailing it to the
listed address along with $15.00. You will promptly receive the latest
version of MODERN MICROCOMPUTERS' MODSTAT2 along with a special software
disk containing a series of financial programs. We will also place you on
our update list so that you will automatically be notified of any changes
to the programs.
Once your copy of MODERN MICROCOMPUTERS' MODSTAT2 is registered, you will
be entitled to unlimited telephone support by calling (516) 333-9178
during business hours (10:00 a.m. - 3:00 p.m. New York time).
In addition, you will be supporting software distributed under the shareware
concept and will be contributing to the further development of MODERN
MICROCOMPUTERS' MODSTAT2 and other shareware products.
Mail To:
MODERN MICROCOMPUTERS
63 Sudbury Lane
Westbury, N.Y. 11590
NAME ______________________________________________
COMPANY ___________________________________________
ADDRESS ___________________________________________
CITY/STATE ________________________________________
ZIP ______________________________
How did you first learn about MODERN MICROCOMPUTERS' MODSTAT2 or where
did you first obtain a copy of MODERN MICROCOMPUTERS' MODSTAT2?
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
What additional programs would you like to see developed? _______________
_________________________________________________________________________
_________________________________________________________________________
THIS IS THE MODERN MICROCOMPUTERS' STATISTICAL SOFTWARE
These programs were developed during the teaching of various courses
in statistics at the university level by Dr. Robert C. Knodt. Their
development was the direct result of work instituted to make statistics
a more understandable and usable subject.
Early in my work with statistics I realized that almost any student can
learn to apply the statistical tests and successfully complete the
mathematics necessary to arrive at the correct answer to any of these tests.
The biggest problem the students had was selecting the correct statistical
test for the proposed investigation. With this in mind the first program
was developed. The aim of the program is to help in the selection of the
proper statistical test. This program, called 'FIND', is the first one
listed on the first menu.
FIND allows the investigator to answer some simple questions and the
program will indicate the correct statistical test. In addition, the
program will then branch to that test so that the investigator can
immediately perform the test.
Over the course of years the various programs increased in number so
that today there are over 40 programs in the package. Of course, not all
will be needed by any one investigator, but they do cover a wide range of
situations.
These programs are not strictly free. The author, Dr. Knodt, requests
that you send the registration form along with $15.00 to cover the costs
of the development of these programs and the development of future programs.
If you find these programs useful, $15.00 is a small price for the
collection of programs. Your donation will allow the author to continue
developing programs of this and other types.
As a registered user you will be kept up-to-date on any future programs
and will be offered them at comparably low prices.
Please send your donation to:
MODERN MICROCOMPUTERS
Dr. Robert Knodt
63 Sudbury Lane
Westbury, N.Y. 11590
If you experience any difficulty or find that you need any personal
help in your investigations or the analysis of data from your investigations,
please do not hesitate to call (516) 333-9178 from 10:00 a.m. to 3:00 p.m.
New York time. Dr. Knodt will make every effort to help.
THE SHAREWARE CONCEPT ALLOWS THE DEVELOPMENT OF POWERFUL, USEFUL,
AND HIGHLY AFFORDABLE COMPUTER SOFTWARE. SUPPORT THE CONCEPT.
REGISTER AND SEND YOUR $15.00 TODAY.
The group of statistical tests called MODSTAT2 includes the following
tests.
* ONE-WAY ANALYSIS OF VARIANCE BETWEEN GROUPS FOLLOWED BY t-TESTS.
(Up to seven groups - any number in a group)
One of the best tests, if not the best, for determining if there is a
significant difference between groups. There should only be one independent
variable involved.
The test assumes that the scores are from a population with a normal
distribution but studies have shown that this requirement can be violated to
a great extent without altering the outcome of the analysis. The test
produces a summary table and gives the F value as well as the significance
of that value. The degrees of freedom are taken from the table as the
degrees of freedom of the between variable and the degrees of freedom of the
within variable. You can check the significance level in the F-table using
these degrees of freedom but the program gives you the exact value.
The program then shows the t-test values for each group compared to
each other group. The mean and standard deviation of each group are also
displayed. The test can be used for comparing two groups and gives the F
value which, in that case, is the square of the Z score (or, for small
groups, the t value).
Most statistics books give the computational method but many do not
stress the value and ease of the test. Usually the t-test is emphasized for
comparing two groups and the ANOVA (ANalysis Of VAriance) tests are left
for later in the course work. In all probability, the ANOVA should be taught
first since it includes the t-test calculations.
In most other ANOVA tests the number of subjects in each group must be
equal but this is not the case for the One-Way ANOVA. You can test up to 7
groups and have unequal numbers in each group.
The first basic assumption is that the scores must be from a genuine
interval scale, that is, each score should be equidistant from the next
score. For example, the distance from 84 to 85 should be the same as the
distance from 23 to 24.
The second assumption is that the scores must be normally distributed
in the population. As noted above, this assumption can be violated to a
great extent without changing the conclusions of the test.
The third assumption is that the variance in the groups must be
homogeneous. This assumption can also be violated to a great extent.
There are some tests that have been developed to determine non-normality
and heterogeneity of variance but most of them are less robust than the
ANOVA and many are themselves more susceptible to distortion than the ANOVA.
Most are also tedious and time-consuming to perform.
Hamburg, Morris, Basic Statistics: A Modern Approach. New York: Harcourt
Brace Jovanovich, Inc., 1974.
Snedecor, George W., Statistical Methods., 4th Ed., Ames, Iowa: The Iowa
State College Press, 1946
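For readers who want to cross-check results outside of MODSTAT2, a minimal
modern sketch of the same analysis (one-way ANOVA followed by pairwise
t-tests) can be written with SciPy; the group scores below are made-up
example data, not output of the package:
    import numpy as np
    from scipy import stats
    from itertools import combinations

    # Hypothetical example data: three groups, unequal sizes are allowed.
    groups = {
        "A": [23, 25, 28, 30, 27],
        "B": [31, 29, 35, 33],
        "C": [22, 20, 24, 26, 25, 23],
    }

    # One-way between-groups ANOVA: F value and its significance.
    f_value, p_value = stats.f_oneway(*groups.values())
    print(f"F = {f_value:.3f}, p = {p_value:.4f}")

    # Follow-up t-tests for each pair of groups, plus means and SDs.
    for (name1, x1), (name2, x2) in combinations(groups.items(), 2):
        t, p = stats.ttest_ind(x1, x2)
        print(name1, name2, f"t = {t:.3f}, p = {p:.4f}")
    for name, x in groups.items():
        print(name, "mean", np.mean(x), "sd", np.std(x, ddof=1))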
* TWO-WAY ANALYSIS OF VARIANCE BETWEEN GROUPS
(Up to 9 levels of each variable)
The same assumptions exist for this test as exist for the One-Way ANOVA
with the added condition that you must have equal numbers of scores in each
condition of the test.
The test is used when you have a between subjects design and two
independent variables.
The program produces the summary table, shows all F scores and shows
the significance level of each F score. The degrees of freedom are the
degrees of freedom for the item in question and the degrees of freedom for
the error factor. The main effects of Variable A and Variable B are
evaluated and the interaction of Variable A with Variable B (AxB) is shown.
In the event that only one score is entered per test condition the
error term is not shown and all F scores are calculated by dividing by the
mean square value of the AxB interaction term. If more than one score is
entered under each test condition, the error term is shown and F scores are
calculated using the mean square value of the error term.
All summary totals are shown, along with the average of the scores in
each cell. These averages can be used when doing Tukey's (a) test. For
information on this test refer to:
Cicchetti, Dominic V. Extensions of multiple-range tests to interaction
tables in the analysis of variance: A rapid approximate solution.
Psychological Bulletin, 1972, 77, 405-408.
McNemar, Quinn, Psychological Statistics. New York: John Wiley & Sons,
Inc., 1949.
Richmond, Samuel B., Statistical Analysis. New York: The Ronald Press
Company, 1964.
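As a rough illustration only (MODSTAT2's own routines are not reproduced
here), a two-way between-groups ANOVA with an AxB interaction can be
sketched using the statsmodels library and hypothetical scores:
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    # Hypothetical example: 2 levels of A x 2 levels of B, 3 scores per cell.
    data = pd.DataFrame({
        "a": ["a1"] * 6 + ["a2"] * 6,
        "b": (["b1"] * 3 + ["b2"] * 3) * 2,
        "score": [12, 14, 13, 18, 17, 19, 22, 21, 23, 15, 16, 14],
    })

    # Fit the model and print the summary table: main effects A and B
    # plus the AxB interaction, each with its F value and significance.
    model = ols("score ~ C(a) * C(b)", data=data).fit()
    print(anova_lm(model, typ=2))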
* THREE-WAY ANALYSIS OF VARIANCE BETWEEN GROUPS
(Up to 4 levels of each variable)
This analysis also requires an equal number of scores in each condition
of the test. The summary table is shown along with all the significance
levels for the F values shown.
In the event that only one score is tabled per test condition then the
error term is not shown and all F scores are calculated by dividing by the
mean square value of the triple interaction term. If more than one score is
entered under each test condition, the error term is shown and F scores are
calculated using the mean square value of the error term.
Linton, Marigold, and Philip S. Gallo, The Practical Statistician.
Monterey, California: Brooks/Cole Publishing Co., 1975.
McNemar, Quinn, Psychological Statistics. New York: John Wiley & Sons,
Inc., 1949.
Snedecor, George W., Statistical Methods., 4th Ed., Ames, Iowa: The Iowa
State College Press, 1946.
* ONE-WAY ANALYSIS OF VARIANCE WITHIN GROUPS FOLLOWED BY CORRELATED t-TESTS
* TWO-WAY ANALYSIS OF VARIANCE WITHIN GROUPS
* THREE-WAY ANALYSIS OF VARIANCE WITHIN GROUPS
These tests are similar to the ANOVA for between groups but involve
investigations involving a within groups situation. This type of test
involves testing the same individuals more than once. It can be used as a
before and after investigation. Each individual is tested once under each
of the conditions.
With the one-way ANOVA after the completion of the summary table for
the ANOVA you can do a correlated t-test between any two of the test
conditions.
Edwards, Allen L., Experimental Design in Psychological Research. New York:
Holt, Rinehart and Winston, Inc., 1968.
(Please note that the example in Edwards listed as a between ANOVA is
actually the within ANOVA.)
Linton, Marigold, and Philip S. Gallo, The Practical Statistician.
Monterey, California: Brooks/Cole Publishing Co., 1975.
* ANALYSIS OF VARIANCE MIXED DESIGN TWO FACTOR BETWEEN-WITHIN
* ANALYSIS OF VARIANCE MIXED DESIGN THREE FACTOR BETWEEN-BETWEEN-WITHIN
* ANALYSIS OF VARIANCE MIXED DESIGN THREE FACTOR BETWEEN-WITHIN-WITHIN
These tests involve a mixed design ANOVA. There are three tests of
which the two factor design is the most often used. The subjects are
usually divided into groups and each individual is tested under a number of
conditions. The test allows for an analysis of both the between groups and
the within groups.
The other two tests combine either one between-groups factor with two
within-groups factors, or one within-groups factor with two between-groups
factors. In all cases the subjects are tested under various conditions.
Linton, Marigold, and Philip S. Gallo, The Practical Statistician.
Monterey, California: Brooks/Cole Publishing Co., 1975.
Snedecor, George W., Statistical Methods., 4th Ed., Ames, Iowa: The Iowa
State College Press, 1946.
Edwards, Allen L., Experimental Design in Psychological Research. New York:
Holt, Rinehart and Winston, Inc., 1968.
* ANALYSIS OF VARIANCE LATIN SQUARE DESIGN, 3x3, 4x4, 5x5
This program handles three different sized Latin square designs. You
indicate where in the design each individual tested is located and the
program does the complete ANOVA. The significance level of the calculated F
score is shown.
Edwards, Allen L., Experimental Design in Psychological Research. New York:
Holt, Rinehart and Winston, Inc., 1968.
* ONE-WAY CHI SQUARE ANALYSIS
* TWO-WAY CHI SQUARE ANALYSIS 2x2 USING YATES CORRECTION FACTOR
* FISHER'S EXACT PROBABILITY TEST FOR 2x2 TABLE WITH SMALL VALUES
* TWO-WAY CHI SQUARE ANALYSIS AxB
* THREE-WAY CHI SQUARE ANALYSIS 2x2x2
* THREE-WAY CHI SQUARE ANALYSIS AxBxC
Although the Chi Square analysis is one of the most often used non-
parametric tests, it is possible to select the incorrect test for your data.
You are advised to use the FIND program which is choice 1 on the first menu
to make sure you have selected the correct Chi Square test.
The Yates correction factor is used wherever necessary and if the 2x2
analysis has limited numbers in each cell of the table, you are offered the
option of running the Fisher's exact probability test as an alternative.
All these tests assume a between subjects analysis.
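A hedged modern equivalent of the 2x2 analysis, with the Yates correction
and the Fisher exact fallback, might look like this in SciPy (the table
below is invented):
    from scipy import stats

    # Hypothetical 2x2 table of observed frequencies (between subjects).
    table = [[12, 5],
             [ 7, 15]]

    # Two-way 2x2 Chi Square with the Yates continuity correction.
    chi2, p, dof, expected = stats.chi2_contingency(table, correction=True)
    print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.4f}")

    # Fisher's exact probability test, the usual alternative when the
    # expected cell counts are small.
    odds_ratio, p_exact = stats.fisher_exact(table)
    print(f"Fisher exact p = {p_exact:.4f}")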
* REPEATED MEASURES CHI SQUARE ANALYSIS
* McNEMAR'S CHI SQUARE ANALYSIS OF CHANGES
These two Chi Square tests are used when you have a within subjects or
a mixed design analysis.
Bowker, A. H., A test for symmetry in contingency tables. J. American
Statistical Association, 43, 1948.
Linton, Marigold, and Philip S. Gallo, The Practical Statistician.
Monterey, California: Brooks/Cole Publishing Co., 1975.
McNemar, Quinn, Psychological Statistics, 2nd Ed. New York: John Wiley &
Sons Inc., 1955.
Siegel, Sidney, Nonparametric Statistics for the Behavioral Sciences. New
York: McGraw-Hill Book Company, 1956.
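For orientation, McNemar's test of changes reduces to a simple formula on
the two 'change' cells; the sketch below uses assumed counts and the usual
continuity correction, and is not taken from the package:
    from scipy.stats import chi2

    # Hypothetical summary of changes: b changed one way, c the other.
    b, c = 14, 5

    # McNemar's Chi Square of changes with the continuity correction.
    statistic = (abs(b - c) - 1) ** 2 / (b + c)
    p_value = chi2.sf(statistic, df=1)
    print(f"chi2 = {statistic:.3f}, p = {p_value:.4f}")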
* THE SIGN TEST
The sign test gets its name from the fact that it uses plus or minus
signs rather than quantitative measures as its data. You can either enter
the raw data or a summary of the data indicating only the number of
individuals who changed in each direction. The test is based on the
binomial distribution. The significance level is shown.
A more detailed test is the Wilcoxon matched-pairs signed-ranks test.
Siegel, Sidney, Nonparametric Statistics for the Behavioral Sciences. New
York: McGraw-Hill Book Company, 1956.
McNemar, Quinn, Psychological Statistics, 2nd Ed. New York: John Wiley &
Sons Inc., 1955.
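Because the sign test is just a binomial test with p = 0.5, it can be
checked with a few lines of SciPy (the counts below are hypothetical):
    from scipy.stats import binomtest  # requires SciPy 1.7 or later

    # Hypothetical summary data: 15 individuals changed in one direction,
    # 6 in the other (ties are dropped before the test).
    plus, minus = 15, 6

    # Two-sided binomial test against p = 0.5.
    result = binomtest(plus, n=plus + minus, p=0.5)
    print(f"p = {result.pvalue:.4f}")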
* WILCOXON RANK-SUMS TEST
This test utilizes information about the direction and magnitude of the
differences between the scores of individuals over time.
If only the direction of the change is known, then the proper test is
the Sign test.
The idea of using rank values in place of the measurements themselves
for the purpose of significance tests came from Professor Spearman in 1904.
Mood, A. M., Introduction to the theory of statistics. New York: McGraw-
Hill Book Company, 1950.
Siegel, Sidney, Nonparametric Statistics for the Behavioral Sciences. New
York: McGraw-Hill Book Company, 1956.
Spearman, C., American Journal of Psychology, 15:88, 1904.
Wilcoxon, F., Individual comparisons by ranking methods. Biometrics
Bulletin, 1, 1945.
Wilcoxon, F., Probability tables for individual comparisons by ranking
methods. Biometrics Bulletin, 3, 1947.
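As described here the test works on each individual's change over time,
which corresponds to the matched-pairs (signed-ranks) form; a tentative
SciPy sketch with invented before/after scores:
    from scipy.stats import wilcoxon, ranksums

    # Hypothetical before/after scores for the same individuals.
    before = [72, 68, 75, 80, 66, 71, 74, 69]
    after  = [75, 70, 74, 85, 70, 73, 78, 72]

    # Matched-pairs test on the signed, ranked differences.
    stat, p = wilcoxon(before, after)
    print(f"W = {stat:.1f}, p = {p:.4f}")

    # For two independent samples, the rank-sums form is used instead:
    # stat, p = ranksums(sample_a, sample_b)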
* KRUSKAL WALLIS ONE-WAY ANALYSIS OF VARIANCE BY RANKS
This test extends the range of Wilcoxon's Sum of Ranks Test to cases
where there are more than two sets of measurements. The test uses the Chi
Square distribution.
This test determines whether k independent samples are from different
populations.
Langley, Russell, Practical Statistics Simply Explained. New York: Dover
Publications, Inc., 1970
Siegel, Sidney, Nonparametric Statistics for the Behavioral Sciences. New
York: McGraw-Hill Book Company, 1956.
Kruskal, W. H. and W. A. Wallis, Use of ranks in one-criterion variance
analysis. Journal of the American Statistical Association, 47, 1952.
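A quick outside check of the H statistic is possible with SciPy; the three
samples below are made up:
    from scipy.stats import kruskal

    # Hypothetical example: k = 3 independent samples.
    g1 = [27, 2, 4, 18, 7, 9]
    g2 = [20, 8, 14, 36, 21]
    g3 = [34, 31, 3, 23, 30, 6]

    # H statistic, evaluated against the Chi Square distribution.
    h, p = kruskal(g1, g2, g3)
    print(f"H = {h:.3f}, p = {p:.4f}")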
* FRIEDMAN'S TEST
This test compares three or more random samples which are matched. The
test involves ranking each set of matched measurements.
Friedman, Milton, Journal of the American Statistical Association, 1937.
Siegel, Sidney, Nonparametric Statistics for the Behavioral Sciences. New
York: McGraw-Hill Book Company, 1956.
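An illustrative cross-check with SciPy, using hypothetical matched
measurements (one list per condition, same matched individuals in each):
    from scipy.stats import friedmanchisquare

    cond1 = [9, 6, 9, 7, 8]
    cond2 = [7, 5, 8, 6, 7]
    cond3 = [4, 3, 6, 5, 5]

    stat, p = friedmanchisquare(cond1, cond2, cond3)
    print(f"chi2_r = {stat:.3f}, p = {p:.4f}")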
* PEARSON PRODUCT-MOMENT CORRELATION AND REGRESSION ANALYSIS
Single and multiple regression - Curvilinear regression
These programs allow you to work with either simple or multiple
regression analysis.
For a simple regression analysis the program first uses the least
squares method and calculates the regression equation, coefficient of
correlation 'R' value and the standard error of the estimate. It will also
evaluate if the 'R' value is significantly different from zero by
calculating either the t-value or the Z value, and finally, it will allow
you to make estimates of the dependent variable from the independent
variable(s).
The program then offers you the ability to check for a curvilinear
regression fit using the same data. At the completion of the analysis the
program will indicate the F value for divergence from the linear
relationship and evaluate the significance of the F value.
If the F value is significant you will be shown the significance level
of the correlation coefficient and finally offered the ability to make
estimates of the dependent variable from various independent values.
If the F value is non-significant you are returned to the linear
section and offered the opportunity to do estimates.
The range of the conditional mean is shown as well as the individual
range of the predicted dependent variable.
If you select to do a multiple regression the data will be analyzed
using two entered independent variables associated with the dependent
variable. You are shown the level of significance and given the opportunity
to make estimates of the dependent variable based on entering various
combinations of the independent variables.
All data is saved in a file. For single linear regression data you can
try the data as in choice 2 of the menu to see if you get a better fit as
exponential, logarithmic or as a power fit. If your data doesn't match the
input data limitations of these tests you will receive an error message.
This basic method of curve fitting is attributed to Karl Pearson and a
much more complete analysis of this method can be found in the text,
Statistical Methods by George W. Snedecor.
Snedecor, George W., Statistical Methods., 4th Ed., Ames, Iowa: The Iowa
State College Press, 1946.
Poole, Lon, Mary Borchers, and David M. Castlewitz, Some Common Basic
Programs, Apple II Edition, Berkeley, California: Osborne/McGraw-Hill,
1981.
McElroy, Elam E., Applied Business Statistics, 2nd Ed., San Francisco,
California: Holden-Day, Inc., 1979.
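For the simple (one-predictor) case, the least-squares fit, the 'R' value
and its significance can be reproduced approximately as follows; the x and
y values are invented:
    import numpy as np
    from scipy import stats

    # Hypothetical paired observations.
    x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 8.2, 8.8])

    # Least-squares fit: regression equation, 'R', and the test that
    # R differs significantly from zero (two-sided p value).
    fit = stats.linregress(x, y)
    print(f"y = {fit.intercept:.3f} + {fit.slope:.3f} x")
    print(f"R = {fit.rvalue:.4f}, p = {fit.pvalue:.4f}")

    # Estimate of the dependent variable for a new independent value.
    x_new = 9.0
    print("estimate:", fit.intercept + fit.slope * x_new)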
* REGRESSION ANALYSIS EXPONENTIAL
* REGRESSION ANALYSIS LOGARITHMIC
* REGRESSION ANALYSIS POWER
These methods calculate the regression equation when the dependent
variable is related to the independent variable in various fashions.
When attempting an exponential analysis the dependent variable must be
greater than zero. When attempting a logarithmic fit the independent
variable must be greater than zero and when attempting a power curve fit,
both variables must be greater than zero.
The program shows the coefficient of determination, the calculated 'A'
and 'B' values, the regression equation and allows you to make estimates of
the dependent variable from the independent variable.
All data is saved in a file so it is possible to try all three curve
fits as well as a linear and curvilinear fit without having to re-enter the
data.
Hewlett-Packard Company, HP-67 Standard Pac, Cupertino, California
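Each of these three fits becomes an ordinary straight-line fit after a
logarithmic transformation, which is one way (not necessarily the package's
way) to check them; the data below are invented and respect the input
limitations noted above:
    import numpy as np
    from scipy import stats

    # y > 0 for the exponential fit, x > 0 for the logarithmic fit,
    # and both > 0 for the power fit.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.2, 4.1, 8.3, 15.8, 31.0, 63.5])

    exp_fit = stats.linregress(x, np.log(y))          # y = A * exp(B x)
    log_fit = stats.linregress(np.log(x), y)          # y = A + B ln(x)
    pow_fit = stats.linregress(np.log(x), np.log(y))  # y = A * x^B

    A, B = np.exp(exp_fit.intercept), exp_fit.slope
    print(f"exponential: y = {A:.3f} * exp({B:.3f} x)")
    print(f"coefficient of determination = {exp_fit.rvalue ** 2:.4f}")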
* SPEARMAN RANK CORRELATION COEFFICIENT
Of all the statistics based on ranks, the Spearman rank correlation
coefficient was the earliest to be developed and is perhaps the best known
today. This statistic is referred to as 'rho'.
Both variables must be measured in at least an ordinal scale so that the
objects or individuals under study may be ranked in two ordered series.
Siegel, Sidney, Nonparametric Statistics for the Behavioral Sciences. New
York: McGraw-Hill Book Company, 1956.
Spearman, C., American Journal of Psychology, 15:88, 1904.
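A brief cross-check of rho with SciPy, on made-up paired measurements that
are at least ordinal:
    from scipy.stats import spearmanr

    x = [86, 97, 99, 100, 101, 103, 106, 110, 112, 113]
    y = [ 2, 20, 28,  27,  50,  29,   7,  17,   6,  12]

    rho, p = spearmanr(x, y)
    print(f"rho = {rho:.3f}, p = {p:.4f}")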
* POINT-BISERIAL CORRELATION
If one variable is graduated and yields an approximately normal
distribution and the other is dichotomized, then, if we can assume that the
underlying dichotomized trait is continuous and normal, we can obtain a
correlation measure which constitutes an estimate of what the product
moment 'r' would be if both variables were in graduated form.
Bernstein, Allen L., A Handbook of Statistics Solutions for the Behavioral
Sciences. New York: Holt, Rinehart and Winston, Inc., 1964
McNemar, Quinn, Psychological Statistics. New York: John Wiley & Sons,
Inc., 1949.
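Illustrative only, with an invented dichotomous grouping (0/1) and
graduated scores:
    from scipy.stats import pointbiserialr

    group  = [0, 0, 0, 0, 1, 1, 1, 1, 1]
    scores = [52, 55, 61, 58, 66, 70, 68, 72, 69]

    r_pb, p = pointbiserialr(group, scores)
    print(f"r_pb = {r_pb:.3f}, p = {p:.4f}")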
* KENDALL'S RANK ORDER CORRELATION
The Kendall rank correlation coefficient, tau, is suitable as a measure
of correlation when you have rank values for the X and Y variables.
It is possible, although the program is not included in this set, to
generalize to a partial correlation coefficient.
Siegel, Sidney, Nonparametric Statistics for the Behavioral Sciences. New
York: McGraw-Hill Book Company, 1956.
Kendall, M. G., Rank Correlation Methods. London: Griffin Press, 1948.
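A small SciPy sketch of tau on hypothetical rank values:
    from scipy.stats import kendalltau

    x_ranks = [1, 2, 3, 4, 5, 6, 7, 8]
    y_ranks = [3, 1, 4, 2, 6, 5, 8, 7]

    tau, p = kendalltau(x_ranks, y_ranks)
    print(f"tau = {tau:.3f}, p = {p:.4f}")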
* KENDALL'S COEFFICIENT OF CONCORDANCE
When you have k sets of rankings of N objects or individuals it is
possible to determine the association among them by using the Kendall
coefficient of concordance, W.
Friedman, M., A comparison of alternative tests of significance for the
problem of m rankings. Annals of Mathematical Statistics, 11, 1940.
Kendall, M. G., Rank Correlation Methods. London: Griffin Press, 1948.
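W can be computed directly from the k sets of rankings; the sketch below
implements the usual no-ties formula W = 12S / (k^2(N^3 - N)) with invented
rankings, and is not the package's routine:
    import numpy as np

    def kendalls_w(rankings):
        """Kendall's coefficient of concordance W for k sets of rankings
        of N objects (no tie correction in this sketch)."""
        ranks = np.asarray(rankings, dtype=float)   # shape (k, N)
        k, n = ranks.shape
        rank_sums = ranks.sum(axis=0)
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12.0 * s / (k ** 2 * (n ** 3 - n))

    # Hypothetical example: k = 3 judges each ranking N = 5 objects.
    print(kendalls_w([[1, 2, 3, 4, 5],
                      [2, 1, 3, 5, 4],
                      [1, 3, 2, 4, 5]]))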
* PARTIAL CORRELATION ANALYSIS
This technique is used to assess the relationships between two variables
when another variable's relationship with the initial two has been held
constant or "partialed out."
Popham, W. James, Educational Statistics, Use and Interpretation. New York:
Harper & Row, Publishers, 1967.
Richmond, Samuel B., Statistical Analysis. New York: The Ronald Press
Company, 1964.
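The first-order partial correlation follows a simple formula on the three
zero-order correlations; a sketch with assumed values:
    from math import sqrt

    def partial_r(r_xy, r_xz, r_yz):
        """Partial correlation of X and Y with Z held constant."""
        return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

    # Hypothetical zero-order correlations.
    print(round(partial_r(r_xy=0.60, r_xz=0.40, r_yz=0.50), 3))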
* MULTIPLE CORRELATION ANALYSIS
It is possible to use this program to determine the extent of the
relationship between one variable and a combination of two or more other
variables considered simultaneously.
Popham, W. James, Educational Statistics, Use and Interpretation. New York:
Harper & Row, Publishers, 1967.
Richmond, Samuel B., Statistical Analysis. New York: The Ronald Press
Company, 1964.
Snedecor, George W., Statistical Methods., 4th Ed., Ames, Iowa: The Iowa
State College Press, 1946.
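For the two-predictor case the multiple correlation can be computed from
the three zero-order correlations; the values below are assumed:
    from math import sqrt

    def multiple_r(r_y1, r_y2, r_12):
        """Multiple correlation of Y with predictors 1 and 2 combined."""
        num = r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12
        return sqrt(num / (1 - r_12 ** 2))

    # Hypothetical zero-order correlations.
    print(round(multiple_r(r_y1=0.55, r_y2=0.45, r_12=0.30), 3))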
* DETERMINE THE DIFFERENCE BETWEEN TWO CORRELATIONS
This will determine if the correlation coefficient computed for one
sample is significantly different from the correlation coefficient computed
for a second sample.
Richmond, Samuel B., Statistical Analysis. New York: The Ronald Press
Company, 1964.
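The comparison is usually made through Fisher's r-to-z transformation; a
hedged sketch with invented sample correlations and sizes:
    from math import atanh, sqrt
    from scipy.stats import norm

    def correlation_difference(r1, n1, r2, n2):
        """Fisher r-to-z test of the difference between two independent
        sample correlations; returns (z, two-sided p)."""
        z = (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
        return z, 2 * norm.sf(abs(z))

    print(correlation_difference(r1=0.62, n1=40, r2=0.35, n2=50))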
* COVARIANCE WITH ONE VARIABLE
* COVARIANCE WITH TWO VARIABLES
In a single-classification analysis of covariance model there is one
dependent variable, one independent variable and at least one control
variable. There may be several control variables which can be employed if
the researcher feels that they are strongly related to the dependent
variable in the study. This design can statistically compensate for
differences between the independent variable groups with respect to the
control variables.
Edwards, Allen L., Experimental Design in Psychological Research. New York:
Holt, Rinehart and Winston, Inc., 1968.
Popham, W. James, Educational Statistics, Use and Interpretation. New York:
Harper & Row, Publishers, 1967.
Snedecor, George W., Statistical Methods., 4th Ed., Ames, Iowa: The Iowa
State College Press, 1946.
* ETA TEST FOR USE AFTER ONE-WAY ANALYSIS OF VARIANCE
The eta test performed after a one-way ANOVA can tell how much
(percentage) of the variance was accounted for by the conditions of the
test.
Linton, Marigold, and Gallo, Philip S., The Practical Statistician.
Monterey, California: Brooks/Cole Publishing Co., 1975.
McNemar, Quinn, Psychological Statistics. New York: John Wiley & Sons,
Inc., 1949.
* ETA TEST FOR USE AFTER RANK-SUMS OR KRUSKAL TEST
This test is similar to the previous one in that it can also tell how
much (percentage) of the variance was due to the conditions of the test.
It is used in one of two forms after either the Rank-sums or Kruskal-Wallis
test.
Linton, Marigold, and Philip S. Gallo, The Practical Statistician.
Monterey, California: Brooks/Cole Publishing Co., 1975.
McNemar, Quinn, Psychological Statistics. New York: John Wiley & Sons,
Inc., 1949.
* CONTINGENCY COEFFICIENT FOR USE AFTER CHI SQUARE ANALYSIS
The contingency coefficient is a measure of the degree of association
or correlation which exists between variables for which we have only
categorical information. It is included as part of some of the Chi Square
analysis but it can be run directly from this program.
McNemar, Quinn, Psychological Statistics. New York: John Wiley & Sons,
Inc., 1949.
* DETERMINATION OF MEAN AND STANDARD DEVIATION OF GROUPED DATA
* DETERMINATION OF MEAN AND STANDARD DEVIATION OF UNGROUPED DATA
* COMBINING THE MEANS AND STANDARD DEVIATIONS OF TWO GROUPS
These three techniques are very useful when you have raw data which
must be analyzed before it is entered into other tests. The standard
deviations calculated in the first two tests will show both the population
standard deviation and the sample standard deviation.
The third test can be used to combine any number of groups with known
means and standard deviations into one over-all group.
Many texts give the basic calculation methods.
Edwards, Allen L., Experimental Design in Psychological Research. New York:
Holt, Rinehart and Winston, Inc., 1968.
Richmond, Samuel B., Statistical Analysis. New York: The Ronald Press
Company, 1964.
Snedecor, George W., Statistical Methods., 4th Ed., Ames, Iowa: The Iowa
State College Press, 1946.
Weinberg, George H., and John A. Schumaker, Statistics, An Intuitive
Approach. Belmont, California: Wadsworth Publishing Co., Inc., 1965.
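As a rough outside check, the ungrouped statistics and the combination of
two groups can be written as follows (all numbers invented; the combining
formula assumes population standard deviations):
    import numpy as np

    # Hypothetical ungrouped raw data.
    scores = np.array([12, 15, 11, 18, 14, 16, 13, 17], dtype=float)
    print("mean", scores.mean())
    print("population sd", scores.std(ddof=0))
    print("sample sd", scores.std(ddof=1))

    def combine_groups(n1, m1, sd1, n2, m2, sd2):
        """Combine two groups with known sizes, means and (population)
        standard deviations into one over-all mean and standard deviation."""
        n = n1 + n2
        mean = (n1 * m1 + n2 * m2) / n
        ss = (n1 * (sd1 ** 2 + (m1 - mean) ** 2)
              + n2 * (sd2 ** 2 + (m2 - mean) ** 2))
        return mean, (ss / n) ** 0.5

    print(combine_groups(10, 50.0, 5.0, 15, 60.0, 6.0))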
* ZM TEST
This compares a random sample of one or more measurements with a large
parent group whose mean and standard deviation are known. It is useful if
you have a small sample, as small as one individual, and want to determine
if it came from a population about which you know both the mean and standard
deviation.
Langley, Russell, Practical Statistics Simply Explained. New York: Dover
Publications, Inc., 1970
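The ZM comparison is an ordinary one-sample Z; a sketch with assumed values:
    from math import sqrt
    from scipy.stats import norm

    # Hypothetical case: a sample of n = 4 measurements compared with a
    # parent population whose mean and standard deviation are known.
    sample_mean, n = 104.5, 4
    pop_mean, pop_sd = 100.0, 8.0

    z = (sample_mean - pop_mean) / (pop_sd / sqrt(n))
    print(f"Z = {z:.3f}, two-sided p = {2 * norm.sf(abs(z)):.4f}")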
* ZI TEST
This test is essentially an adaptation of the ZM test for use with
numbers of instances instead of measurements.
The test allows for comparing a sample of isolated occurrences and an
average, for comparing two samples of isolated occurrences with each other,
or for comparing a binomial sample and a large parent group.
Langley, Russell, Practical Statistics Simply Explained. New York: Dover
Publications, Inc., 1970
* DETERMINING THE SIGNIFICANT DIFFERENCE BETWEEN TWO LARGE GROUPS
* DETERMINING THE SIGNIFICANT DIFFERENCE BETWEEN TWO SMALL GROUPS
* DETERMINING THE SIGNIFICANT DIFFERENCE BETWEEN TWO PROPORTIONS
* STUDENT'S t-TEST
* SIGNIFICANT DIFFERENCE BETWEEN A SAMPLE AND A POPULATION USING PROPORTIONS
This group of tests allows for the determination of significant
differences between two groups. The calculation methods are listed in
several texts. They should allow you to handle all situations involving the
comparison of groups.
The first two require that you know the mean and standard deviation of
both groups along with the number in each group. You can calculate this
data by using choice 1 of menu number four.
When determining the significant difference between proportions you
need only to know the number in the sample and the number or proportion
within the sample which are under consideration.
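For the proportions case, a hedged sketch of the usual pooled two-proportion
Z test with invented counts:
    from math import sqrt
    from scipy.stats import norm

    # Hypothetical counts: x of interest out of n in each of two samples.
    x1, n1 = 45, 200
    x2, n2 = 30, 180

    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    print(f"z = {z:.3f}, two-sided p = {2 * norm.sf(abs(z)):.4f}")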
* DETERMINING THE PROPER SAMPLE SIZE TO USE
This test is described in many books. In order to estimate the proper
sample size to use it is important that you estimate the PROBABLE standard
deviation involved in the population from which you intend to take the
sample. One way is to take a small pilot sample, calculate the mean and
standard deviation and then using those numbers estimate the population
standard deviation.
The program offers various options. The first option will determine
the sample size for a large population without replacement; the second
option takes into account the finite population factor if you are sampling
from a small population.
You can also use proportions and the last two options offer you the
chance to estimate from a large population or a small population.
One further option is provided. If you use the options to estimate the
proportion of the population the program also calculates the worst case
situation. When you enter the estimated error or estimated answer you will
also be shown the worst case situation which is based on 50%.
Levin, Richard I., Statistics for Management, Englewood Cliffs, N.J.:
Prentice-Hall, Inc., 1978.
McElroy, Elam E., Applied Business Statistics, 2nd Ed., San Francisco,
California: Holden-Day, Inc., 1979.
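The standard planning formulas, including the worst-case 50% proportion and
a finite-population adjustment, can be sketched as follows (all planning
values assumed):
    from math import ceil
    from scipy.stats import norm

    confidence = 0.95
    z = norm.ppf(1 - (1 - confidence) / 2)

    # Sample size for estimating a mean to within +/- error units, given a
    # PROBABLE population standard deviation (e.g. from a pilot sample).
    sigma, error = 12.0, 2.0
    n_mean = ceil((z * sigma / error) ** 2)

    # Sample size for estimating a proportion; p = 0.5 is the worst case.
    p, error_p = 0.5, 0.05
    n_prop = ceil(z ** 2 * p * (1 - p) / error_p ** 2)

    # Finite population correction when sampling a small population of N.
    N = 500
    n_small = ceil(n_prop / (1 + (n_prop - 1) / N))

    print(n_mean, n_prop, n_small)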
* DETERMINING THE CONFIDENCE INTERVAL OF A POPULATION FROM A PROPORTION
This program finds the standard error of the proportion and then, given
the sample size and the proportion of the sample in which the investigator
is interested, calculates the upper and lower confidence limits.
You must indicate the significance level wanted. You must indicate if
the sample is being taken from a small population without replacement. If
this is the case various correction factors come into use.
After indicating if the population is small or large you enter the
number in the sample size and the number in the sample which is of interest.
This can be entered either as a proportion (by indicating a decimal point in
front of the number) or as the actual number in the sample.
Levin, Richard I., Statistics for Management, Englewood Cliffs, N.J.:
Prentice-Hall, Inc., 1978.
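A minimal sketch of the normal-approximation limits, with an invented
sample:
    from math import sqrt
    from scipy.stats import norm

    # Hypothetical sample: 120 of 400 individuals are of interest.
    n, successes = 400, 120
    p = successes / n
    confidence = 0.95
    z = norm.ppf(1 - (1 - confidence) / 2)

    # Standard error of the proportion and the confidence limits.
    se = sqrt(p * (1 - p) / n)
    print(f"{p - z * se:.4f} to {p + z * se:.4f}")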
* DETERMINING THE CONFIDENCE INTERVAL OF A POPULATION FROM A SAMPLE
There are a number of different estimates of a population which can be
made from information acquired from a sample.
The simplest estimate is called a POINT estimate. It is simply using
the sample mean as the best estimator of the population mean.
It is also possible to use the standard deviation of the sample to
estimate the standard error of the mean. This is done by dividing the
sample standard deviation by the square root of the number in the sample.
In some cases the finite population correction factor must be used.
An interval estimate describes a range of values within which a
population parameter is likely to lie.
In statistics, the probability that we associate with an interval
estimate is called the confidence level. It indicates how confident we are
that the interval estimate will include the population parameter.
The confidence interval is the range of the estimate we are making. It
is often expressed as standard errors rather than in numerical values.
This program will calculate the mean of a population along with the
confidence interval at whatever significance level you desire. It will also
handle both finite and infinite populations. You must enter the significance
level wanted, the mean, the standard deviation and the size of the sample.
If you enter a small number for the sample you will be reminded to
enter the significance value as a Student's t value rather than a Z value.
Levin, Richard I., Statistics for Management, Englewood Cliffs, N.J.:
Prentice-Hall, Inc., 1978.
McElroy, Elam E., Applied Business Statistics, 2nd Ed., San Francisco,
California: Holden-Day, Inc., 1979.
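A short sketch of the t-based interval for a small sample (numbers assumed):
    from math import sqrt
    from scipy.stats import t

    # Hypothetical small sample, so Student's t is used instead of Z.
    n, mean, sd = 15, 48.2, 6.5
    confidence = 0.95

    se = sd / sqrt(n)                       # standard error of the mean
    t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    print(f"{mean - t_crit * se:.3f} to {mean + t_crit * se:.3f}")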
* THE POISSON DISTRIBUTION
* THE NORMAL DISTRIBUTION
* THE CHI SQUARE DISTRIBUTION
* THE STUDENT'S t-TEST DISTRIBUTION
* THE F-DISTRIBUTION
* THE BINOMIAL DISTRIBUTION
Although tables are available for most of these distributions, these
programs allow you to determine significance values from the raw data. The
programs were adapted from the book listed below.
All limitations are included as part of the program and where the
values are not exactly precise, they are on the conservative side.
Poole, Lon, Mary Borchers, and David M. Castlewitz, Some Common Basic
Programs, Apple II Edition, Berkeley, California: Osborne/McGraw-Hill, 1981.
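For readers working outside the package, the same significance values can
be obtained from SciPy's distribution functions; the statistics below are
arbitrary examples:
    from scipy import stats

    # Upper-tail probabilities (significance values) for several of the
    # distributions listed above.
    print(stats.norm.sf(1.96) * 2)          # normal, two-sided
    print(stats.t.sf(2.10, df=18) * 2)      # Student's t, two-sided
    print(stats.chi2.sf(7.81, df=3))        # Chi Square
    print(stats.f.sf(3.49, dfn=2, dfd=20))  # F distribution
    print(stats.binom.sf(14, n=20, p=0.5))  # binomial, P(X > 14)
    print(stats.poisson.sf(9, mu=5))        # Poisson, P(X > 9)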
BIBLIOGRAPHY
Bernstein, Allen L., A Handbook of Statistics Solutions for the Behavioral
Sciences. New York: Holt, Rinehart and Winston, Inc., 1964
Cicchetti, Dominic V. Extensions of multiple-range tests to interaction
tables in the analysis of variance: A rapid approximate solution.
Psychological Bulletin, 1972, 77, 405-408.
Edwards, Allen L., Experimental Design in Psychological Research. New York:
Holt, Rinehart and Winston, Inc., 1968.
Ehrenfeld, S., and S. Littauer, Introduction to Statistical Analysis,
3rd ed. New York: McGraw-Hill Book Co., 1964
Friedman, Milton, Journal of the American Statistical Association, 1937.
Friedman, Milton, A comparison of alternative tests of significance for the
problem of m rankings. Annals of Mathematical Statistics, 11, 1940.
Hamburg, Morris, Basic Statistics: A Modern Approach. New York: Harcourt
Brace Jovanovich, Inc., 1974.
Hewlett-Packard Company, HP-55 Statistics Programs, Cupertino, California
Hewlett-Packard Company, HP-67 Standard Pac, Cupertino, California
Kendall, M. G., Rank Correlation Methods. London: Griffin Press, 1948.
Kruskal, W. H. and W. A. Wallis, Use of ranks in one-criterion variance
analysis. Journal of the American Statistical Association, 47, 1952.
Langley, Russell, Practical Statistics Simply Explained. New York: Dover
Publications, Inc., 1970
Lapin, L. L., Statistics for Modern Business Decisions. New York: Harcourt
Brace Jovanovich, Inc., 1973.
Levin, Richard I., Statistics for Management, Englewood Cliffs, N.J.:
Prentice-Hall, Inc., 1978.
Linton, Marigold, and Philip S. Gallo, The Practical Statistician.
Monterey, California: Brooks/Cole Publishing Co., 1975.
McElroy, Elam E., Applied Business Statistics, 2nd Ed. San Francisco,
California: Holden-Day, Inc., 1979.
McNemar, Quinn, Psychological Statistics. New York: John Wiley & Sons,
Inc., 1949.
McNemar, Quinn, Psychological Statistics, 2nd Ed. New York: John Wiley &
Sons Inc., 1955.
Mood, A. M., Introduction to the Theory of Statistics. New York: McGraw-
Hill Book Company, 1950.
Poole, Lon, Mary Borchers, and David M. Castlewitz, Some Common Basic
Programs, Apple II Edition, Berkeley, California: Osborne/McGraw-Hill,
1981.
Popham, W. James, Educational Statistics, Use and Interpretation. New York:
Harper & Row, Publishers, 1967.
Richmond, Samuel B., Statistical Analysis. New York: The Ronald Press
Company, 1964.
Shao, Stephen P., Statistics for Business and Economics. Columbus, Ohio:
Charles E. Merrill Books, Inc., 1967.
Siegel, Sidney, Nonparametric Statistics for the Behavioral Sciences. New
York: McGraw-Hill Book Company, 1956.
Snedecor, George W., Statistical Methods, 4th Ed., Ames, Iowa: The Iowa
State College Press, 1946.
Spearman, C., American Journal of Psychology, 15:88, 1904.
Texas Instruments, Calculating Better Decisions, 1977.
Weinberg, George H., and John A. Schumaker, Statistics, An Intuitive
Approach. Belmont, California: Wadsworth Publishing Co., Inc., 1965.
Wilcoxon, F., Individual comparisons by ranking methods. Biometrics
Bulletin, 1, 1945.
Wilcoxon, F., Probability tables for individual comparisons by ranking
methods. Biometrics Bulletin, 3, 1947.
MODERN MICROCOMPUTERS
63 SUDBURY LANE
WESTBURY, N.Y. 11590
(516) 333-9178
Programmer: Dr. Robert C. Knodt
Suggested Donation: $15.00
REGISTER YOUR COPY TODAY.