Basic Statistical Tools
Wilcoxon Rank Sum Test and Mann-Whitney Test 
- Nonparametric test based on two independent, simple random samples (no assumption that the samples are normally distributed with identical variances)
- Determines whether two statistical populations of continuous values are identical to or different from one another
- Ranks: pool both samples and rank 1 through (n_A + n_B); add the ranks of sample A together and the ranks of sample B together
- Mann-Whitney test (aka the U test): U is the difference between the largest possible value of W and the actual value, where W is the sum of sample A's ranks
- Given H_0: the samples' populations are identical, the sampling distribution of W is known, so an extreme rank sum is evidence against H_0
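As a minimal sketch, the rank sum W and the U statistic described above might be computed like this (the function name and data are hypothetical; tied values receive their average rank):

```python
def rank_sum(sample_a, sample_b):
    """Return (W, U): sample A's rank sum and the Mann-Whitney U."""
    pooled = sorted(sample_a + sample_b)
    # Assign average ranks to ties (ranks are 1-based).
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    W = sum(ranks[x] for x in sample_a)
    n_a, n_b = len(sample_a), len(sample_b)
    # Largest possible W: sample A occupies the top n_A ranks.
    W_max = sum(range(n_b + 1, n_a + n_b + 1))
    return W, W_max - W
```

For a full test with p-values, a library routine such as scipy.stats.mannwhitneyu would normally be used instead.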

Kendall's Tau 
- A nonparametric test/measure of agreement between two rankings
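One standard form of tau (assuming no ties) is the number of concordant pairs minus discordant pairs, divided by the total number of pairs n(n-1)/2. A sketch with a hypothetical helper name:

```python
def kendall_tau(x, y):
    """Kendall's tau for two rankings of equal length (no ties assumed)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # Same sign in both rankings -> concordant; opposite -> discordant.
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Identical rankings give tau = 1; fully reversed rankings give tau = -1.
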
Kruskal-Wallis Test 
- Nonparametric test based on independent, simple random samples (no assumption that the samples are normally distributed with identical variances)
- Determines whether more than two statistical populations of continuous values are identical to or different from one another
- Observations are pooled and ranked; a rank sum is calculated for each original sample
- Extension of the Wilcoxon rank-sum test to more than two samples
- (Side note: the Kolmogorov-Smirnov one-sample test is an alternative to the chi-square test for goodness of fit, for a fully specified continuous distribution)
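The rank sums above feed the Kruskal-Wallis H statistic, H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), where R_i is the rank sum of sample i. A sketch (hypothetical function name; ties get average ranks, and no tie correction is applied):

```python
def kruskal_h(*samples):
    """Kruskal-Wallis H for two or more samples (no tie correction)."""
    pooled = sorted(v for s in samples for v in s)
    # Average ranks for tied values (1-based ranks).
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2
        i = j
    n_total = len(pooled)
    # sum over samples of (rank sum)^2 / sample size
    term = sum(sum(rank[v] for v in s) ** 2 / len(s) for s in samples)
    return 12 / (n_total * (n_total + 1)) * term - 3 * (n_total + 1)
```

Under H_0, H is compared against a chi-square distribution with (number of samples - 1) degrees of freedom.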

Spearman Rank Correlation Coefficient 
- Nonparametric test based on two independent, simple random samples (no assumption that the samples are normally distributed with identical variances)
- Measures the degree of association between two variables for which only rank-order data are available
- X_i is the rank of an element with respect to one variable; Y_i is the rank of that element with respect to the other variable; d_i = X_i - Y_i, and r_s = 1 - 6*sum(d_i^2) / (n(n^2 - 1))
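The d_i formula above can be sketched directly (hypothetical function name; no ties assumed):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via r_s = 1 - 6*sum(d_i^2)/(n(n^2-1))."""
    def ranks(v):
        # 1-based rank of each element (assumes no ties).
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, idx in enumerate(order, start=1):
            r[idx] = pos
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

With ties present, the usual approach is instead to compute the Pearson coefficient on the (average) ranks.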

Pearson Correlation Coefficient 
- NOT nonparametric (if run using ranks, it becomes the Spearman rank correlation coefficient)
- Capable of detecting LINEAR associations between variables
- r^2 gives the percentage of variation in Y that is explained by X
- Do not confuse correlation with causation
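As a sketch, the sample correlation r is the covariance of X and Y divided by the product of their standard deviations (hypothetical function name):

```python
def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Squaring the result gives r^2, the fraction of variation in Y explained by a linear fit on X.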

Fisher's Exact Test 
- Nonparametric alternative to the chi-square test
- More information about Fisher's Exact Test and other standard statistical methods can be found at SISA. This wonderful site has plenty of information and interactive demonstrations.
- The matrix is altered by decrementing the smallest entry (in this case, 8) and changing the remaining entries to preserve the row and column sums. For each iteration, the p(i) value is calculated as r1! r2! c1! c2! / (n! a11! a12! a21! a22!). In this case, p(8) = 30! 20! 18! 32! / (50! 10! 20! 8! 12!), p(7) = 30! 20! 18! 32! / (50! 11! 19! 7! 13!), and so forth. The right tail is the sum of the p(i) from i = 0 to the original value of the smallest entry. The two-tail is the sum of small p's. {Note: there exists some controversy concerning the use of "sum of small p's" to find the two-tail sum.} In this case, the right tail is the sum of p(0) through p(8). The two-tail is the sum of all p(i) that are <= p(8) = 0.20964159217. This computes the probability of getting a matrix of this strength or stronger, all things being equal. We would "expect" all four entries to be equal; think in terms of flipping a coin twice.
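The factorial formula above is equivalent to a hypergeometric probability, p = C(r1, a11) * C(r2, c1 - a11) / C(n, c1), which avoids enormous factorials. A sketch under that rewriting (hypothetical function name; the right tail here enumerates the top-left cell upward, matching the worked example where decrementing the 8 increments the 10):

```python
from math import comb

def fisher_exact_2x2(a11, a12, a21, a22):
    """Return (p_observed, right_tail, two_tail) for a 2x2 table."""
    r1, r2 = a11 + a12, a21 + a22   # row sums
    c1 = a11 + a21                  # first column sum
    n = r1 + r2

    def p_of(x):
        # Probability the top-left cell equals x, with all margins fixed.
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    lo, hi = max(0, c1 - r2), min(r1, c1)  # admissible top-left values
    p_obs = p_of(a11)
    right_tail = sum(p_of(x) for x in range(a11, hi + 1))
    # "Sum of small p's" convention for the two-tail (see note above).
    two_tail = sum(p_of(x) for x in range(lo, hi + 1)
                   if p_of(x) <= p_obs + 1e-12)
    return p_obs, right_tail, two_tail
```

For the table in the example (10, 20 / 8, 12), p_obs matches the quoted p(8). In practice a library routine such as scipy.stats.fisher_exact would normally be used.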

