
Physics 567

Robust Measures of Correlation: Spearman's ρ and Kendall's τ

Spearman's ρ and Kendall's τ statistics are the bread and butter of robust correlation testing (see the lecture notes). Both are nonparametric tests. Both begin by replacing the data (x,y) with ranks (R,S). The choice of which statistic to use depends on several factors. Here are the bare essentials, from my perspective:

I tend to use Kendall's τ unless it becomes too computationally burdensome, in which case I switch to Spearman's ρ without a second thought. For a detailed analysis of the differing merits of Kendall and Spearman, see Xu et al. (2012).
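
For concreteness, here is a minimal Octave sketch of both statistics in use. The data x and y are made up for illustration; rho and tau come from the spearman and kendall functions discussed below.

  N = 8;
  x = randn (1, N);
  y = x + 0.5*randn (1, N);   % correlated data, invented for illustration

  % The ranks R of x, computed explicitly to show the first step
  % (spearman and kendall do this internally):
  [~, order] = sort (x);
  R(order) = 1:N;

  rho = spearman (x, y)       % Spearman's rho
  tau = kendall (x, y)        % Kendall's tau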

Statistical Significance

The traditional way of expressing statistical significance is with a P-value, often denoted α, which is defined as the probability, under the null hypothesis, that a statistic would be at least as extreme as the value actually observed. We use 1-α as the confidence. In the case of correlations, which are distributed symmetrically about zero and thus might be either significantly positive or negative, we will use a two-sided confidence. In other words, α is defined as the probability, under the hypothesis of independent data, that the absolute value of the correlation statistic exceeds the observed value.
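
In Octave terms, if tau_obs is the measured statistic and tau_null is a vector of values drawn under the null hypothesis (both hypothetical variables here; the Monte Carlo codes below show how to generate the latter), the two-sided P-value is simply:

  % tau_obs and tau_null are assumed inputs, as described above.
  alpha = mean (abs (tau_null) >= abs (tau_obs));  % two-sided P-value
  confidence = 1 - alpha;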

Side Sermon: The Prosecutor's Fallacy

Let's think for just a moment what this means. In traditional confidence testing, we start by articulating a null hypothesis H. We then ask, under H, what is the probability of obtaining data D? Perhaps that's a little hard to answer, since D is a complicated object containing lots of information. So instead we use D to calculate a statistic S that we suppose to be sensitive to the truth or falsehood of H. The usual P-value is then defined as

α = P(S | H).

A small value of α is typically taken as grounds to reject H. We must not, however, equate α with the posterior probability, P(H | S). Consider Bayes' Theorem, which follows from the interchangeability of arguments in the joint probability distribution:

P(H, S) = P(S, H) = P(H | S) P(S) = P(S | H) P(H).

The posterior probability is therefore:

P(H | S) = α P(H) / P(S).

In view of this complication, how meaningful is the typical frequentist assertion that the null hypothesis, H, may be "rejected" with "confidence" equal to 1 - α? What we must realize is that P(H) and P(S) are "prior" probabilities. That is, they encode knowledge available to us before the experiment was performed. For example, if we have pre-existing evidence favoring the null hypothesis, then it would require proportionately stronger evidence to counter it. If instead we are beginning from a tabula rasa, we have no prior knowledge. The best we can say is something like P(H) = P(S) = 1/2. We could apply this reasoning to the first or best study on a particular question. In summary, traditional confidence levels assess each experiment on its own terms.
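
To make this concrete, here is a made-up numerical example. Suppose α = P(S | H) = 0.01, but prior evidence favors the null, say P(H) = 0.9, and the statistic would be triggered by a real effect with probability P(S | ¬H) = 0.5. By total probability,

  P(S) = P(S | H) P(H) + P(S | ¬H) (1 - P(H)) = 0.01 × 0.9 + 0.5 × 0.1 = 0.059,

so the posterior is

  P(H | S) = α P(H) / P(S) = 0.009 / 0.059 ≈ 0.15.

Despite an apparently decisive α of 1%, the null hypothesis retains roughly a 15% posterior probability.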

For a detailed discussion, see the Wikipedia article on the Prosecutor's Fallacy.

Should I publish a novel result with low α that conflicts, at least apparently, with strong prior evidence? I think so, provided that I have carefully reviewed and documented my experimental procedure, the data, the analysis, and any assumptions made along the way. If possible, I should try to replicate the result or have it replicated elsewhere. What I cannot do is jump to the conclusion that a single experiment demolishes everything that has gone before. The result may, however, raise questions that prove productive in the long term. If I and my colleagues routinely refrain from reporting or discussing results that are contrary to received wisdom, yet significant on their own terms, then we run the risk of an ossified scientific enterprise that is immune to discovery.

P-values for ρ & τ

Look at GNU Octave's help files for functions spearman and kendall. These functions do not offer built-in significance testing. We could gripe and complain, but this is free software.

How may we calculate P-values for the Spearman and Kendall statistics? For large N and no repeated values, τ is approximately Gaussian under the null hypothesis, with variance στ² = 2(2N+5)/(9N(N-1)). The Monte Carlo codes below check this approximation directly by simulating the null distributions.
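
As a sketch of how that approximation yields a two-sided P-value (x and y are assumed data vectors, without ties):

  % Two-sided P-value for Kendall's tau via the large-N Gaussian
  % approximation. Valid only when there are no repeated values.
  N = numel (x);
  sigma_tau = sqrt (2*(2*N + 5) / (9*N*(N - 1)));  % null standard deviation
  z = kendall (x, y) / sigma_tau;
  alpha = erfc (abs (z) / sqrt (2))   % two-sided P-value; confidence = 1 - alpha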

Monte Carlo results:

Code           Results               Comments
pval.m         pval_results/
pval_ties.m    pval_ties_results/    Ties in both x and y
pval_ties1.m   pval_ties1_results/   Ties in x only
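
The heart of each of these codes is a loop of the following general form (a minimal sketch only; the sample size, trial count, and Gaussian parent distribution are illustrative assumptions, and the real experiments are in the .m files above):

  % Monte Carlo null distribution of Kendall's tau for independent data.
  N = 20;                        % sample size (assumed for illustration)
  ntrials = 10000;               % number of independent trials
  tau = zeros (ntrials, 1);
  for k = 1:ntrials
    x = randn (N, 1);
    y = randn (N, 1);            % independent, so H holds by construction
    tau(k) = kendall (x, y);
  end
  thresh99 = quantile (abs (tau), 0.99)  % two-sided 99% confidence threshold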

A rude surprise. When repeated values are encountered, the confidence thresholds for τ deviate markedly from those implied by the στ² formula, even for large N. Surprisingly, kendall performs even worse with ties than spearman does. Looking back at the help files, I realize that Octave's primitive kendall does not correct for repeated values!
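
The standard remedy is Kendall's τb, which rescales the statistic by the number of tied pairs. Here is a minimal sketch of a tie-corrected version (my own illustration, not Octave's kendall; the names kendall_tau_b and tiepairs are invented):

  function tb = kendall_tau_b (x, y)
    % Kendall's tau-b: tau with a correction for repeated values.
    n = numel (x);
    C = 0;  D = 0;                % concordant and discordant pair counts
    for i = 1:n-1
      for j = i+1:n
        s = sign (x(j) - x(i)) * sign (y(j) - y(i));
        C += (s > 0);
        D += (s < 0);
      end
    end
    n0 = n*(n - 1)/2;             % total number of pairs
    n1 = tiepairs (x);            % pairs tied in x
    n2 = tiepairs (y);            % pairs tied in y
    tb = (C - D) / sqrt ((n0 - n1)*(n0 - n2));
  end

  function np = tiepairs (v)
    % Number of pairs tied within vector v.
    np = 0;
    for u = unique (v)(:)'        % loop over distinct values
      t = sum (v == u);
      np += t*(t - 1)/2;
    end
  end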

Conclusion

Octave's implementation of robust correlation measures needs:
  1. Significance testing for both spearman and kendall.
  2. An implementation of kendall that tests and compensates for tied values.

Page maintained by Charles Kankelborg