Inference for correlations

I’m currently updating my course materials, aimed at undergraduate students in psychology, for the next academic year. Since the textbook lacks a (thorough) description of how to do inference (hypothesis testing and confidence interval construction) for the product-moment correlation coefficient, I’ve written something myself.

It might be useful for someone else who feels that the textbooks aimed at social science students lack this information (and that the textbooks aimed at mathematics students are too technical for other students), so I’ve put a copy here. Feel free to use it.
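For readers who just want the R incantation (this is not part of the handout, just a minimal illustration with made-up data), cor.test() gives both the t-test of H0: ρ = 0 and the confidence interval based on Fisher’s z:

```r
# Minimal illustration with made-up data: hours studied and exam grade for 20 students
set.seed(123)
hours <- rnorm(20, mean = 10, sd = 3)
grade <- 5 + 0.3 * hours + rnorm(20, sd = 1)

# t-test of H0: rho = 0 plus a 95% confidence interval (based on Fisher's z)
cor.test(hours, grade, conf.level = 0.95)
```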

Comment on “Why you should use omega² instead of eta²”

In a new blog post, Daniël Lakens explains why using ω² is better than using η². Based on a literature review and his own simulations, he shows convincingly that the bias of η² is much larger than that of ε² and ω². Or, in Daniël’s words: “Here’s how bad it is: If η² was a flight from New York to Amsterdam, you would end up in Berlin”.

I agree with Daniël that the flight doesn’t take you to Amsterdam, but things are less severe than he claims, as I will outline below. My post is a follow-up to his, so please read his post before you read mine.

Daniël clearly shows that η² disqualifies itself as an estimator in terms of bias. However, bias is only part of the story. Obviously you want the bias to be small (or, ideally, zero, i.e. an unbiased estimator), but our wishes are not one-dimensional. You also want a stable estimator, i.e. an estimator with a small variance. And in that category, η² actually performs best of the three estimators that Daniël studied.

I ran Daniël’s R code (available at the bottom of his post; I’ve set nsim = 10000 for practical purposes: I’ve got to finish work before the kids get out of school) and the variance of ε² is about 1.5% (when n = 100) to 17% (when n = 10) larger than that of η². For ω² these variance ratios range from 1.1% up to 13.4%.
(You can check this yourself by re-running Daniël’s code and then running “SDmat[,2]^2/SDmat[,1]^2” and “SDmat[,3]^2/SDmat[,1]^2”.)
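Spelled out (this assumes Daniël’s script has been run, so that SDmat is in your workspace, and that its columns hold the standard deviations of η², ε² and ω² in that order):

```r
# Variance of epsilon² and omega² relative to eta², per simulated sample size
ratio_eps   <- SDmat[, 2]^2 / SDmat[, 1]^2
ratio_omega <- SDmat[, 3]^2 / SDmat[, 1]^2

# Expressed as the percentage by which their variance exceeds that of eta²
100 * (ratio_eps - 1)
100 * (ratio_omega - 1)
```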

There is always a trade-off between bias and variance. It’s easy to make an estimator with zero variance. Let’s make one now: casper² is defined as always being equal to 0.2. Always. Clearly, casper² has zero variance, but it will usually have a large bias (unless the true effect size actually is 0.2, but we don’t know that value; otherwise we wouldn’t have to estimate it). It might not have been a smart move to name this poor estimator after myself, which is why I’ll redefine it as TimHunt². That’ll teach him!
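A toy illustration of the point (the “true” effect size of 0.06 is made up for the example):

```r
# A constant 'estimator' has zero variance but (usually) a large bias
true_es <- 0.06                 # assumed true effect size (made up for this example)
casper2 <- rep(0.2, 10000)      # casper² ignores the data and always returns 0.2
var(casper2)                    # 0: perfectly stable
mean(casper2) - true_es         # 0.14: hopelessly biased
```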

The conventional way to deal with this trade-off is to compute the Mean Squared Error (MSE). The MSE is defined as the mean of the squared differences between the estimates and the true value, and it can be decomposed as MSE = variance + bias². Because the MSE lives on a squared scale, we often report its square root instead, conveniently called the root mean squared error (RMSE).
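For a vector of simulated estimates the decomposition is easy to verify; here is a small helper of my own (not Daniël’s code), applied to the casper² example above:

```r
# Verify MSE = variance + bias² for a vector of simulated estimates
mse_decomposition <- function(estimates, true_value) {
  bias     <- mean(estimates) - true_value
  # population variance (divide by n, not n - 1), so the identity holds exactly
  variance <- mean((estimates - mean(estimates))^2)
  mse      <- mean((estimates - true_value)^2)
  c(mse = mse, variance_plus_bias_squared = variance + bias^2, rmse = sqrt(mse))
}

# For the zero-variance casper²: MSE = 0 + 0.14² = 0.0196, RMSE = 0.14
mse_decomposition(rep(0.2, 10000), true_value = 0.06)
```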

If you look at the RMSE (which is easy; Daniël’s code already computes it for you, in the variable RMSEmat), you see that ε² and ω² both have lower RMSEs than η², but that the difference is close to negligible. (Credits for the visualisation go to Daniël; I’ve used his code and simply replaced “BIASmat” by “RMSEmat”.)
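If you want to draw the picture yourself, something along these lines should work. I’m assuming here that RMSEmat has one row per simulated sample size and one column per estimator (η², ε², ω²); “n_values” is a placeholder name for the vector of sample sizes used in the simulation:

```r
# Sketch: RMSE of the three estimators as a function of sample size
# (RMSEmat: one row per sample size, columns eta², epsilon², omega²;
#  n_values: the simulated sample sizes -- a placeholder name)
matplot(n_values, RMSEmat, type = "l", lty = 1, col = 1:3,
        xlab = "Sample size", ylab = "RMSE")
legend("topright", legend = c("eta²", "epsilon²", "omega²"),
       col = 1:3, lty = 1)
```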

Comparison of the RMSE

When n = 10, for instance, RMSE(η²) = 0.122, RMSE(ε²) = 0.112 and RMSE(ω²) = 0.110. When n = 100, the values are 0.0316, 0.0311 and 0.0310, respectively (with some uncertainty due to the fairly low number of replications). To take it back to the New York to Amsterdam flight comparison: now you don’t end up in Berlin anymore, but at Groningen International Airport, which is, according to the airport’s website, “conveniently close”.

To summarise: η² does indeed perform worse than ω² and ε², but the difference in performance is not as extreme as Daniël suggests. The poor behaviour of η² in terms of bias is almost completely compensated by its good behaviour in terms of variance. This holds especially when n is larger than, say, 25.

Another often-mentioned advantage of η² is that it is easier to compute than ω². However, we no longer live in an era in which we do our computations manually: decent software (such as R or JASP) computes ω² for you at the press of a button. Furthermore, ease of computation should never be an argument: if you want to do easy things, don’t do science…
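To illustrate (not Daniël’s code, just a sketch using a built-in data set and the usual one-way formula ω² = (SS_effect − df_effect × MS_error) / (SS_total + MS_error)):

```r
# Sketch: omega² for a one-way ANOVA, straight from the summary table
fit <- aov(weight ~ group, data = PlantGrowth)   # built-in example data set
tab <- summary(fit)[[1]]
ss_effect <- tab[1, "Sum Sq"]    # row 1: the group effect
df_effect <- tab[1, "Df"]
ms_error  <- tab[2, "Mean Sq"]   # row 2: residuals
ss_total  <- sum(tab[, "Sum Sq"])
(ss_effect - df_effect * ms_error) / (ss_total + ms_error)
```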