## Abstract

Dankel, SJ and Loenneke, JP. Effect sizes for paired data should use the change score variability rather than the pre-test variability. *J Strength Cond Res* XX(X): 000-000, 2018.

Effect sizes provide a universal statistic detailing the magnitude of an effect while removing the influence of the sample size. Effect sizes and statistical tests are closely related: the effect size expresses the magnitude of an effect in SD units, whereas the test statistic expresses it in SE units. Avoiding statistical jargon, we illustrate why effect size calculations on paired data in the sports and exercise science literature are repeatedly performed incorrectly, using the variability of the study sample rather than the variability of the actual intervention. Statistics and worked examples are provided to show where these calculations go wrong. The effect size computed on paired data agrees with the result of the test statistic only when it is made relative to the variability of the intervention (i.e., the change score SD), because this is what the test statistic itself uses. Effect sizes for paired data should therefore be calculated relative to the SD of the change scores, which conveys the information of the statistical test while removing the influence of the sample size; after all, we are interested in how variable the response to the intervention is, not how variable the sample population is. In pre-test/post-test designs, the effect size should be calculated as the mean change score divided by the SD of the change scores.
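The relationship the abstract describes can be made concrete with a small numerical sketch. The pre/post values below are invented for illustration (they are not data from the paper); the sketch computes the effect size both ways and shows that the change-score version is just the paired t statistic rescaled from SE units to SD units (d = t / √n):

```python
import math

# Hypothetical pre/post scores for n = 8 participants (illustrative only).
pre  = [100.0, 102.0, 98.0, 101.0, 99.0, 103.0, 97.0, 100.0]
post = [103.0, 104.0, 101.0, 103.0, 102.0, 106.0, 99.0, 103.0]

n = len(pre)
change = [b - a for a, b in zip(pre, post)]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """Sample SD (n - 1 denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

mean_change = mean(change)

# Recommended for paired designs: scale the mean change by the variability
# of the intervention itself (the SD of the change scores).
d_change = mean_change / sd(change)

# Common but (per the authors) inappropriate alternative: scale by the
# variability of the sample at pre-test.
d_pretest = mean_change / sd(pre)

# The paired t statistic uses the same change-score SD, only in SE units,
# so d_change and t differ by exactly a factor of sqrt(n).
t_stat = mean_change / (sd(change) / math.sqrt(n))
assert abs(d_change - t_stat / math.sqrt(n)) < 1e-12

print(f"d (change-score SD): {d_change:.3f}")
print(f"d (pre-test SD):     {d_pretest:.3f}")
```

Because the pre-test SD reflects between-subject spread rather than the consistency of the response, the two versions can differ substantially; only the change-score version tracks the test statistic independent of sample size.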

Original language | English (US)
---|---
Journal | Journal of Strength and Conditioning Research
DOIs |
State | E-pub ahead of print - Oct 24 2018