Synopsis: This is the most sciencey post I’ve written; it’s not heavy, but it’s enough that my trusty advisor recommended an overview.
The acute:chronic workload ratio (ACWR) is a very popular athlete monitoring method. However, it has recently come under criticism, with some researchers calling for it to be dismissed. I have used the ACWR for all my PhD research, as well as part of my daily practice in the club. It’s been a big learning curve for me to take a step back from what I thought I knew and re-assess the facts. In this post I will present the evidence both for and against the method as fairly as I can, as well as share my opinion and the lessons I’ve learnt. I have the greatest respect for the researchers on either side of the argument, and both sides have taught me how to be a better practitioner and researcher. Both using the ACWR and then critically reviewing it have revolutionised my practice. Working my brain through this blog post has made me certain of where I stand… for the moment.
If you work in team sports, you will have, or definitely should have, heard about the ACWR. For the last few years it has probably been the most widely adopted and researched method of athlete monitoring. If you are up to date with your reading, you will know that it has recently received some pretty heavy criticism. In this post I’m going to present the evidence for and against the method, then share where I currently stand, and why.
Not to discredit any of the many researchers in this field (myself included), but there are basically two main players in this emerging feud: the guy who brought the ACWR concept to all our attention and promotes it globally, and the guy who publicly condemns it across all social and academic platforms. They are the closest thing sport science research has had to a Tory/Labour divide, making their respective ‘arguments’ a hot topic of discussion.
Having used the ACWR method throughout my PhD, I have read over these arguments with growing interest. This surge of opposing research to such a popular concept has torn a FOR or AGAINST divide amongst practitioners. Much like politics, there is no clear ‘black or white’ answer. Both sides are tinted by the confirmation biases of those at the head of each party, leaving a shade of grey that is open to our interpretation.
My worry is that people are choosing a ‘side’ for the wrong reasons, such as fear of change, or fear of being left behind, rather than for the right reason, which is surely “Does it work for your athletes?”
In this post, I will present both arguments, as well as which camp I currently reside in, and why.
“Be a free thinker and don’t accept everything you hear as truth. Be critical and evaluate what you believe in.” – Aristotle
The Background: Way back in the 70s, a group of researchers (Banister and colleagues) created a model to estimate an athlete’s performance. After a few failed attempts, they determined that performance could be calculated as the difference between fitness (+ve) and fatigue (-ve). Both fitness and fatigue decay once the stimulus stops and recovery begins, except fitness doesn’t ‘disappear’ as quickly as fatigue. So, in theory, repeated exposure to the optimum stimulus (“workload”) results in increases in fitness and performance improvements… as long as there is enough recovery for fatigue to subside.
The ACWR is based on this concept. The method typically models the work done in one week (acute workload) relative to the work done in the previous four weeks (chronic workload). The acute workload, being the most transient and short-lived, represents Banister’s ‘fatigue’, whilst the chronic workload represents ‘fitness’. If ‘fitness’ is greater than ‘fatigue’, the athlete is considered well prepared for performance. However, if ‘fatigue’ is greater than ‘fitness’, the athlete could suffer performance decrements, as well as an increase in injury risk. So, the ACWR was launched as a ratio that indicates an athlete’s level of preparedness.
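As a rough illustration, the basic calculation can be sketched in a few lines of Python. The four-week window and the example numbers are mine, not from any particular study:

```python
# Minimal sketch of a rolling-average ACWR, assuming weekly workload totals
# (e.g. sRPE units or metres of high-speed running). Illustrative only.

def acwr(weekly_loads):
    """Return the ACWR for the most recent week.

    weekly_loads: weekly workload totals, most recent last.
    Acute load = the latest week; chronic load = the mean of the most
    recent four weeks (a 'coupled' ratio, since the acute week is
    included in the chronic average).
    """
    if len(weekly_loads) < 4:
        raise ValueError("need at least 4 weeks of data")
    acute = weekly_loads[-1]
    chronic = sum(weekly_loads[-4:]) / 4
    return acute / chronic

# Three steady weeks then a spike: the ratio flags the jump.
print(acwr([2000, 2000, 2000, 3500]))  # ≈ 1.47
```

A ratio near 1.0 means the athlete did roughly what they have been prepared for; well above 1.0 flags a spike.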
Despite the Banister model estimating performance, the initial ACWR research focussed on the relationships between the ratio and injury risk. Presumably this is because individual performance within a team sport (where the original FOR guy worked) is often subjective, whilst injury provides a much more objective dependent variable.
The FOR party: The first few papers published across 2014-2016 were mostly in rugby, cricket and Australian football. The general findings were: 1. ‘Spikes’ in the acute workload above the chronic workload were indicative of a significantly heightened injury risk. 2. When the acute workload was high, a higher chronic workload was associated with a smaller injury risk than a lower one. Really basically, if you do way more than what you’ve done before, you are more likely to get injured. But if you have built up your ‘fitness’ over time, you are more robust to higher workloads, and therefore less likely to get injured.
Pretty common sense, right? That is why it appealed to everyone; such a simple concept, but the ACWR allowed you to put numbers and objectivity to it. It is not that the findings of these studies taught us something about conditioning and workload prescription that we didn’t already know. Any decent practitioner or coach aims to prescribe the optimal workload which enhances physical capacity (fitness) without unduly increasing the risk of injury. What the ACWR did do was allow us to ask, with numbers: how much is too much? What ratio causes a significant increase in injury risk? How high does the chronic workload need to be in order to be ‘protective’ against workload spikes? Those numbers allowed us to prescribe with more certainty.
I loved it, I love numbers that have a purpose in reality. I ran my first study on the ACWR and injury risk in youth football in 2015 and the second one in senior football in 2018. I wasn’t the only one; by 2019 papers were published across a wide range of sports, using various tools to measure workload, mainly GPS and sRPE. Different methods were used to determine the ACWR such as rolling averages, exponentially weighted averages and coupling or uncoupling of the acute and chronic workloads. Different time frames were used, 1:2 weeks, 1:3 weeks etc.
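The two calculation variants mentioned above (rolling average and exponentially weighted average) can be sketched side by side. The EWMA decay constant λ = 2/(N+1) follows one common convention, but definitions vary between papers, so treat this as an illustrative sketch rather than any one study’s method:

```python
# Two common ACWR variants: a 7:28-day rolling average and an
# exponentially weighted moving average (EWMA). Illustrative only;
# published implementations differ in detail.

def ewma(daily_loads, span):
    """Exponentially weighted average with lambda = 2 / (span + 1)."""
    lam = 2 / (span + 1)
    value = daily_loads[0]
    for load in daily_loads[1:]:
        value = load * lam + value * (1 - lam)
    return value

def acwr_rolling(daily_loads):
    acute = sum(daily_loads[-7:]) / 7      # last 7 days
    chronic = sum(daily_loads[-28:]) / 28  # last 28 days
    return acute / chronic

def acwr_ewma(daily_loads):
    return ewma(daily_loads, 7) / ewma(daily_loads, 28)

daily = [300] * 21 + [500] * 7  # three steady weeks, then a heavy week
print(acwr_rolling(daily))      # 500 / 350 ≈ 1.43
```

The EWMA version weights recent days more heavily, so it reacts faster to spikes than the flat rolling average.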
In almost all studies, regardless of the population, the consensus was the same – acute spikes heighten injury risk whilst higher chronic loads appear to reduce this risk. Over a small number of years, this monitoring method which started out with one research group in Australia, had spread across the world, to all major team sports.
This method, if it worked as everyone said it did, allowed practitioners to ‘safely’ prescribe higher workloads, which increased the physical capacity of the athletes (they got fitter), whilst minimising injury risk. Coaches, performance and medical staff were all happy.
The AGAINST party: Then the other guy called bullshit.
Flaws were highlighted in study designs, statistical approaches and theoretical frameworks (or the lack thereof). The against party called everyone out for being so quick to take up this method and spread it like wildfire, without critically examining or testing the scientific evidence, and therefore without strengthening it.
I’ll try and break down the main points made in opposition to the method:
Opinion > evidence – Criticism was given for the large number of editorials written on the ACWR in comparison to the small number of studies. Ultimately, the ACWR had gained popularity based on a number of well written, problem-solving, opinion pieces, as opposed to vast, sound, scientific evidence.
The first two studies published on the ACWR (the foundation) from the same research group had different results. One showed a relationship with injury in the current week, the other a relationship in the subsequent week. So the editorials that followed (prior to any other ACWR research) regarding training smarter and harder etc were already based on inconsistent results.
No hypothesis or theoretical framework – There are so many different ways of defining the ACWR and the workload categories – there is no framework and a lot of variety, meaning you can just pick out whichever method suits your data best. Basically you can create as many categories or ratios as you like, include or exclude data for various reasons and change how you calculate the ACWR to get the most appealing answers. You can fish for something worth showing.
The statistical approaches and study designs – As with most applied research in sport, the sample sizes were very low: a single team of 14-30 players. This means injury cases were also low (in terms of statistical power) and often occurred to the same players more than once within a sample.
This body of research also highlighted that sport scientists are rarely great statisticians. Most of the ACWR research used the ‘easy’ stats tests we learnt at uni – X² tests and logistic regressions. These are not sufficient to model and analyse time-varying exposures and outcomes (workload and injury risk change over time, so they qualify here). Time-to-event models are recommended (but rarely used), as they take into account within-athlete correlations, censoring (people dropping out), subsequent injuries within the same athlete and confounding factors that affect workload over time (e.g. athlete characteristics).
Lastly and most notably – he condemned the ratio itself – The final nail in the coffin, and the main divider of practitioners, was the recent publications stating that the ACWR itself doesn’t show anything. By dividing the numerator (acute) by the denominator (chronic), you are just rescaling the numerator. So if you divide the acute workload by a random chronic workload, you get a similar injury risk (odds ratio) to the one you get by dividing it by the actual chronic workload – not ideal.
Also, chronic workload is usually the average of four weeks of acute workload. By averaging data, you reduce the variance and therefore the standard deviation. The result is that, when you divide the acute workload by the chronic workload, you not only rescale it, but reduce the variation in this value as well (smooth it), which in turn decreases the p-values and increases the odds ratios. So you falsely magnify the relationship between ACWR and injury risk.
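The variance-shrinking effect of averaging is easy to demonstrate with a quick simulation. The workload numbers here are arbitrary, chosen only to show the effect:

```python
# The smoothing critique in miniature: averaging four weeks of workload
# shrinks the spread of the chronic denominator. The mean of 4 values
# has roughly half the standard deviation of a single value (sd / sqrt(4)).
import random
import statistics

random.seed(1)
weekly = [random.gauss(2000, 400) for _ in range(200)]  # noisy weekly loads

# Chronic load = mean of each consecutive 4-week window
chronic = [statistics.mean(weekly[i:i + 4]) for i in range(len(weekly) - 3)]

print(statistics.stdev(weekly))   # raw weekly spread (~400)
print(statistics.stdev(chronic))  # roughly halved by the averaging
```

Less variance in the denominator means a smoother, seemingly better-behaved ratio, which is exactly the statistical artefact the against party objects to.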
Furthermore, ratios are dependent on the direction of comparison: if you increase the workload from 800 to 1000, that is a 25% increase, but if you decrease it from 1000 to 800, that is only a 20% decrease. The same absolute difference therefore carries more ‘weight’ when the chronic workload (the denominator) is the lower of the two values, inflating higher ACWRs.
In summary – The against party called for the ACWR to be dismissed and all previous studies to reconsider their findings.
At first I was gutted that the criticism of the ACWR was so strong, and that I felt I had almost blindly adopted it into my research and my practice. Then I realised this is much bigger than an ACWR debate; it highlights the huge chasm between true academics and applied practitioners.
Will I write a paper without the help of a statistician again? Probably not. Will I be more critical of new emerging methods? 100%. Have I learnt a lot about the expected standards of published research? Yes. Will I keep using the ACWR in practice? For the moment, yes. Here’s why…
It works for what we use it for – to prescribe workloads! For each player, we plot the acute workload and the chronic workload across a number of weeks (see graph below). That means, for every player, I can see how much ‘workload’ (defined by a number of key metrics) they have done over the last 4 weeks. I can then make recommendations for the current week that neither greatly exceed nor massively undercut the last four weeks – depending on whether it is a deload, maintenance or overload week.
The ratios (or percentages, as we use them) provide a guide for how much to overload or deload. I have learnt a lot about normalisation and scaling of data from the against party, which will, without a doubt, enhance my future research. But in practice I’m not looking for odds ratios or p-values. Whether it’s because of statistical artefacts or actual results, the guideline findings make sense in practice. If this week a player does 175% of what they did on average over the last 4 weeks, that’s a pretty huge step up – three-quarters of their usual week again on top. For example, if they’ve been doing 2000m of high-speed running a week on average, which is typical, and they suddenly do 3500m, the player may experience some fatigue, or at least not be conditioned for it, and therefore be at greater risk of injury. We should probably avoid this. If they do 120%, so 2400m – that’s a good overload. How do I know it’s a good overload? Experience, knowing the players, common sense and the stats.
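As a sketch of how the percentage guides prescription: given the last four weeks, suggest a band for the current week. The 80%/120% band here is an example range I’ve picked for illustration, not a published threshold:

```python
# Hypothetical prescription helper: turn the chronic (4-week average)
# load into a deload/overload band for the coming week. The 0.80 and
# 1.20 multipliers are illustrative, not evidence-based cut-offs.

def weekly_target(last_four_weeks, lower=0.80, upper=1.20):
    """Return (deload_floor, overload_cap) for the current week."""
    chronic = sum(last_four_weeks) / len(last_four_weeks)
    return chronic * lower, chronic * upper

# A player averaging 2000m of high-speed running per week:
low, high = weekly_target([2000, 2000, 2000, 2000])
print(low, high)  # 1600.0 2400.0 -> 2400m is the 120% overload from the text
```

The point is the direction of use: the numbers inform next week’s plan, rather than retrospectively estimating an injury probability.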
Using the ACWR in practice has contributed to our players training harder without an increase in injury risk. I wrote a chapter on it for my PhD, applying the findings of my previous research to practice – hopefully, with the help of a statistician and some sharper minds than my own, we can make it something worth publishing. It may not be the most rigorous scientific method in the world – I have definitely had my eyes opened to that – but it has informed our practice. Would I advocate using it as a stand-alone injury prevention tool? Absolutely not. Injury is multifactorial; workload prescription merely contributes a piece of a large, complex puzzle.
“Don’t bring me problems. Bring me solutions.” – Elie Tahari
What is the alternative? I’ve said where I stand at the moment. Does that mean I will forever use the ACWR for athlete monitoring? Not necessarily. At the moment, academics and researchers have done what we all should have done a long time ago, and critically reviewed applied practice. As of yet, what hasn’t been offered is a viable solution. I am excited for the day that happens. A solution that not only ticks all the research boxes and works perfectly in a database of numbers, but can also be applied in the fast, dynamic world of sport, where those numbers become people. A solution that can be understood and operated whilst carrying out all the other tasks we do, to give our athletes the best chance of performance success. That is where the clever people at academic institutions can really help us out.
References (not limited to these but mentioned in the post) –
Banister, E., Calvert, T., Savage, M., & Bach, A. (1975). A Systems Model of Training for Athletic Performance. Australian Journal of Sports Medicine.
Gabbett, T. (2016). The Training-Injury Prevention Paradox: Should Athletes be Training Smarter and Harder? British Journal of Sports Medicine, 50.
Hulin, B., Gabbett, T., Blanch, P., Chapman, P., & Bailey, D. O. (2014). Spikes in Acute Workload are Associated With Increased Injury Risk in Elite Cricket Fast Bowlers. British Journal of Sports Medicine, 48, 708-712.
Hulin, B., Gabbett, T., Lawson, D., Caputi, P., & Sampson, J. (2016). The Acute:Chronic Workload Ratio Predicts Injury: High Chronic Workload May Decrease Injury Risk in Elite Rugby League Players. British Journal of Sports Medicine, 50(4), 231-236.
Impellizzeri, F. M., Woodcock, S., Coutts, A. J., Fanchini, M., McCall, A., & Vigotsky, A. D. (2020). Acute to random workload ratio is ‘as’ associated with injury as acute to actual chronic workload ratio: time to dismiss ACWR and its components. SportRχiv. https://doi.org/10.31236/osf.io/e8kt4
Nielsen, R. O., Bertelsen, M. L., Ramskov, D., et al. (2019). Time-to-event analysis for sports injury research part 1: time-varying exposures. British Journal of Sports Medicine, 53, 61-68.