I hate to say it, but many of my fellow science-interested lifters who throw up #evidencebased in their posts are the perfect ammunition for the eye-rolling bros who view the whole concept of a scientific approach to lifting with disdain. The disdain these bros have often doesn’t actually come from science, but rather from the people who throw science in their faces online. You know the stereotype: the overthinking, spreadsheet-using, macro-tracking, PubMed ninja who always manages to point out logical fallacies, but never manages to make gains.
Here’s the thing: I wish it were always just a stereotype. But speaking from experience, it’s sometimes closer to the truth than I’d like to admit. A lot of us (and I say “us” because I have resembled elements of this stereotype at times) operate from a place of anxiety. The technical term is FOMOOG (fear of missing out on gains). Many who are drawn to science are drawn to its perceived definitive nature, because they have a poor relationship with uncertainty. Anecdotes aren’t always trustworthy, so scientific publications become the safety blanket. Less-than-stellar progress leads to micromanaging: tracking more variables, reading more studies, and incorporating more and more science because of FOMOOG.
One day you look up and you’ve recently started an autoregulated daily undulating program guided by both RIR and your new velocity tracker. You’re doing PAP top sets before your back-off sets on main lifts, BFR on your single-joint accessories, myo-reps on your compound accessories, autoregulated deloads dictated by a combined scale averaging your session RPEs, HRV scores from your new Fitbit, and 1-10 soreness ratings, and you’ve got a fast-decay exponential taper planned for the end of the block when you want to test AMRAPs and 1RMs. When you send your program to your friends, you have to include a page on your spreadsheet with definitions for all the acronyms. You also just recalculated your macros and started taking creatine, fish oil, beta alanine, citrulline malate, caffeine, vitamin D3, and a multivitamin, and you’re trying to eat more high-nitrate fruits and vegetables. Sheesh, you’ve barely had time to make fun of people for training with body part splits this week with all of the planning, reading, purchasing, and implementation! But it was well worth it; now you can cite studies for each of these decisions. You’ve truly earned that #evidencebased hashtag.
Obviously, this is a bit facetious on my part and a caricature not representative of any large segment of our community. But just barely. For some of us, if we’re honest, we can admit it’s also uncomfortably close to the truth of where we are, or at least where we have been. So, what’s the problem with the above? Hell, some of you might be thinking “Wait, I do a lot of that, are those things no longer evidence-based, should I not be doing those things?” Before you rush to open another window to see if a new meta-analysis came out disproving one or more of these strategies, just chill, and keep reading. The issue is not with the strategies, but with how they are implemented.
Ask yourself this, why aren’t there any studies where all, or even more than a few of these strategies are studied simultaneously? The savvy reader probably realizes that for a study to be valid, it has to control for confounding variables, and scientists typically make only one variable different between groups in order to isolate its effect. That’s the scientific method in a nutshell: change only one thing, observe what happens. That’s also what some of the bros who think they have a disdain for science do. The bros frustrated by the caricature above, who make gains or coach successfully by focusing on “what works”, are using an inherently scientific approach without realizing it. Their plans might be simple, and might not be filled with science, but they are absolutely using the scientific method. Complexity is not the same as being scientific, and more often than not, complexity serves to obfuscate rather than optimize.
As a coach, I learned from being a scientist. Scientists design studies to answer a question and isolate the outcome. When a good coach troubleshoots, they do the same. A bad coach, on the other hand, throws the kitchen sink of all the possibly beneficial strategies at a problem simultaneously. Even if doing so works, they don’t know why. It’s not reproducible, and they didn’t become better at solving a specific problem. They didn’t update the pattern-recognition algorithm we call “experience” by finding a specific solution to a problem that might be of use in the future. The bad coach in this example is really not a coach at all, but a collection of knowledge. They lack skill, which is developed through experience, and their approach isn’t generating that experience.
There is another layer to this discussion, beyond just “change one thing at a time”. Earlier I said that many who are drawn to science are drawn to its perceived definitive nature. If you can identify that in yourself, it’s important to always hold close what science is at its core. Truly, science is about embracing uncertainty. The statistics used to infer study outcomes are embedded with probability, not certainty. Science is constantly self-correcting and evolving by design, and it can only do so by being appropriately uncertain about its findings. If we applied inappropriately high levels of certainty to findings that are actually still uncertain, there would be no impetus to refine the field. Science is a process of being continually less wrong, not a process of finding truths.
This is an easy notion to nod your head at, but a difficult thing to truly internalize. Have you ever struggled to let go of one of the many scientific safety blankets in your training or nutrition that you held dear, once data came out showing it didn’t work, or didn’t work as well as you once thought? I have. I thought high-protein diets would have a large effect on lean mass retention while dieting, that RPE would have a large effect on strength gains when used to autoregulate load, and that refeeds and diet breaks would have a large effect on body composition and on mitigating the effects of low energy availability. Right now, the data lean towards a small effect at best in each case. But with that said, these are emerging fields. It’s also possible that more data might show those effect sizes are actually medium, neither small nor large.
That’s not a comfortable mindset for someone who craves certainty, and it takes continued vigilance even for scientists to retain it. It’s even harder for those who put their ego on the line fighting about science on social media, or who promote a certain method, or worse, sell a certain method that science used to support.
Having been someone who’s lost a safety blanket or two, my first reaction isn’t a zen-like “the data are what the data are…ohmmmm”. My first reaction, every time, is a little bit of denial, anger, and frustration. It doesn’t last long, and I don’t often act on it, but the desire to throw my toys is absolutely there. But if I were to explode in a fit of resentment and lash out at science when my safety blankets turned out to be less effective than I previously thought, that would be the same as someone saying “Science? Ha! They don’t get anything right, last week they said x, this week they say y! I’m sticking with common sense!”
If you really want to be scientific, not only do you have to use the scientific method in your training versus as many scientific methods in your training as possible, but you also have to understand the very nature of science as a process of embracing uncertainty, rather than a safety blanket against it.
Awesome read brother.
The True Adonis says
Eric, the longer I train (20 years straight), the more I realize that pretty much everything works as long as you put effort in.
I have tried everything under the sun and guess what I am about to go back to doing. The BRO-Split of one or two muscle groups a day each week.
Because it works and I like it.
Why exactly does training/coaching need to resemble a scientific experiment? Seems like an unfounded conclusion is being snuck in here: science and the scientific method = good, therefore good training must look like an embodiment of the scientific method.
Do we need to only change one variable at a time all the way back to rediscovering foundational principles, like overload?
Imagine entering a cycling race and re-inventing the wheel.
Eric Helms says
Benjamin, Happy Holidays, thank you for reading my article and also thank you for your comment and questions. Let me see if I can be of help in my response.
First, the conclusion that the scientific method could be useful when making training changes wasn’t something I “snuck in”, it’s the main point of the article! Training doesn’t need to (and shouldn’t) resemble a scientific experiment (however based on your questions you might have an inaccurate understanding of what scientific experiments resemble, which I’ll cover). However, I do recommend making isolated changes to your training when you have the time to do so in order to better infer whether the change produced a result, instead of changing multiple things and not knowing what caused the outcome. That said, there are times when changing multiple variables simultaneously might be a good idea. For one example, it might be a good idea when you have an important, time restricted goal where you are willing to sacrifice diagnostic clarity (linking a change with an outcome) for a better chance of success because you don’t have the time to change one thing at a time and sequentially connect dots.
To answer your second question “Do we need to only change one variable at a time all the way back to rediscovering foundational principles, like overload?” no offence, but of course not! And I didn’t suggest doing so in this blog. Just like your comment about re-inventing the wheel to enter a cycling race is intended to be completely ridiculous, to me, so too is the idea of ignoring centuries of resistance training knowledge, like the concept of overload which stems from the legend of Milo of Croton in the 6th century BC.
I think your first question might hint at the source of your confusion: “does training/coaching need to resemble a scientific experiment?” it sounds like you might not understand how scientific studies are conducted? Ignoring first principles isn’t something I or other scientists do when conducting scientific experiments on resistance training. In applied studies we don’t re-invent exercise equipment, we use barbells, dumbbells and cables and we start with a volume, frequency, load, perceived effort, rest period, and range of motion (etc.) previously established in the last 70+ years of modern exercise science. Then, we change something else to compare interventions. Since re-inventing the proverbial wheel isn’t what happens when actual scientific experiments are conducted, that certainly isn’t how real world training should be conducted either.
To clarify, this article is about how to make changes, not about where to start. We have a decade plus of content about where to start, based on the foundational principles. To understand where to start, you have to not only know what the foundational principles are, but also what aspects of exercise science are less established and more established, and then what guidelines emerge from them, and the various ways to apply those guidelines based on your goals, and individual needs/situation. As you can see in the decade of content I and 3DMJ have produced, we start at a point that includes those foundational principles and we also recommend a few less foundational concepts that likely have utility for most lifters.
If you’d like to learn about this, I’d start with the Muscle and Strength Pyramid Training and Nutrition Books (or the old YouTube videos which are free). These are delivered in a hierarchical nature of the most foundational, most impactful principles, to the least, and the guidelines that stem from them. You might have missed that the examples in the “evidence based lifter” caricature included strategies that emerged in recent times, like RPE, PAP, VBT, and not foundational principles like overload.
Hopefully this clarifies the article and answers your questions, once again, thanks for reading, commenting, and happy holidays!
Hi Eric, Happy Holidays – I wasn’t expecting a reply (I rather inappropriately made the comment in a provocative/throw-away YouTube comment style without much thought) and am embarrassed and ashamed now that you have. You, the article, and this platform deserve better!
Firstly, it was of course ridiculous to suggest one might end up “rediscovering foundational principles, like overload”, but I think it’s not entirely ridiculous to worry that with an approach like this (which is very similar to Mike T’s Emerging Strategies) something like peaking, or periodization, might end up having to be reinvented.
Regarding my accusation of an unfounded conclusion being ‘snuck in’ – admittedly what I was meaning wasn’t very clear – and perhaps not even to myself. I do however think there is some equivocation and confusion in your article that does lead to something resembling an unfounded conclusion! You make a convincing case for why only one variable should be changed when troubleshooting/as a diagnostic approach for when problems are encountered, in terms of knowing what worked and developing skill as a coach – and I have no argument with that. However, you do also have other targets of criticism which I think your argument doesn’t stretch to, but which you lump together:
a) Complexity: “Complexity is not the same as being scientific, and more often than not, complexity serves to obfuscate rather than optimize.” Complexity as having “as many scientific methods in your training as possible”. Complexity presented in opposition to the plans of bros which “might be simple” and “focus[…] on ‘what works’”. These comments are joined by the identification of the FOMOOG motivation held by the ‘evidence based caricature’, which seemed to a large degree to be a description of training program complexity in general. Already existing complexity isn’t the same thing as complexity introduced in the context of a problem (i.e., when troubleshooting), e.g. changing more than one thing at once. As you’ve outlined in your reply, we always begin from a basis of previously established science. If you begin training using barbells, dumbbells, reps, sets, overload, RIR, and peaking, you are already beginning from a context of complexity. While we may be able to more-or-less confidently identify the change of a single variable as the solution to a problem or to a lack of progress, progress itself is surely always the product of a complex set of variables. There doesn’t seem to be anything gained from ‘simplicity’ itself or from as few ‘scientific methods’ as possible.
b) Complexity that involves week to week changes in variables. Some of the examples in the ‘evidence based caricature’ are more specifically examples of variables changing on a micro-cycle to micro-cycle basis. These follow on from a similar caricature you constructed on the SBS podcast episode on MASS recently: “I increased the number of sets every week, I also advanced my RPE, and I also have a linear progression of reps and load so my reps are going down and my RPE is going up and my sets are going up”. The podcast also featured the scientific method/s quip. The claim seems to be that it is better to “sacrifice theoretical optimality for diagnostic clarity”, that the stable, focusing on ‘what works’ approach allows you to more accurately attribute progress to a set state of variables. However, fatigue masks fitness adaptations during a mesocycle, and the timeframe for progress in advanced trainees is likely to stretch beyond a few weeks. Both are reasons to not only be sceptical of the need for week to week stability but actually wary of being guided by the supposed diagnostic clarity it brings: one’s training could easily trend toward optimising for the development or expression of strength rather than hypertrophy. It’s therefore only at the level of the mesocycle that we can attribute progress (or troubleshoot lack of progress), i.e. when a sufficient length of time has passed and any accumulated fatigue has been removed through a deload. At the mesocycle level, programs with moving variables (across the meso) can be assessed just as easily as stable programs – simply by averaging set volumes, RIR, etc. Keeping all variables stable, and presumably attempting to limit the accumulation of fatigue, might sacrifice training quality and long-term progress for a chimera of instantaneous diagnostic clarity.
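Benjamin’s point that a program with moving variables can still be summarized at the mesocycle level “simply by averaging set volumes, RIR, etc.” can be sketched in a few lines. The numbers and structure below are hypothetical, purely for illustration:

```python
# Hedged sketch: summarizing a mesocycle by averaging weekly prescriptions,
# so a "moving" program can be compared against a stable one on equal terms.
# The example numbers are invented, not from any program discussed.

def meso_summary(weeks):
    """Average weekly set count and RIR across a mesocycle (list of dicts)."""
    n = len(weeks)
    return {
        "avg_sets": round(sum(w["sets"] for w in weeks) / n, 1),
        "avg_rir": round(sum(w["rir"] for w in weeks) / n, 1),
    }

# A meso where sets ramp up while RIR ramps down, vs. a stable meso:
moving = [{"sets": 10, "rir": 3}, {"sets": 12, "rir": 2},
          {"sets": 14, "rir": 1}, {"sets": 16, "rir": 0}]
stable = [{"sets": 13, "rir": 1.5}] * 4

print(meso_summary(moving))  # {'avg_sets': 13.0, 'avg_rir': 1.5}
print(meso_summary(stable))  # {'avg_sets': 13.0, 'avg_rir': 1.5}
```

On average the two programs prescribe the same dose, which is exactly why they can be assessed against each other at the end of the meso, once fatigue has been removed by a deload.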
Eric Helms says
Benjamin, I really appreciate your well thought out reply, no need to feel ashamed or embarrassed, but I do appreciate your acknowledgement!
This is a fantastic clarification, and you bring up a lot of good points.
a) I agree with everything you said here, and this article could have had a clearer target for my critiques. I was coming at this from the angle of when making changes, versus where to start. However, my caricature wasn’t obviously described in such a way to make that clear. Like you said, and like I said in my initial response to your first comment, we absolutely do have to start somewhere, and it’s not fair nor accurate to describe a system that has a lot of previously established scientific principles and methods in it as needlessly complex or even suboptimal. That absolutely wasn’t my intention, and maybe I should have written this article to be clearer that the set of methods I described in my caricature are not inherently problematic, or even problematic when combined, but rather a problem if used as a shotgun approach to deal with a plateau. I totally agree there isn’t anything to be gained from simplicity for simplicity’s sake.
b) The SBS podcast episode and this blog are similar to some degree as I discuss the utility of the scientific method, but they are distinct. This article is about making changes; the SBS podcast was about training for hypertrophy. So, while I certainly appreciate your thoughts on what level of mesocycle complexity you think might be ideal in the case of an advanced lifter seeking hypertrophy, those points stand adjacent to this article. This article isn’t meant to be a companion piece to that discussion. Anywho, I’d be happy to dive in a bit on these comments, as I think I could better clarify the points I’m trying to make.
My claim was *almost* “that it is better to sacrifice theoretical optimality for diagnostic clarity”. To more accurately state my claim, it is: when presented with an uncertain possibility of theoretical optimality, at the cost of losing diagnostic clarity, I wouldn’t advise paying that price. That’s an important clarification in my mind.
Now, this is a philosophical perspective that has to be applied uniquely to any given situation. The individual has to assess what they are and aren’t certain about and what they do and don’t consider theoretical. It sounds like we evaluate certain training concepts as more or less certain or more or less theoretical than one another, which is fine and to be expected. However, even if I did share the exact same views on what I was certain about or considered more or less theoretical, I would still apply my claim, just to different variables (the ones I was not certain would be beneficial if they cost me diagnostic clarity).
This concept comes back to complexity, and like I agreed there is nothing to be gained from simplicity for simplicity’s sake, there is also nothing to be gained from complexity for complexity’s sake. Rather, in each case you have to weigh the potential cost vs benefit, i.e., when is added complexity likely beneficial? You gave a salient example of when you might need a more complex system to discern changes in an advanced lifter seeking hypertrophy, but I would still approach it philosophically with the same rubric.
Apologies if the last couple paragraphs were repetitive, but I think this is an important clarification on my part as it sounds like you think I’m chasing “instantaneous diagnostic clarity”, as that’s what you finished your comment with? I don’t know what “instantaneous” diagnostic clarity is specifically, and I don’t want to straw-man that wording. So, applying the principle of charity and steel-manning you, I think your meaning is perhaps “an inappropriate time scale for diagnostic clarity”? If so, I agree, and I’ll go back to my clarified claim: “when presented with an uncertain possibility of theoretical optimality, at the cost of losing diagnostic clarity, I wouldn’t advise paying that price.” This applies to the time scale as well. As you correctly pointed out, advanced lifters take longer to realize gains, as the magnitude of the gains is less, so you have to scale your system of diagnosis to your best guess of when you would expect to see gains.
Benjamin, thank you again for your acknowledgement of how the initial comment came across. Again, no need to be embarrassed or ashamed, but I do appreciate it. I also appreciate the time you took to expand in this follow up with some very good points. I do see how the blog could have been clearer and I appreciate that feedback!
Chris Dishno says
Literally one of the best, if not the best article I’ve read in the past couple of years on the subject of lifting. I’m a big believer in a science-based approach to training and still learning. Wonder if you have any recommendations on books and/or articles that beginners to a science-based approach like myself might gain more insight from? Thanks!
Eric Helms says
Chris, wow, thank you I’m honoured to hear that! And if you’re looking for books the Muscle and Strength Pyramids which I wrote with Andy Morgan and our very own Andrea Valdez here at 3DMJ are a good start! They are under our “Products” tab at the top of our page.
Diego Rascon says
Dr Helms, thank you for this article. I actually identify with a lot of the characteristics you’re referencing in this article, so this helps me see my flaws on the matter. I was wondering, what would be a sign of there being a need to change something? And when the change is made, how long should we observe before deciding if said change is making a significant difference? Thank you for your work.
Eric Helms says
You’re very welcome, and thank you!
Great question Diego, and generally a sign of when to change something is when you aren’t making progress. Your follow up question, of course, hints at how this is not necessarily always an obvious occurrence. It very much depends on the variable in question, and the individual. For example, fat loss can be observed in the order of weeks, strength gains on compound lower body exercises for novices and early stage intermediates can also be observed in the order of weeks. However, muscle gain in advanced natural lifters may take most of the year to be observable. This is a bit too broad to answer in a comment response, and not to shamelessly self promote, but I do have sections in my books where I go into detail on how to measure progress for fat loss, muscle gain, and strength gain at various stages of a lifter’s career if you’re interested.
Diego Rascon says
Of course I would love to read them! Thanks for the response!
Eric Helms says
You’re very welcome, and I’d be honoured! If you get them, thank you and enjoy!
That was entertaining. The only problem I have with science of late is that intensity is almost viewed as a negative. Oh crap you went to failure? You’re fuk’d dude. That’s where we’re at today. Sad.
Eric Helms says
Kerfluffle, glad it was entertaining! And it might be helpful to make a distinction in your mind between “science of late” and the people who talk about science online and the people who critique them. Most of the actual scientific studies I’m aware of on training to failure are specifically evaluating whether there is anything to be gained or anything to be lost, how close to failure you need to be, and whether in different contexts it has more or less potential to be helpful or hindering. Just remember, science doesn’t happen on YouTube 🙂
Mike H says
Hi Eric, I have a question on progressing with linear periodisation. On a compound movement, say 6-8 reps, on week one I achieve 8,8,8 with 50kg. On week 2 I only get 7,6,6 with 52.5kg. How would you set up week 3 after being unable to get 7,7,7?
Eric Helms says
Hey Mike! I’m assuming you have the Muscle and Strength Pyramid books? If so, check out the examples in there of what to do if you plateau. You also have to assess whether you have actually plateaued, which is possible, and whether you need to adjust something. However, you might have just started too heavy, been barely able to complete 3x8x50kg, and your rate of progression may be too aggressive.
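As an illustration only (not a rule from the books; the increment size and step-back logic are my assumptions), the double-progression decision Mike is describing could be sketched like this: only add load once every set reaches the top of the rep range, repeat the load while reps are inside the range, and step back if reps fall below it.

```python
# Hedged sketch of double progression for a 3-set exercise, 6-8 rep range.
# Rep range, 2.5 kg increment, and the step-back rule are assumptions.

def next_weeks_load(load_kg, reps_per_set, rep_range=(6, 8), increment_kg=2.5):
    """Return (load, note) for next week based on this week's performance."""
    low, high = rep_range
    if all(r >= high for r in reps_per_set):
        return load_kg + increment_kg, "all sets at top of range: add load"
    if any(r < low for r in reps_per_set):
        # Fell out of the rep range: the previous jump was likely too aggressive.
        return load_kg - increment_kg, "below range: step back and rebuild"
    return load_kg, "within range: repeat load and build reps"

# Mike's example: 8,8,8 @ 50 kg earned the jump; 7,6,6 @ 52.5 kg is still
# within 6-8, so week 3 would repeat 52.5 kg and chase 8,8,8 there.
print(next_weeks_load(50.0, [8, 8, 8]))  # (52.5, 'all sets at top of range: add load')
print(next_weeks_load(52.5, [7, 6, 6]))  # (52.5, 'within range: repeat load and build reps')
```

Under this sketch, 7,6,6 isn’t a plateau at all, just reps being rebuilt at the new load, which matches the point that the initial jump may simply have been aggressive.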
Ryan Cherry says
I have the Muscle and Strength Pyramids training book. I love it! It helped me understand small rocks versus big rocks. However, I’ve been struggling with clavicle issues for a while due to a previous AC joint injury. I can overhead press with dumbbells or a machine for high reps without much issue (which is not fun). However, if I have laterals and sufficient horizontal pressing and incline work and cable flyes, is vertical pressing necessary for hypertrophy? I know you list it as a core foundational movement. I’ve been training seriously for about 3-4 years (not doing a bro split). I train four days a week in an upper lower fashion due to time constraints.
Eric Helms says
Sounds like you’ve got plenty of anterior delt work from other pressing (including incline), so yeah probably gtg.
Ryan Cherry says