What It Takes to Win? — The Illusion of Precision (Part 2 of 3)
Let me start with a story where a “What It Takes to Win” number actually worked.
When I was coaching the amateur trade team Team KGF / Huub-Wattbike, racing under the tongue-in-cheek banner of the People’s Republic of Derbados, we set a very clear objective: win a Track World Cup (an Olympic qualification event) in the blue riband Team Pursuit.
At the time, that result would give us the exposure we needed to secure sponsorship for the following season. If not, we would be yesterday’s news and a laughing stock.
Looking at the results from the previous World Cup season, we noticed something simple: a time of roughly 3:57.0 in the Team Pursuit had consistently been enough to win. So we set 3:57.0 as our benchmark.
But the way we used that number was very different from how WITTW is often used today.
First, it was based on very recent competition data, not a forecast several years into the future.
The last amateurs: the People’s Republic of Derbados Team Pursuit squad I coached, who went on to win an Olympic-qualifying World Cup event.
Second, it was stretching but realistic for the riders we had. Our starting point was 4:04.0, a 3% gap, which doesn’t sound like much but is big in sport!
And most importantly, we did not treat it as a judgement.
The number was a reference point, not a verdict.
Training sessions weren’t labelled “on track” or “behind schedule”. If we rode faster, great. If we rode slower, it was simply information: we evaluated what had happened and how we could improve.
The benchmark helped us understand where we were relative to the level required — but it never defined success or failure on a given day.
That distinction matters.
Because what we were really doing wasn’t predicting the future.
We were anchoring our preparation to the current competitive level.
(spoiler: we did win a World Cup in that time and became the fifth fastest “nation” ever.)
The problem begins when we try to predict the future
In Part 1 of this series, I showed how different “evidence-based” models can produce very different predictions for the same event.
I used the Women’s Flying 200 m time trial in track cycling as an example. Using historical Olympic data, plausible predictions for the fastest qualifying time at the Paris 2024 Olympics ranged between 9.90 and 10.23 seconds.
The actual fastest time was 10.03 seconds.
Now extend the same thinking to the LA 2028 Olympics.
Using the same dataset, reasonable projections for the fastest qualifier range between 9.69 and 9.95 seconds.
The current benchmark is 10.03 seconds, set in Paris 2024. Depending on which projection you choose, the improvement required ranges from roughly:
0.8% faster (9.95 s)
3.4% faster (9.69 s)
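For transparency, the arithmetic behind those percentages is just the gap to the 10.03 s benchmark, expressed as a fraction of it. A quick sketch in Python, using the projections quoted above:

```python
# Improvement required relative to the Paris 2024 benchmark.
benchmark = 10.03  # fastest qualifying time, Paris 2024 (s)

for projection in (9.95, 9.83, 9.69):  # plausible LA 2028 forecasts (s)
    improvement = (benchmark - projection) / benchmark * 100
    print(f"{projection:.2f} s is {improvement:.1f}% faster than {benchmark:.2f} s")
```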
Let’s choose 9.83 seconds as a middle scenario and examine what an athlete would need to produce to achieve that time.
To estimate performance, sprint cycling models break the ride down into its primary resistive forces:
Aerodynamic drag (CdA)
Air density
Rolling resistance (Crr)
Drivetrain efficiency
Mass and inertia
These factors are then used to calculate the power required to ride the predicted time.
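To make that concrete, here is a minimal steady-state sketch of such a model. The structure (aerodynamic drag plus rolling resistance, divided by drivetrain efficiency) is standard physics, but every parameter value below is illustrative rather than measured, and the sketch deliberately leaves out the acceleration and inertia terms a full model would include:

```python
def required_power(v, cda, rho, crr, mass, eta):
    """Steady-state power (W) to hold speed v (m/s) on a flat track."""
    g = 9.81                            # gravitational acceleration (m/s^2)
    f_aero = 0.5 * rho * cda * v**2     # aerodynamic drag force (N)
    f_roll = crr * mass * g             # rolling-resistance force (N)
    return (f_aero + f_roll) * v / eta  # scale up for drivetrain losses

# 200 m in 9.83 s averages ~20.3 m/s (ignoring line choice and entry speed).
v = 200 / 9.83
print(f"{required_power(v, cda=0.18, rho=1.19, crr=0.0025, mass=85, eta=0.98):.0f} W")
```

Because the wind-up and inertia are left out, this steady-state figure sits well below the “effective race power” numbers a full model would quote; the point here is the structure, not the output.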
On paper, this feels precise.
But most of these variables are not measured perfectly in race conditions. They are estimated from wind-tunnel testing, track sessions, modelling assumptions and environmental forecasts.
Let’s be generous and assume each variable is 98% accurate, i.e. known to within about ±2%.
If the model estimates an athlete needs 1500 W of effective race power to ride 9.83 seconds, the true requirement could realistically sit somewhere between:
1440 W and 1560 W.
That’s already about ±4% uncertainty in the power estimate alone (five independent ±2% errors, combined in quadrature, come to roughly ±4.5%).
Now combine that with the uncertainty in the predicted time itself. Because aerodynamic power scales roughly with the cube of speed, the spread between plausible forecasts (9.95 to 9.69 seconds, i.e. 0.8% to 3.4% faster) already implies roughly 2–10% variation in required power.
Put together, a model suggesting an athlete needs roughly 6% more power (the middle 9.83 s scenario) might actually represent anything from almost no improvement to well over 10%.
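One way to see that compounding directly is a quick Monte Carlo sketch, reusing the hypothetical required_power() model above: jitter every input within the assumed ±2% band, sample the target time across the forecast range, and look at the spread of “requirements” that comes out:

```python
import random

# Reuses required_power() from the sketch above; values remain illustrative.
NOMINAL = dict(cda=0.18, rho=1.19, crr=0.0025, mass=85, eta=0.98)

def perturb(x, pct=0.02):
    """Sample uniformly within +/- pct of nominal (the '98% accurate' assumption)."""
    return x * random.uniform(1 - pct, 1 + pct)

powers = []
for _ in range(10_000):
    t = random.uniform(9.69, 9.95)  # spread of plausible forecast times (s)
    params = {k: perturb(val) for k, val in NOMINAL.items()}
    powers.append(required_power(200 / t, **params))

powers.sort()
# 5th and 95th percentiles of the 10,000 samples
print(f"90% of sampled requirements: {powers[500]:.0f}-{powers[9500]:.0f} W")
```

The exact numbers don’t matter. What matters is that a single “required power” target conceals a band of plausible requirements wide enough to swallow a season’s worth of training gains.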
Yet once a WITTW time is selected, it often becomes doctrine.
Training sessions are judged against it. Progress is measured against it. Athletes are labelled as “on track” or “behind”.
But the model itself already contains layers of estimation and compounded uncertainty.
So the real question becomes:
Are we actually reducing uncertainty — or simply hiding it behind precise-looking numbers?
WITTW often creates the illusion of precision.
The time looks exact. The power targets look scientific. The pathway looks engineered.
But underneath, the model is built on estimates of estimates.
In the final article of this series, I’ll explore a more useful question:
If “What It Takes to Win” is flawed, what should high-performance programmes do instead?
How do we plan for success in a system that is inherently uncertain?
— Mehdi

