Distinguishing Skill & Luck

Reading Time: 4 minutes

I’m just back from a sunny and relaxing summer break, where I had the great pleasure of reading and reflecting on the excellent book The Success Equation (“TSE”) by Michael Mauboussin, which sets as its goal “untangling skill and luck in business, sports and investing”. The clear aim of this “untangling” is the ability to make better decisions.

What did I like about the book, and what are the takeaways for our industry?

It’s a theme that interests me, and having read a few books in the area before, I found that the early chapters of TSE covered material relatively familiar from the likes of Fooled By Randomness, Risk Savvy and Thinking Fast and Slow. The main thrust is exposing the fact that our hard-wired pattern-recognition systems, which serve us well in many instances, can fail to generate the right insights or decisions under increasing levels of uncertainty. On a side note, I’ve yet to see a convincing explanation as to why evolution didn’t sort this out long ago (Gerd Gigerenzer comes closest in Risk Savvy, where he offers an explanation for our heightened fear of plane crashes vs car crashes).

It’s in the second section where Mauboussin really starts to come at things from a different angle, as he offers several great quantitative tools for attributing the role of luck, including blending “luck” and “skill” distributions, and the use of simple null tests to attribute actual variance to luck-driven variance. These tools are applied to several different sports (again, a helpful area where Mauboussin offers a different perspective to other books, drawing on concepts that will be familiar to fans of Moneyball). The benefit of sports data is that it provides a relatively complete dataset, free from many of the survivorship biases that haunt business and financial-market data. True, the sports models have their limits when applied in other areas, but the book is good at drawing out meaningful “so what” points:

– Placing an activity on the luck/skill spectrum helps us make better decisions

– If we fail to appreciate the role of luck in an activity, we may make bad decisions by overreacting to short-term feedback

– We should take a different approach to improving performance in luck-driven vs skill-driven activities: the latter calls for relentless practice, coaching and adjusting to feedback; the former calls for adherence to a process-driven approach, without overreacting to short-term successes and failures (taking care not to misapply De Moivre’s law).

– We need to be very careful indeed in areas where outcomes are governed by a power-law distribution, for example the cumulative-advantage dynamics that emerge when outcomes are influenced by social interactions (“likes” or views on social media being a great example). Here averages can be meaningless and prediction is even tougher

– The paradox of skill tells us that the more skilled the participants or competitors in an area become, the greater the role of luck
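The null-test idea mentioned above can be sketched in a few lines of Python. To be clear, this is my own minimal illustration, not the book’s code: the function name and the pure coin-flip null model are my assumptions. Under a pure-luck null every game is a 50/50 coin flip, so the variance of win percentages across a league would be 0.25 divided by the number of games; the gap between that and the observed variance is what gets attributed to skill.

```python
def luck_share(win_pcts, games_per_team):
    """Estimate the share of variance in team win percentages
    explained by luck alone.

    Under a pure-luck null model every game is a coin flip, so the
    variance of a team's win percentage is p*(1-p)/n = 0.25/n
    (binomial). Any observed variance beyond that is attributed
    to differences in skill.
    """
    n = len(win_pcts)
    mean = sum(win_pcts) / n
    var_observed = sum((w - mean) ** 2 for w in win_pcts) / n
    if var_observed == 0:
        return 1.0  # no spread at all: indistinguishable from pure luck
    var_luck = 0.25 / games_per_team
    return min(1.0, var_luck / var_observed)


# Illustrative league: two teams at 40% and 60% over 100 games each.
# Observed variance is 0.01, the coin-flip null predicts 0.0025,
# so luck accounts for about a quarter of the spread.
print(luck_share([0.4, 0.6], 100))  # ≈ 0.25
```

The real analysis in the book is richer than this (it blends full skill and luck distributions rather than assuming a 50/50 coin flip), but the decomposition of observed variance into a luck component and a skill component is the core of the trick.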

On a side point, Mauboussin gives one of the more intuitive explanations of correlation between two variables I’ve seen, and explains what this tells us about forecasting one variable based on another. He presents a nice rule of thumb relating correlation back to the James-Stein estimator and its shrinkage factor, which tells us how much we should “shrink” a short-run series of observations toward the population mean to estimate a player’s long-run average. Mauboussin quotes statisticians who estimate the shrinkage factor for a baseball batting average based on a small sample at 0.2, and the shrinkage factor for estimating a team’s win percentage based on a whole 162-game season at 0.7. In a neat extension, he quotes the result that the shrinkage factor for estimating a team’s win percentage in Major League Baseball (a game of c.30% luck) is equal to n/(74+n), where n is the number of games played.
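The n/(74+n) rule is easy to turn into a tiny estimator. This is just a sketch of the idea (the function names and the 0.500 league-mean default are my assumptions): an observed win percentage gets pulled toward the league mean, and the pull weakens as more games are played.

```python
def shrinkage_factor(n_games, c=74):
    """Shrinkage factor z = n/(c+n) for MLB win percentage,
    using the n/(74+n) rule quoted in the book."""
    return n_games / (c + n_games)


def estimate_true_win_pct(observed_pct, n_games, league_mean=0.5):
    """Shrink a short-run win percentage toward the league mean.

    estimate = mean + z * (observed - mean), where z = n/(74+n).
    With few games z is small and the estimate hugs the mean;
    with many games z approaches 1 and the observed record dominates.
    """
    z = shrinkage_factor(n_games)
    return league_mean + z * (observed_pct - league_mean)


# A team starting the season 15-5 (0.750 over 20 games):
# z = 20/94 ≈ 0.21, so the estimate is much closer to 0.500.
print(estimate_true_win_pct(0.75, 20))  # ≈ 0.553
```

Reassuringly, plugging in a full 162-game season gives z = 162/236 ≈ 0.69, right in line with the 0.7 figure Mauboussin quotes for a whole season.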

Again the sporting datasets help make the examples tangible, as we see how a player’s likely long-term batting average might be estimated from their short-run average and the average for all players. For different sports we see that we need a larger sample before we can confidently estimate a team’s likely long-term win record from their current streak. This depends on the relative contribution of luck and skill to the sport: basketball, with a relatively low luck contribution (12%), requires less track-record data to be confident of a team’s likely win ratio than baseball or football.

What are the takeaways for investing?

There are clearly drawn-out consequences for the investment industry – most notably and obviously when it comes to evaluating the track records of fund managers.

– We must admit luck plays a significant role in most investment track records, and population data is affected by survivorship bias (so far, so familiar – but so what?)

– What this means is that we should favor process-based approaches, and managers who can evidence sticking to a consistent process without overreacting to short-term gains or losses – especially ones that are not overconfident in the accuracy of their predictions, and that consider the counterfactual

– We should consider the role of alternative statistics beyond the pure performance track record, as better* statistics may exist. Examples include active share for fund managers, and EPS growth vs sales growth for individual stocks

– Beware of conclusions drawn from small data subsets, such as performance in particular market environments, of which there might not be many; outliers can easily be generated in small samples but are likely to be the result of noise

– Be wary of agency costs and constraints placed by organizational alignment (eg “asset gathering” or incentive alignment)

– Be cautious when considering strategies that expose themselves to “Black Swan” type risks, as you need an awful lot of data for the true distribution to make itself known

– Look for organisations and individuals that honestly and precisely measure how their actions turned out, and when they were wrong

* Mauboussin defines a good statistic as one that is both persistent (properties remain through time) and predictive (is correlated with the desired outcome)

I’d thoroughly recommend this book to anyone interested in the field of decision-making under uncertainty, and it fits well alongside other well-known material on the topic such as Fooled By Randomness, Risk Savvy and Thinking Fast and Slow.