The best book I read over the summer was Matthew Syed’s Black Box Thinking.
The central theme of the book is really fear of, and reaction to, failure.
We have an allergic aversion to failure. We try to avoid it, cover it up and airbrush it away. Cognitive dissonance is the name for the deeply rooted behavioural trait that causes us to naturally reject ideas, or even evidence, that conflict with our own worldview. In many cases this is incredibly damaging to progress.
There’s a huge need to learn from failure; it can be extremely helpful (Syed cites the example of the aviation industry, which learned from air disasters to vastly improve its safety record).
Readers of similarly themed books (eg the work of Charles Duhigg, Khoi Tu or even David Eagleman) will find many of the examples used by Syed a little tired and overdone by now. However, I found that Syed was able to extract sufficient new insight from these well-trodden case studies and weave them together with the central theme effectively.
Creating by experimenting is often more effective than creating by blueprint.
Cognitive dissonance / confirmation bias
There are some powerful behavioural-psychological forces at play that can be quite counter-productive to progress in today’s world. For example, the tendency to reframe when faced with evidence that we’re wrong divorces us from the pain of recognising that we were wrong. It isn’t even conscious when it happens.
Open vs closed loops.
Syed defines open loops as systems that operate on, and benefit from, feedback; for example, the aircraft black box and medical randomised controlled trials.
Closed loops do not systematically collect feedback and, more seriously, lack the mindset to confront, recognise and learn from failure. Closed-loop systems are dangerous because they block progress – whether that be progress in safety, improving care, innovation or surviving in the commercial world.
Evolution itself is the best example of learning from failure.
Narrative fallacy vs RCT
Narrative fallacies arise inevitably from our continuous need to make sense of the world around us. The explanatory stories that people find compelling are simple and concrete. But they often assign a greater role to talent/stupidity/intentions than to luck, and often rely on a few events that did happen rather than the countless that didn’t.
Stories are good, but beware the narrative fallacy (eg the Scared Straight programme) and statistical biases.
You need a counterfactual and a control group.
Marginal gains & feedback loop
Marginal gains is not about making small changes and hoping they fly. Rather it is about breaking down a big problem into small parts in order to rigorously establish what works and what doesn’t.
Break a performance into its component parts & you can build it back up with confidence > Brailsford
Some programmes are hard to run controlled trials for, eg aid to Africa. Break them down into component parts >> marginal gains
The existence of a local maximum reveals the inherent limitation of marginal gains. Sometimes you need a big leap forward to get past a local maximum. Need to do both marginal gains and big-picture thinking.
Contradictory information jars us psychologically. It nudges us into looking for unusual connections. Innovation comes from making new connections between familiar things.
Find a hidden connection to solve a problem. Failure and epiphany are linked. Brilliant ideas can emerge from engagement with a problem for months or years.
Innovation is context dependent – a response to a particular problem at a particular time & place.
Big picture & small picture. Innovation + discipline = success. There is a threshold level of innovation required for a firm to be successful; beyond that, success depends on the discipline to implement.