
I picked up Steven Johnson’s 2018 book Farsighted in a remainder shop a few weeks ago, which may say something about levels of interest in the idea, or about the short attention span of the publishing industry. Johnson’s known for his interest in the intersection of science and ideas, with books such as The Ghost Map, The Invention of Air and Where Good Ideas Come From.
Farsighted is a book about the science of long-term decision making. Several stories recur through it. One is the story of the successful American raid that killed Osama bin Laden in 2011. Another is about George Washington’s almost completely unsuccessful attempt to defend New York during the Revolutionary War. A third–more of a theme–is a series of stories about bodies of water that were filled in or recovered.
Collect Pond
One of these–Collect Pond in Manhattan, long gone–is about what happens when we get such decisions wrong. By the late 18th century, the water in the pond had been poisoned by the tanneries that had set up around it, and it had become a dumping ground. A proposal to turn it into a public park, to be financed by investment in the buildings around its edge, failed because property financiers weren’t interested enough. Instead, the springs were covered over and housing built on top.
It didn’t end there: the buildings started to sink as the biomass from the pond started to decompose, foul smells emerged from the earth, and the houses flooded easily. The area quickly degenerated into a slum—Five Points, which became notorious in the nineteenth century. Clearly this was a decision with a host of unintended consequences.
As Johnson notes (p.21), “Many hard choices contain interior decisions that have to be adjudicated separately… To make the right choice, you have to figure out how to structure the decision properly, which is itself an important skill.”
Farsighted decisions are different
Farsighted decisions share a number of characteristics:
They involve multiple interacting variables; they demand thinking that covers a full spectrum of different experiences and scales; they force us to predict the future with varying levels of certainty. They often feature conflicting objectives, or potential useful objectives that are not visible at first glance.
(p.24)
In other words, they’re complex. He breaks this complexity out into eight factors that make good farsighted decisions challenging:
- Complex decisions involve multiple variables
- Complex decisions involve ‘full spectrum’ analysis (they contain variables from completely different frames of reference)
- Complex decisions force us to predict the future (!)
- Complex decisions involve varied levels of uncertainty
- Complex decisions often involve conflicting objectives
- Complex decisions harbour undiscovered options
- Complex decisions are prone to “System 1” failings (see below)
- Complex decisions are vulnerable to failures of collective intelligence.
The prediction problem
Futurists, obviously, would demur at the prospect of predicting the future. One of the defining characteristics of complex systems is that they are inherently unpredictable, and prediction (or forecasting) and calculations of likelihood are both impractical and epistemologically challenging.
As it happens, Johnson knows this. He quotes Herbert Simon as saying, in his Nobel Economics Prize lecture (pdf), that the classical economics model “calls for complete knowledge of, or ability to compute, the consequences that will follow on each of the alternatives… It calls for the ability to compare consequences, no matter how diverse and heterogeneous, in terms of some consistent measure of utility.” (p.23)
And a few pages later, Tolstoy’s character Prince Andrei, in War and Peace: “What theory or science is possible where the conditions and circumstances are unknown, and the active forces cannot be ascertained?” (p.32)
Wrapped in jargon
But let’s step back a bit. This is a book about System 2 decisions (in Kahneman and Tversky’s framework), not System 1. Johnson is not interested in our System 1 heuristics and shortcuts: he’s interested in System 2 situations, where we have to wrestle with complexity, ambiguity, and ignorance to get to a decision. In passing, he observes (p. 14) that most of the experiments that populate the research on System 1/System 2 thinking are not about deciding in the face of complexity. Instead they’re puzzles designed to identify sources of bias.
And one of the frustrating things about the book is that although he says it has taken him a decade to finish it, he doesn’t seem to have thought quite hard enough about it. By this, I mean that there are repeated places where important concepts are still wrapped in jargon.
The ‘full spectrum’ thing
So, for example, we have repeated discussions of “full spectrum” analysis, which seems to mean that you need to look at the whole system, not just part of it (the latter, in contrast, is described as “narrowband analysis”). The metaphor comes from the sound mixing desk, and from listening to the full spectrum of sound (p. 24), but I’m not sure it helps. What it actually means is that “you have to think about a decision from multiple perspectives”, which isn’t exactly the same thing as hearing the sound of the whole orchestra.
Or later: there’s an important discussion of the value of having different types of thinkers in the decision process—and, even more importantly, of not having those who think differently co-opted by the group. Cass Sunstein and Reid Hastie have argued that groups tend to focus on shared information when they get together.
Groups, thinking
People who are “cognitively central” to the group have “a disproportionate influence”; people who are “cognitively peripheral end up having little influence and participate less”. (This can be mitigated by good facilitation and process design, of course.)
But however the process is designed, the point is to make sure that the “unshared information” (described as “hidden profiles”, from work by Garold Stasser and William Titus) is shared and not lost (pp. 51-52). The presence of difference in a group itself makes a difference: “diversity trumps ability” (p. 53). Again, describing this as being about “hidden profiles” only illuminates part of the issue.
Anomalies
Some of the discussion of decision making here is among the most valuable material in the book. Johnson revisits the well-known story from Gary Klein of the fire commander who pulled his team from a building seconds before the floor collapsed. Klein writes that “The whole pattern did not fit right. His expectations were violated, and he realized that he did not quite know what was going on” (p. 57).
Malcolm Gladwell uses this story in his book Blink as evidence of the value of intuition: “the fireman… instantly found a pattern in the chaos.” Johnson suggests, I think rightly, that it’s actually a story about how years of experience help you to detect anomalies:
To me, the parable of the basement fire teaches us how important it is to be aware of our blind spots, to recognize the elements of a situation that we don’t understand. The commander’s many years of experience fighting fires didn’t prime him to perceive the hidden truth of the basement fire; it simply allowed him to recognize that he was missing something.
(p. 59)
As Johnson puts it a page or so later, adapting Donald Rumsfeld, there are three principal types of uncertainty: “knowable unknowns, inaccessible unknowns, and unknowable unknowns.” Understanding the differences between these is critical in building a decision map. In some places, you might as well just write in, “Here might be monsters.”
Broadening the options
One of the reasons that decisions fail is that people home in on particular options too quickly. Johnson describes a trick that helps here: taking the existing options off the table and asking participants to come up with other solutions instead. One reason it works: the management professor Paul Nutt found that decisions based on a single option were later judged successful only half the time, whereas decisions made between at least two alternatives were judged successful two-thirds of the time. This is also a reminder that decision-making is an inexact science.
And, of course, innovation in terms of options often comes from the edge. The High Line in New York, derelict for years, had not been demolished only because of a long-running argument about who was going to pay for it. The suggestion that it could become a raised linear park came out of a community meeting, and was initially dismissed as fanciful by the Giuliani administration in the city.
The super-forecaster problem
Although the section on forecasting sits uncomfortably with the rest of the book, for the obvious reasons, there’s a note of caution, in passing, about the performance of Philip Tetlock’s “super-forecasters”. The idea of the super-forecasters gets an excitable press these days, in places like the current Downing Street administration, which probably suggests that its enthusiasts don’t quite understand the literature.
In Tetlock’s research, the average forecaster—usually an expert with a narrow focus—performed worse than chance. The super-forecasters performed 20% better, “which meant they only slightly outperformed chance” (p.86). Instead, Johnson discusses the dangers of extrapolation and proposes the use of a range of tools, including scenarios, red teaming (pdf) and “pre-mortems”, to challenge expectations about the future.
Some gaps
Of course, this is territory where futurists also play, sometimes quite effectively. One of the gaps in the book is that he could usefully have spent a little more time with a few more futurists. He references Peter Schwartz’s scenario work with Smith and Hawken (described in The Art of the Long View). He’s also been pointed towards Pierre Wack, possibly by Schwartz, who gets a namecheck in the credits.
But given the time he spent writing and researching the book, Johnson might have been able to stumble across a bit more of the relevant futures literature. He’d certainly have realised that the word “prediction” is unhelpful in describing possible future states in complex systems.
And there is work in the complexity literature—Andy Stirling’s model of the limits of risk, for example, or Dave Snowden’s Cynefin approach—that seems relevant to thinking about long-term decision-making but doesn’t appear here. It’s almost as if he’s blind to thinking that has emerged from outside the United States.
The limits of data
He might also have been a little more sceptical about the world of big data, which makes up the spine of the weakest chapter here, on “The Global Choice.” At least on my reading, he parks most of the observations he has made earlier about the difficulty of complex long-term decision-making, and gets sidetracked by a discussion of the role of AI and super-intelligent machines in future decision making. It is itself a classic example of “narrowband” thinking.
My criticisms here shouldn’t be taken as fatal. The discussion of the role of literature in helping with empathy is intriguing: Johnson is a big fan of George Eliot’s Middlemarch, and also delves, relevantly, into Eliot’s biography. There are good examples, and the authors he cites are interesting, even if there seem to be gaps. In summary: this is a well-written book with some good stories in it, which will be of interest to anyone who wants to understand how we make complex long-term decisions.