Good Judgment Project coleader Philip Tetlock is a Canadian American born in 1954. He is the Annenberg Professor at the University of Pennsylvania with appointments in Wharton, psychology, and political science. His research interests include the assessment of good judgment and the criteria used by social scientists to evaluate judgment and define error and bias.
Before embarking on the Good Judgment Project with his research partner Barbara Mellers and writing Superforecasting, Tetlock made his name with the 2005 publication Expert Political Judgment: How Good Is It? How Can We Know? This work drew on research Tetlock conducted between 1980 and 2003, in which he staged forecasting tournaments for 284 experts, including academics and government officials of diverse political persuasions. The results, which showed that these specialists’ predictions were no more reliably accurate than those of a dart-throwing chimp and that the most prominent experts performed worse than their lower-status colleagues, indicated the grand scale of mediocrity in American political forecasting. The idea that non-experts can outperform experts at forecasting is also explored in detail in Superforecasting through the theme of Hedgehogs and Foxes.
These findings paved the way for the founding of the Good Judgment Project and ignited Tetlock’s fascination with superforecasters—the non-experts whose modes of thinking are conducive to accurate forecasts. Although Superforecasting is co-authored with Dan Gardner, Tetlock often emerges as the “I” voice in the text, personally talking about his research and conclusions over the years. Through his vivid descriptions of superforecasters, his personal investment in the project and his passion for good judgment are manifest. Since the book’s 2015 publication, the Good Judgment Project has continued to grow and recruit new superforecasters, as Tetlock makes creating a process that produces accurate predictions his life’s mission.
Canadian Dan Gardner is a New York Times bestselling author and an honorary senior fellow at the University of Ottawa. His areas of expertise include decision-making, forecasting, and risk, and he lectures around the world on these topics in addition to advising officials, including the prime minister of Canada.
Prior to working on Superforecasting with Philip Tetlock, Gardner wrote the book Future Babble: Why Predictions Fail—And Why We Believe Them Anyway (2010). In this work, he explores why the public is attracted to those who forecast the future confidently and why those who make a sequence of outrageously wrong forecasts are able to get away with it. This dovetails with the parts of Superforecasting that explore the reasons behind mediocrity in contemporary forecasting, including the media’s taste for those who deliver clear (if inaccurate) narratives and the vested interests of the establishment in keeping forecasts flattering and compelling rather than precise.
The Good Judgment Project is a multiyear forecasting study that was co-launched by Philip Tetlock and Barbara Mellers in summer 2011, with initial funding from the Intelligence Advanced Research Projects Agency, an entity within the American intelligence community. The project enlisted the participation of 2,800 volunteers who were tasked with estimating the likelihood of specific global events. These volunteers, both individually and in teams, employed data analysis, critical thinking, and data aggregation techniques to formulate their predictions within specified deadlines. Tetlock observed that certain volunteers consistently outperformed their peers, despite lacking expertise in the subject matter. He referred to these exceptionally skilled individuals as “superforecasters” and embarked on a quest to identify the factors that set apart their prediction methodologies; Superforecasting relays part of that quest.
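The "data aggregation" mentioned above can be illustrated with a minimal sketch. The function, the tuning constant, and the forecast values below are hypothetical; the Good Judgment Project's actual algorithms were more sophisticated, though they included a similar "extremizing" step that pushes a pooled forecast toward certainty.

```python
# Minimal sketch (not the GJP's actual algorithm) of aggregating
# individual probability forecasts for a single yes/no question.

def aggregate(probs, extremize_a=2.5):
    """Average the volunteers' forecasts, then push the mean toward
    0 or 1. The exponent is a hypothetical tuning constant; the GJP
    used a comparable 'extremizing' step to sharpen pooled forecasts."""
    mean = sum(probs) / len(probs)
    # p^a / (p^a + (1-p)^a) leaves 0.5 at 0.5 but moves any
    # lopsided average further toward certainty.
    return mean**extremize_a / (mean**extremize_a + (1 - mean) ** extremize_a)

estimates = [0.6, 0.7, 0.65, 0.8]  # four hypothetical volunteers
print(round(aggregate(estimates), 3))  # → 0.878
```

The design intuition, as the book describes it, is that pooling washes out individual errors while extremizing restores the confidence a fully informed single forecaster would have shown.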
Three-time Pulitzer Prize-winning political journalist and New York Times op-ed columnist Tom Friedman (1953) is mentioned in Superforecasting as an intellectual contrast to the typical superforecaster. The authors write that “when big events happen—markets crash, wars loom, leaders tremble—we turn to the experts, those in the know” like Friedman (1). They thus show how intelligent people trust the prognostications of eloquent experts rather than relying on their own judgment. The public’s and politicians’ preference for Friedman’s forecasts is evident in his invitations to advise political leaders at summits such as the World Economic Forum in Davos, Switzerland. However, the authors note that while the accuracy of superforecasters’ predictions is continually put to the test, this is not the case with Friedman, whose forecasts have never been quantitatively scored. Instead, people assess his predictions in sweeping general terms, such as “he nailed the Arab Spring” or “he screwed up on the 2003 invasion of Iraq” (3). The authors hold up Friedman’s unqualified prominence as a forecaster as part of the reason for forecasting’s mediocrity.
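The quantitative testing that superforecasters undergo rests on Brier scores, the accuracy measure Tetlock's tournaments use. Below is a minimal sketch; the forecast values are invented for illustration.

```python
def brier_score(forecasts, outcomes):
    """Original Brier score for yes/no questions, summed over both
    possible outcomes (the variant Tetlock's tournaments use):
    0.0 is a perfect record, 2.0 is perfect wrongness, and always
    hedging at 50/50 scores 0.5 -- dart-throwing-chimp territory."""
    return sum(2 * (f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Four hypothetical questions: 1 = the event happened, 0 = it did not.
outcomes = [1, 0, 1, 1]
print(round(brier_score([0.9, 0.2, 0.8, 0.7], outcomes), 3))  # → 0.09
print(brier_score([0.5, 0.5, 0.5, 0.5], outcomes))            # → 0.5
```

Scoring like this is exactly what Friedman's forecasts have never been subjected to, which is the authors' complaint: without numbers, "he nailed it" and "he blew it" are the only verdicts available.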
Instead of touting him as a forecaster, the authors propose that Friedman would make a better superquestioner, with the capacity to pose questions that cut through the most mystifying situations. He would thus be a perfect partner for a superforecaster, though he should not take the latter’s place.
Archie Cochrane (1909-1988) was a pioneer of evidence-based medical testing, which revolutionized medical practice by standardizing treatment. Prior to Cochrane’s intervention, physicians preferred to consult their individual expertise rather than data from randomized trials. Cochrane despised what he termed “the God complex” in physicians who claimed that they instinctively knew best and therefore had no need of science (31). He set out to prove them wrong by staging randomized trials in cardiac wards, wanting to observe whether patients would recover better at home or in the hospital; the physicians were convinced that hospital treatment would be better, but the final results suggested that home care was superior. Cochrane also showed that randomized trials add real value to medical care. Throughout Superforecasting, randomized trials are shown to be a valuable model for forecasting. The Good Judgment Project’s progression toward this model is discussed in the theme of Forecasting: Between Science and Art.
Still, Cochrane also fell for a single expert’s prediction, despite knowing experts are fallible, when an esteemed surgeon erroneously diagnosed him with cancer and he did not question it. The authors use this as an example of how anyone can fall into the trap of false forecasts if they do not question their own and others’ judgments. This proves the authors’ point that one’s superforecaster status is by no means fixed, but a result of continual practice.
Superforecasting is populated with anecdotes of the Good Judgment Project’s superforecasters. These figures share the following dominant traits: a mix of curiosity, intelligence, and humility and a growth mindset that helps them overcome adversity and mistakes.
Throughout the book, the authors are keen to emphasize the contrast between superforecasters’ unthreatening outward demeanor and their potential threat to the establishment. This contrast is nowhere more obvious than in the description of Doug Lorch, the California retiree who, with “his gray beard, thinning hair, and glasses […] doesn’t look like a threat to anyone” (91). Still, the fact that Doug, who is not an intelligence expert, beat the analysts of the intelligence community (IC) could be threatening to the status quo. First, a relative layman outperforming highly trained intelligence specialists puts the American intelligence apparatus on shaky foundations and hints at risks to national security. Second, it challenges society’s faith in single-subject experts. The authors show that superforecasters are far more like foxes than hedgehogs, as they rely on a mastery of several skills rather than concentrating on a single one.
While the authors explore typical superforecaster traits—such as above-average intelligence, numeracy, and the willingness to receive new information and apply it to a problem—they maintain that nothing is as useful to superforecasting as a commitment to perpetual self-improvement. One example is superforecaster Elizabeth Sloane, a brain cancer survivor who endured severe cerebral damage and joined the Good Judgment Project to “re-grow her synapses” (187). Her story demonstrates the importance of a growth mindset in overcoming adversity, and this same attitude helps superforecasters as they learn from errors to hone their approach.
Finally, whereas the media forecasting landscape is dominated by big names and egos, the GJP’s superforecasters prioritize accuracy, as it is only through aggregation and self-subordinating teamwork that accuracy can be truly maximized.
Lebanese-born Nassim Taleb (1960) became famous for a risk-conscious investment strategy predicated on the expectation of “black swan” events, entirely unpredictable disasters that can cause investments to fail. Taleb, who saw the Lebanese Civil War reverse the financial fortunes of his wealthy family, judged that the future was entirely unpredictable and that humans should brace themselves (and their investments) accordingly. He published his beliefs in The Black Swan: The Impact of the Highly Improbable (2007).
Although Tetlock and Taleb have collaborated on projects such as the 2013 paper “On the Difference Between Binary Prediction and True Exposure, With Implications for Forecasting Tournaments and Prediction Markets,” the two men differ in their views on forecasting. Taleb argues that certain crucial events are unpredictable and that this unpredictability renders forecasting of limited use. Tetlock, however, draws attention to the historical recurrence of such black swan events and the possibility of predicting their precursory factors. He adds that with events such as the 1789 storming of the Bastille, which led to the French Revolution, it is difficult to fully demarcate the principal catalyst or conclusively determine which aspects were predictable and which completely random. Nevertheless, the authors share Taleb’s healthy respect for the unpredictability of the future and agree that many erroneous forecasts are due to people overestimating their forecasting ability.
Nobel-prize-winning Israeli American psychologist Daniel Kahneman (1934) coined the concepts of System 1 and System 2 thinking that are important for Tetlock and Gardner’s thoughts on why instinctive assumptions can derail predictions about complex matters. Kahneman has shown how the spontaneous judgments that arise from System 1 thinking hold some evolutionary benefit; however, such unconscious thought is a liability when it comes to predicting the outcome of an event with many actors, such as whether the late Palestinian leader Yasser Arafat was poisoned with polonium.
While Kahneman respects the Good Judgment Project and cites its work in his book Noise: A Flaw in Human Judgment (2021), he thinks that “superforecasters are always just a System 2 slipup away from a blown forecast and a nasty tumble down the rankings” (233). Tetlock shares Kahneman’s view of the perennial fallibility of human judgment, but he is more optimistic that “smart, dedicated people can inoculate themselves to some degree against certain cognitive illusions” (233). The difference in the men’s opinions is important because if Tetlock is right, organizations will stand to gain more from relying upon actors who are trained to resist biases and predict the future. Conversely, if Kahneman is right, superforecasters’ predictions are too unreliable to be consistently useful.
John Maynard Keynes (1883-1946) was a British economist famous for his belief that government intervention could stabilize the economy and prevent financial crises. He was a key detractor of laissez-faire economics, which holds that the market should be left to determine its own fate.
Keynes displayed the adaptability, curiosity, and open-mindedness that characterize superforecasters. He was famously (though probably falsely) reported to have said, “When the facts change, I change my mind” (154). While this attitude made Keynes an extraordinary investor who amassed wealth even while the British economy was failing, it is inimical to the hedgehoglike pundits who thrive in today’s media. Nevertheless, Keynes’s fabled temperament is a model for superforecasters, who must respond to changing events that take their cases in unpredictable directions. Superforecasters can also take inspiration from Keynes’s willingness to admit mistakes, learn from them quickly, and adapt his strategy.