
#59 – Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable
06/17/19 • 103 min
It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, such activists shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.
The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.
In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism.
How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?
Sunstein — coauthor of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.
He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.
• Links to learn more, summary and full transcript.
• 80,000 Hours Annual Review 2018.
• How to donate to 80,000 Hours.
In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become before they revealed them or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions.
According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case.
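To see why these ingredients make change so abrupt, it helps to play with the kind of threshold model Sunstein draws on, originally due to sociologist Mark Granovetter. Here's a minimal Python sketch (our illustration, not code from the book): each person acts once enough others are already acting, and a society-wide cascade can hinge on a single person's threshold.

```python
def cascade_size(thresholds):
    """Granovetter-style cascade: a person joins once the number of
    people already acting meets their private threshold. Iterate to a
    fixed point and return how many people ended up acting."""
    acting = 0
    while True:
        joined = sum(1 for t in thresholds if t <= acting)
        if joined == acting:   # nobody new joined; the cascade is over
            return acting
        acting = joined

# Granovetter's classic example: 100 people with thresholds 0..99.
uniform = list(range(100))
print(cascade_size(uniform))   # 100 -- the instigator sets off everyone

# Raise one person's threshold from 1 to 2 and the cascade dies at birth.
tweaked = [0, 2] + list(range(2, 100))
print(cascade_size(tweaked))   # 1 -- only the instigator acts
```

The two societies are indistinguishable to any realistic survey, yet one has a revolution and the other doesn't. And because preference falsification means the thresholds can't be measured directly, the outcome looks unpredictable from the outside.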
In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss:
• How much people misrepresent their views in democratic countries.
• Whether the finding that groups with an existing view tend towards a more extreme position would survive the replication crisis.
• When it's justified to encourage your own group to polarise.
• Sunstein's difficult experiences as a pioneer of animal rights law.
• Whether activists can do better by spending half their resources on public opinion surveys.
• Whether people should be more or less outspoken about their true views.
• What might be the next social revolution to take off.
• How we can learn about social movements that failed and disappeared.
• How to find out what people really think.
Chapters:
• Rob’s intro (00:00:00)
• Cass's Harvard lecture on How Change Happens (00:02:59)
• Rob & Cass's conversation about the book (00:41:43)
The 80,000 Hours Podcast is produced by Keiran Harris.
Previous Episode

#58 – Pushmeet Kohli of DeepMind on designing robust & reliable AI systems and how to succeed in AI
When you're building a bridge, responsibility for making sure it won't fall over isn't handed over to a few 'bridge not falling down engineers'. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.
When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.
Far from being an overhead on the 'real' work, it’s an essential part of making AI systems work at all. We don’t want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.
Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term ‘AI safety research’ altogether.
• Want to be notified about high-impact opportunities to help ensure AI remains safe and beneficial? Tell us a bit about yourself and we’ll get in touch if an opportunity matches your background and interests.
• Links to learn more, summary and full transcript.
• And a few added thoughts on non-research roles.
With the goal of designing systems that are reliably consistent with desired specifications, DeepMind have recently published work on important technical challenges for the machine learning community.
For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an 'adversary' that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable.
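To make the 'adversary' idea concrete, here is a rough sketch of one standard way to hunt for worst-case inputs: projected gradient ascent on the model's loss. This is a generic illustration of the technique, not DeepMind's actual code; model, loss_fn and the hyperparameters are placeholders.

```python
import torch

def find_worst_case(model, x, y, loss_fn, eps=0.1, steps=20, step_size=0.01):
    """Search for a failure near input x: repeatedly step in the
    direction that increases the loss, then project back into an
    L-infinity ball of radius eps around the original input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step_size * grad.sign()        # move uphill on the loss
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # stay within the budget
    return x_adv
```

If the loss at the returned input stays low, that's (weak) evidence the model behaves within specification in that neighbourhood; if it spikes, a rare failure has been caught before deployment. DeepMind's published work goes further, with stronger search and exact formal verification rather than this heuristic alone.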
He's also looking into 'training specification-consistent models' and 'formal verification', while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards.
In today’s interview, we focus on the convergence between broader AI research and robustness, as well as:
• DeepMind’s work on the protein folding problem
• Parallels between ML problems and past challenges in software development and computer security
• How to analyse the thinking of a neural network
• Unique challenges faced by DeepMind’s technical AGI safety team
• How to communicate with a non-human intelligence
• The biggest misunderstandings about AI safety and reliability
• Whether there are actually a lot of disagreements within the field
• The difficulty of forecasting AI development
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
Next Episode

#60 - Phil Tetlock on why accurate forecasting matters for everything, and how you can do it better
Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case?
Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race.
Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day.
He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better.
Along with other psychologists, he identified that many ordinary people are attracted to a 'folk probability' that draws just three distinctions — 'impossible', 'possible' and 'certain' — and which leads to major systematic mistakes. But with the right mindset and training we can learn to discriminate reliably between probabilities as close together as 56% and 57%.
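Forecast accuracy at that resolution is measured with a proper scoring rule. Here's a minimal sketch of the Brier score, which Tetlock's tournaments use to grade forecasters (the forecasts and outcomes below are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and what
    actually happened (1 = the event occurred, 0 = it didn't).
    0.0 is a perfect score; always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A 'folk' forecaster collapses everything to impossible/possible/certain...
folk     = [0.0, 0.5, 0.5, 0.5, 1.0]
# ...while a trained forecaster discriminates more finely.
trained  = [0.05, 0.35, 0.60, 0.70, 0.95]
happened = [0, 0, 1, 1, 1]

print(brier_score(folk, happened))     # 0.15
print(brier_score(trained, happened))  # ~0.076
```

The finer-grained forecaster wins not by being bolder, but by putting numbers on shades of uncertainty the three-bucket forecaster can't express at all.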
• Links to learn more, summary and full transcript
• The calibration training app
• Sign up for the Civ-5 counterfactual forecasting tournament
• A review of the evidence on good forecasting practices
• Learn more about Effective Altruism Global
In the aftermath of Iraq and WMDs, the US intelligence community hired him to prevent the same from ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2015.
That was four years ago. In today's interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement.
We discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to distinguish your '70 percents' from your '80 percents'.)
We also put to him some tough methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take if, like Tetlock, they want to make a profession of improving the reasonableness of decision-making in the major institutions that shape the world.
We view Tetlock's work as so core to living well that we've brought him back for a second and longer appearance on the show — his first was back in episode 15.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.