
Tobias Baumann of the Center for Reducing Suffering on global priorities research and effective strategies to reduce suffering
07/28/21 • 76 min
“We think that the most important thing right now is capacity building. We’re not so much focused on having impact now or in the next year, we’re thinking about the long term and the very big picture... Now, what exactly does capacity building mean? It can simply mean getting more people involved... I would frame it more in terms of building a healthy community that’s stable in the long term... And one aspect that’s just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering. You could call it ‘wisdom building’... And CRS aims to contribute to [both] through our research... Some people just naturally tend to be more inclined to explore a lot of different topics... Others have maybe more of a tendency to dive into something more specific and dig up a lot of sources and go into detail and write a comprehensive report and I think both these can be very valuable... What matters is just that overall your work is contributing to progress on... the most important questions of our time.”
- Tobias Baumann
There are many different ways that we can reduce suffering or have other forms of positive impact. But how can we increase our confidence about which actions are most cost-effective? And what can people do now that seems promising?
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.
Topics discussed in the episode:
- Who is currently working to reduce risks of astronomical suffering in the long-term future (“s-risks”) and what are they doing? (2:50)
- What are “information hazards,” how concerned should we be about them, and how can we reduce them? (12:21)
- What is the Center for Reducing Suffering’s theory of change and what are its research plans? (17:52)
- What are the main bottlenecks to further progress in the field of work focused on reducing s-risks? (29:46)
- Does it make more sense to work directly on reducing specific s-risks or on broad risk factors that affect many different risks? (34:27)
- Which particular types of global priorities research seem most useful? (38:15)
- What are some of the implications of taking a longtermist approach for animal advocacy? (45:31)
- If we decide that focusing directly on the interests of artificial sentient beings is a high priority, what are the most important next steps in research and advocacy? (1:00:04)
- What are the most promising career paths for reducing s-risks? (1:09:25)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
Previous Episode

Tobias Baumann of the Center for Reducing Suffering on moral circle expansion, cause prioritization, and reducing risks of astronomical suffering in the long-term future
“If some beings are excluded from moral consideration then the results are usually quite bad, as evidenced by many forms of both current and historical suffering... I would definitely say that those that don’t have any sort of political representation or power are at risk. That’s true for animals right now; it might be true for artificially sentient beings in the future... And yeah, I think that is a plausible priority. Another candidate would be to work on other broad factors to improve the future such as by trying to fix politics, which is obviously a very, very ambitious goal... [Another candidate would be] trying to shape transformative AI more directly. We’ve talked about the uncertainty there is regarding the development of artificial intelligence, but at least there’s a certain chance that people are right about this being a very crucial technology; and if so, shaping it in the right way is very important obviously.”
- Tobias Baumann
Expanding humanity’s moral circle to include farmed animals and other sentient beings is a promising strategy for reducing the risk of astronomical suffering in the long-term future. But are there other causes that we could focus on that might be better? And should reducing future suffering actually be our goal?
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.
Topics discussed in the episode:
- Why moral circle expansion is a plausible priority for those of us focused on doing good (2:17)
- Tobias’ view on why we should accept longtermism — the idea that the value of our actions is determined primarily by their impacts on the long-term future (5:50)
- Are we living at the most important time in history? (14:15)
- When, if ever, will transformative AI arrive? (20:35)
- Assuming longtermism, should we prioritize focusing on risks of astronomical suffering in the long-term future (s-risks) or on maximizing the likelihood of positive outcomes? (27:00)
- What sorts of future beings might be excluded from humanity’s moral circle in the future, and why might this happen? (37:45)
- What are the main reasons to believe that moral circle expansion might not be a very promising way to have positive impacts on the long-term future? (41:40)
- Should we focus on other forms of values spreading that might be broadly positive, rather than expanding humanity’s moral circle? (48:55)
- Beyond values spreading, which other causes should people focused on reducing s-risks consider prioritizing? (50:25)
- Should we expend resources on moral circle expansion and other efforts to reduce s-risk now or just invest our money and resources in order to benefit from compound interest? (1:00:02)
- If we decide to focus on moral circle expansion, should we focus on the current frontiers of the moral circle, such as farmed animals, or focus more directly on groups of future beings we are concerned about? (1:03:06)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
Next Episode

Thomas Metzinger on a moratorium on artificial sentience development
“And for an applied ethics perspective, I think the most important thing is: if we want to minimize suffering in the world, and if we want to minimize animal suffering, we should always err on the side of caution, we should always be on the safe side.”
- Thomas Metzinger
Should we advocate for a moratorium on the development of artificial sentience? What might that look like, and what would be the challenges?
Thomas Metzinger was a full professor of theoretical philosophy at the Johannes Gutenberg University Mainz until 2022 and is now a professor emeritus. He was president of the German Cognitive Science Society from 2005 to 2007 and president of the Association for the Scientific Study of Consciousness from 2009 to 2011, and has been an adjunct fellow at the Frankfurt Institute for Advanced Studies since 2011. He is also a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation, and a member of the advisory board of the Giordano Bruno Foundation. In 2009, he published a popular book, The Ego Tunnel: The Science of the Mind and the Myth of the Self, which addresses a wider audience and discusses the ethical, cultural, and social consequences of consciousness research. From 2018 to 2020, Metzinger was a member of the European Commission's High-Level Expert Group on Artificial Intelligence.
Topics discussed in the episode:
- 0:00 Introduction
- 2:12 Defining consciousness and sentience
- 9:55 What features might a sentient artificial intelligence have?
- 17:11 Moratorium on artificial sentience development
- 37:46 Case for a moratorium
- 49:30 What would a moratorium look like?
- 53:07 Social hallucination problem
- 55:49 Incentives of politicians
- 1:01:51 Incentives of tech companies
- 1:07:18 Local vs global moratoriums
- 1:11:52 Repealing the moratorium
- 1:16:01 Information hazards
- 1:22:21 Trends in thinking on artificial sentience over time
- 1:39:38 What are the open problems in this field, and how might someone work on them with their career?
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
The Sentience Institute Podcast - Tobias Baumann of the Center for Reducing Suffering on global priorities research and effective strategies to reduce suffering
Transcript
Welcome to the Sentience Institute podcast. We interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. I'm Jamie Harris, researcher at Sentience Institute and Animal Advocacy Careers. Welcome to our 17th episode of the podcast. This is the second episode with Tobias Baumann of the Center for Reducing Suffering. In the first episode, I spoke to Tobias mostly about why he thinks we should