
#43 - Artificial General Intelligence and the AI Safety debate
08/28/22 • 67 min
Some people think that advanced AI is going to kill everyone. Some people don't. Who to believe? Fortunately, Ben and Vaden are here to sort out the question once and for all. No need to think for yourselves after listening to this one, we've got you covered.
We discuss:
- How well does math fit reality? Is that surprising?
- Should artificial general intelligence (AGI) be considered "a person"?
- How could AI possibly "go rogue?"
- Can we know if current AI systems are being creative?
- Is misplaced AI fear hampering progress?
References:
- The Unreasonable Effectiveness of Mathematics
- Prohibition on autonomous weapons letter
- Google employee's conversation with a chatbot
- Gary Marcus on the Turing test
- Melanie Mitchell's essay
- Did MIRI give up? Their (half-sarcastic?) "Death with Dignity" strategy
- Kerry Vaughan on slowing down AGI development
Contact us
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Check us out on YouTube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link
Which prompt would you send to GPT-3 in order to end the world? Tell us before you're turned into a paperclip over at [email protected]
Previous Episode

#42 (C&R, Chap 12+13) - Language and the Body-Mind Problem
Ben and Vaden sit down to discuss what is possibly Popper's most confusing essay ever: Language and the Body-Mind Problem: A Restatement of Interactionism. Determinism, causality, language, bodies, minds, and Ferris Bueller. What's not to like! Except for the terrible writing, spanning the entire essay. And before we get to that, we revolutionize the peer-review system in less than 10 minutes.
We discuss:
- Problems with the current peer-review system and how to improve it
- The Mind-Body Problem
- How chaos theory relates to determinism
- The four functions of language
- Why you don't argue with thermometers
- Whether Popper thinks we can build AGI
- Why causality occurs at the level of ideas, not just of atoms
References:
- Link to the essay, which you should most definitely read for yourself.
- Ben's call to abolish peer-review
- Discrete Analysis Math Journal
- Pachinko
- Karl Bühler's theory of language
Quotes:
This, I think, solves the so-called problem of 'other minds'. If we talk to other people, and especially if we argue with them, then we assume (sometimes mistakenly) that they also argue: that they speak intentionally about things, seriously wishing to solve a problem, and not merely behaving as if they were doing so. It has often been seen that language is a social affair and that solipsism, and doubts about the existence of other minds, become self-contradictory if formulated in a language. We can put this now more clearly. In arguing with other people (a thing which we have learnt from other people), for example about other minds, we cannot but attribute to them intentions, and this means, mental states. We do not argue with a thermometer.
- C&R, Chap 13
Once we understand the causal behaviour of the machine, we realize that its behaviour is purely expressive or symptomatic. For amusement we may continue to ask the machine questions, but we shall not seriously argue with it--unless we believe that it transmits the arguments, both from a person and back to a person.
- C&R, Chap 13
If the behaviour of such a machine becomes very much like that of a man, then we may mistakenly believe that the machine describes and argues; just as a man who does not know the working of a phonograph or radio may mistakenly think that it describes and argues. Yet an analysis of its mechanism teaches us that nothing of this kind happens. The radio does not argue, although it expresses and signals.
- C&R, Chap 13
It is true that the presence of Mike in my environment may be one of the physical 'causes' of my saying, 'Here is Mike'. But if I say, 'Should this be your argument, then it is contradictory', because I have grasped or realized that it is so, then there was no physical 'cause' analogous to Mike; I do not need to hear or see your words in order to realize that a certain theory (it does not matter whose) is contradictory. The analogy is not to Mike, but rather to my realization that Mike is here.
- C&R, Chap 13
The fear of obscurantism (or of being judged an obscurantist) has prevented most anti-obscurantists from saying such things as these. But this fear has produced, in the end, only obscurantism of another kind.
- C&R, Chap 13
When's the last time you argued with your thermometer? Tell us over at [email protected]
Image Credit: http://humanities.exeter.ac.uk/modernlanguages/research/groups/linguistics/
Next Episode

#44 - Longtermism Revisited: What We Owe the Future
Like moths to a flame, we come back to longtermism once again. But it's not our fault. Will MacAskill published a new book, What We Owe the Future, and billions (trillions!) of lives are at stake if we don't review it. Sisyphus had his task and we have ours. We're doing it for the (great great great ... great) grandchildren.
We discuss:
- Whether longtermism is actionable
- Whether the book is a faithful representation of longtermism as practiced
- Why humans are actually cool, despite what you might hear
- Some cool ideas from the book, including career advice and allowing vaccines on the free market
- Ben's love of charter cities and whether he's a totalitarian at heart
- The plausibility of "value lock-in"
- The bizarro world of population ethics
References:
"Bait-and-switch" critique from a longtermist blogger: https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future
Quote: "For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin."
Contact us
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our discord server! DM us on twitter or send us an email to get a supersecret link
How long is your termist? Tell us at [email protected]