#45 - Four Central Fallacies of AI Research (with Melanie Mitchell)

10/31/22 • 53 min

Increments

We were delighted to be joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute! We chat about our understanding of artificial intelligence, human intelligence, and whether it's reasonable to expect that we'll be able to build sophisticated human-like automated systems anytime soon.

Follow Melanie on Twitter @MelMitchell1 and check out her website: https://melaniemitchell.me/

We discuss:

  • AI hype through the ages
  • How do we know if machines understand?
  • Winograd schemas and the "WinoGrande" challenge (see the illustrative example after this list)
  • The importance of metaphor and analogies to intelligence
  • The four fallacies in AI research:
    • 1. Narrow intelligence is on a continuum with general intelligence
    • 2. Easy things are easy and hard things are hard
    • 3. The lure of wishful mnemonics
    • 4. Intelligence is all in the brain
  • Whether embodiment is necessary for true intelligence
  • Douglas Hofstadter's views on AI
  • Ray Kurzweil and the "singularity"
  • The fact that Moore's law doesn't hold for software
  • The difference between symbolic AI and machine learning
  • What analogies have to teach us about human cognition
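
For anyone unfamiliar with the terms above: a Winograd schema is a pair of sentences that differ by a single word, where that word flips which noun a pronoun refers to; WinoGrande is a large benchmark built from such pairs. The short Python sketch below is our own illustration (the classic councilmen/demonstrators example, not an item from the WinoGrande dataset), showing why a system that ignores meaning, such as a baseline that always picks the first candidate, can resolve at most one sentence of each pair correctly.

    # A toy Winograd schema pair (illustrative only, not from the WinoGrande dataset).
    schema_pair = [
        {
            "sentence": "The city councilmen refused the demonstrators a permit "
                        "because they feared violence.",
            "pronoun": "they",
            "candidates": ["the city councilmen", "the demonstrators"],
            "answer": "the city councilmen",
        },
        {
            "sentence": "The city councilmen refused the demonstrators a permit "
                        "because they advocated violence.",
            "pronoun": "they",
            "candidates": ["the city councilmen", "the demonstrators"],
            "answer": "the demonstrators",
        },
    ]

    def first_candidate_baseline(item):
        # A meaning-blind heuristic: always resolve the pronoun to the first candidate.
        return item["candidates"][0]

    for item in schema_pair:
        guess = first_candidate_baseline(item)
        print(f"guess: {guess} | answer: {item['answer']}")
    # Changing one word ("feared" -> "advocated") flips the correct referent, so any
    # heuristic that ignores what the sentence means gets at most one of the two right.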

Contact us

Eliezer was more scared than Douglas about AI, so he wrote a blog post about it. Who wrote the blog post, Eliezer or Douglas? Tell us over at [email protected].

Special Guest: Melanie Mitchell.

Support Increments

Previous Episode

#44 - Longtermism Revisited: What We Owe the Future

Like moths to a flame, we come back to longtermism once again. But it's not our fault. Will MacAskill published a new book, What We Owe the Future, and billions (trillions!) of lives are at stake if we don't review it. Sisyphus had his task and we have ours. We're doing it for the (great great great ... great) grandchildren.

We discuss:

  • Whether longtermism is actionable
  • Whether the book is a faithful representation of longtermism as practiced
  • Why humans are actually cool, despite what you might hear
  • Some cool ideas from the book including career advice and allowing vaccines on the free market
  • Ben's love of charter cities and whether he's a totalitarian at heart
  • The plausibility of "value lock-in"
  • The bizarro world of population ethics

References:
"Bait-and-switch" critique from a longtermist blogger: https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future

Quote: "For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin."

Contact us

How long is your termist? Tell us at [email protected]

Support Increments

Next Episode

#46 (Bonus) - Arguing about probability (with Nick Anyos)

We make a guest appearance on Nick Anyos' podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology.

You can find Nick's podcast on institutional design here, and his substack here.

We discuss:

  • The lack of feedback loops in longtermism
  • Whether quantifying your beliefs is helpful
  • Objective versus subjective knowledge
  • The difference between prediction and explanation
  • The difference between Bayesian epistemology and Bayesian statistics
  • Statistical modelling and when statistics is useful

Links

  • Philosophy and the practice of Bayesian statistics by Andrew Gelman and Cosma Shalizi
  • EA forum post showing all forecasts beyond a year out are uncalibrated.
  • Vaclav Smil quote where he predicts a pandemic by 2021 (see the quick arithmetic check after this list):

    The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.

    - Global Catastrophes and Trends, p. 46

  • Reference for Tetlock's superforecasters failing to predict the pandemic. "On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were)."
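
As a quick sanity check of the arithmetic in the Smil quote above, here is a minimal sketch that uses only the figures Smil states (a roughly 28-year mean recurrence interval, a 53-year maximum, and 1968 as the last pandemic year), not an independent estimate:

    # Reproduce the back-of-the-envelope window from the quote; the numbers are
    # Smil's stated figures, not an independent estimate.
    last_pandemic_year = 1968
    mean_interval_years = 28      # mean gap between the last six known pandemics
    longest_interval_years = 53   # longest observed gap

    earliest = last_pandemic_year + mean_interval_years    # 1996
    latest = last_pandemic_year + longest_interval_years   # 2021
    print(f"High-risk window: {earliest}-{latest}")         # -> High-risk window: 1996-2021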

Errata

  • At the beginning of the episode Vaden says he hasn't been interviewed on another podcast before. He forgot his appearance on The Declaration Podcast in 2019, which will be appearing as a bonus episode on our feed in the coming weeks.

Contact us

Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to [email protected].

Photo credit: James O’Brien for Quanta Magazine

Support Increments
