
Existential Hope Podcast: Roman Yampolskiy | The Case for Narrow AI

06/26/24 • 47 min

The Foresight Institute Podcast

Dr. Roman Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year National Science Foundation IGERT (Integrative Graduate Education and Research Traineeship) fellowship. His main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence, and games, and he is the author of over 100 publications, including multiple journal articles and books.


Session Summary

We discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a breakneck pace, the conversation highlights the pressing need to balance innovation with rigorous safety measures. Contrary to many other voices in the safety space, Yampolskiy argues for keeping AI systems narrow and task-oriented: “I'm arguing that it's impossible to indefinitely control superintelligent systems.” Nonetheless, he is optimistic about the future capabilities of narrow AI, from politics to longevity and health.


Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts


Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.


Hosted by Allison Duettmann and Beatrice Erkers


Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram


Explore every word spoken on this podcast through Fathom.fm.



Hosted on Acast. See acast.com/privacy for more information.


Previous Episode


Existential Hope Worldbuilding: 1st place | Cities of Orare

This episode features an interview with the 1st place winners of our 2045 Worldbuilding challenge!


Why Worldbuilding?

We consider worldbuilding an essential tool for creating inspiring visions of the future that can help drive real-world change. Worldbuilding helps us explore crucial 'what if' questions by constructing detailed scenarios that prompt us to ask: what actionable steps can we take now to realize these desirable outcomes?


Cities of Orare – our 1st place winners

Cities of Orare imagines a future where AI-powered prediction markets called Orare amplify collective intelligence, enhancing liberal democracy, economic distribution, and policy-making. Its adoption across Africa and globally has fostered decentralized governance, democratized decision-making, and spurred significant health and economic advancements.


Read more about the 2045 world of Cities of Orare: https://www.existentialhope.com/worlds/beyond-collective-intelligence-cities-of-orare

Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding



Next Episode


Existential Hope: The Flourishing Foundation at the Transformative AI Hackathon

The Flourishing Foundation

In February 2024, we partnered with the Future of Life Institute on a hackathon to design institutions that can guide and govern the development of AI. The winner of the hackathon was the Flourishing Foundation, which focuses on our relationship with AI and other emerging technologies. They challenge innovators to envision and build life-centered products, services, and systems; specifically, to enable TAI-enabled consumer technologies to promote human well-being by developing new norms, processes, and community-driven ecosystems.


At their core, they explore the question of "Can AI make us happier?"


Connect: https://www.flourishing.foundation/


Read about the hackathon: https://foresight.org/2024-xhope-hackathon/


