Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

03/27/23 • 47 min

Dwarkesh Podcast

I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:

time to AGI

leaks and spies

what's after generative models

post-AGI futures

working with Microsoft and competing with Google

difficulty of aligning superhuman AI

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00) - Time to AGI

(05:57) - What’s after generative models?

(10:57) - Data, models, and research

(15:27) - Alignment

(20:53) - Post AGI Future

(26:56) - New ideas are overrated

(36:22) - Is progress inevitable?

(41:27) - Future Breakthroughs


Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Previous Episode

Nat Friedman - Reading Ancient Scrolls, Open Source, & AI

Next Episode

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
