
Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)
06/19/24 • 80 min
We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI — when certain capabilities will arise, what AI will look like, and how it will all go for humanity.
We talk to Katja about:
* How AI Impacts' latest rigorous survey of leading AI researchers shows they've dramatically shortened their timelines for when AI will successfully tackle all human tasks & occupations
* The survey's methodology and why we can be confident in its results (a rough sketch of this style of forecast aggregation follows this list)
* Responses to the survey
* Katja's journey into the field of AI forecasting
* Katja's thoughts about the future of AI, given her long tenure studying AI futures and its impacts
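For readers curious what aggregating expert timeline forecasts can look like in practice, here is a minimal sketch in Python. It is loosely modelled on the gamma-distribution fitting that AI Impacts describes for its expert surveys, but the respondent data, function names, and fitting details below are illustrative assumptions, not the survey's actual procedure or code:

```python
# A rough, illustrative sketch of aggregating expert timeline forecasts.
# Loosely modelled on the gamma-fitting approach AI Impacts describes for
# its surveys; all respondent data and fitting details here are made up.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

# Hypothetical responses: (years from now, P(milestone reached by then)).
respondents = [
    [(10, 0.10), (25, 0.50), (50, 0.90)],
    [(10, 0.25), (25, 0.60), (50, 0.95)],
    [(10, 0.05), (25, 0.30), (50, 0.70)],
]

def gamma_cdf(t, shape, scale):
    # CDF of a gamma distribution over "years until the milestone".
    return gamma.cdf(t, a=shape, scale=scale)

years = np.linspace(1, 200, 2000)
cdfs = []
for points in respondents:
    t, p = (np.array(v) for v in zip(*points))
    # Fit one distribution per respondent to their probability judgments.
    (shape, scale), _ = curve_fit(gamma_cdf, t, p, p0=[2.0, 15.0],
                                  bounds=(1e-6, np.inf))
    cdfs.append(gamma_cdf(years, shape, scale))

# Average the fitted CDFs and read off the aggregate median forecast.
mean_cdf = np.mean(cdfs, axis=0)
median_year = years[np.searchsorted(mean_cdf, 0.5)]
print(f"Aggregate median: ~{median_year:.0f} years from now")
```

Shortened timelines in the survey correspond to this aggregate CDF crossing 50% at an earlier year than in previous survey rounds.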
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- Follow Katja --
* Website: https://katjagrace.com/
* Twitter: https://x.com/katjagrace
-- Further resources --
* The 2023 survey of AI researchers' views: https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai
* AI Impacts: https://aiimpacts.org/
* AI Impacts' Substack: https://blog.aiimpacts.org/
* Joe Carlsmith on Power Seeking AI: https://arxiv.org/abs/2206.13353
* Abbreviated version: https://joecarlsmith.com/2023/03/22/existential-risk-from-power-seeking-ai-shorter-version
* The Vulnerable World Hypothesis by Nick Bostrom: https://nickbostrom.com/papers/vulnerable.pdf
Recorded Feb 22, 2024
Previous Episode

Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host)
We speak with Rob Miles. Rob is the host of the “Robert Miles AI Safety” channel on YouTube, the single most popular AI alignment video series out there — he has 145,000 subscribers and his top video has ~600,000 views. His videos go much deeper than most educational resources on alignment, covering important technical topics like the orthogonality thesis, inner misalignment, and instrumental convergence.
Through his work, Robert has educated thousands on AI safety, including many now working on advocacy, policy, and technical research. His work has been invaluable for teaching and inspiring the next generation of AI safety experts and deepening public support for the cause.
Prior to his AI safety education work, Robert studied Computer Science at the University of Nottingham.
We talk to Rob about:
* What got him into AI safety
* How he started making educational videos for AI safety
* What he's working on now
* His top advice for people who also want to do education & advocacy work, really in any field, but especially for AI safety
* How he thinks AI safety is currently going as a field of work
* What he wishes more people were working on within AI safety
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Rob --
* Rob Miles AI Safety channel - https://www.youtube.com/@RobertMilesAI
* Twitter - https://twitter.com/robertskmiles
-- Further resources --
* Channel where Rob first started making videos: https://www.youtube.com/@Computerphile
* Podcast ep w/ Eliezer Yudkowsky, who first convinced Rob to take AI safety seriously through reading Yudkowsky's writings: https://lexfridman.com/eliezer-yudkowsky/
Recorded Nov 21, 2023
Next Episode

Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)
We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield-Menell. Formerly, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI (CHAI) at Berkeley. His work focuses on understanding the internal workings of AI models (better known as “interpretability”), making them robust to various kinds of adversarial attacks, and calling out the current technical and policy gaps when it comes to making sure our future with AI goes well. He’s particularly interested in automated ways of finding & fixing flaws in how deep neural nets handle human-interpretable concepts.
We talk to Stephen about:
* His technical AI safety work in the areas of:
  * Interpretability
  * Latent attacks and adversarial robustness (a minimal sketch of the LAT idea follows this list)
  * Model unlearning
  * The limitations of RLHF
* Cas' journey to becoming an AI safety researcher
* How he thinks the AI safety field is going and whether we're on track for a positive future with AI
* Where he sees the biggest risks coming with AI
* Gaps in the AI safety field that people should work on
* Advice for early career researchers
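Since latent adversarial training (LAT) comes up repeatedly in this episode and in the links below, here is a minimal PyTorch sketch of the core idea from the linked papers. The model, data, and hyperparameters are all toy assumptions; real implementations (e.g., for LLMs) are considerably more involved:

```python
# A minimal, illustrative sketch of latent adversarial training (LAT):
# instead of perturbing the *input*, adversarially perturb a hidden-layer
# activation, then train the model to behave correctly despite it.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # layers before the attack point
head = nn.Linear(64, 2)                                # layers after the attack point
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

def lat_step(x, y, eps=0.5, pgd_steps=5, pgd_lr=0.1):
    h = encoder(x)
    delta = torch.zeros_like(h, requires_grad=True)
    # Inner loop: PGD-style search for a latent perturbation (inside an
    # L2 ball of radius eps) that maximizes the loss.
    for _ in range(pgd_steps):
        adv_loss = loss_fn(head(h.detach() + delta), y)
        (grad,) = torch.autograd.grad(adv_loss, delta)
        with torch.no_grad():
            delta += pgd_lr * grad          # ascend the loss
            norm = delta.norm()
            if norm > eps:                  # project back into the ball
                delta *= eps / norm
    # Outer step: update the model to predict correctly under the
    # adversarial latent perturbation.
    opt.zero_grad()
    loss = loss_fn(head(encoder(x) + delta.detach()), y)
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))
print(lat_step(x, y))
```

The design intuition, per the linked papers: attacks in activation space can surface failure modes that no input-space perturbation easily triggers, so training against them targets a broader threat model than standard adversarial training.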
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- Follow Stephen --
* Website: https://stephencasper.com/
* Email: (see Cas' website above)
* Twitter: https://twitter.com/StephenLCasper
* Google Scholar: https://scholar.google.com/citations?user=zaF8UJcAAAAJ
-- Further resources --
* Automated jailbreaks / red-teaming paper that Cas and I worked on together (2023) - https://twitter.com/soroushjp/status/1721950722626077067
* Sam Marks' paper on Sparse Autoencoders (SAEs) - https://arxiv.org/abs/2403.19647
* Interpretability papers involving downstream tasks - See section 4.2 of https://arxiv.org/abs/2401.14446
* MEMIT paper on model editing - https://arxiv.org/abs/2210.07229
* Motte & bailey definition - https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy
* Bomb-making papers tweet thread by Cas - https://twitter.com/StephenLCasper/status/1780370601171198246
* Paper: undoing safety with as few as 10 examples - https://arxiv.org/abs/2310.03693
* Recommended papers on latent adversarial training (LAT) -
  * https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d
  * Defending against failure modes using LAT - https://arxiv.org/abs/2403.05030
* Scoping (related to model unlearning) blog post by Cas - https://www.alignmentforum.org/posts/mFAvspg4sXkrfZ7FA/deep-forgetting-and-unlearning-for-safely-scoped-llms
* Cas' systems for reading for research -
  * Follow ML Twitter
  * Use a combination of the following two search tools for new arXiv papers:
    * https://vjunetxuuftofi.github.io/arxivredirect/
    * https://chromewebstore.google.com/detail/highlight-this-finds-and/fgmbnmjmbjenlhbefngfibmjkpbcljaj?pli=1
  * Skim a new paper or two a day + take brief notes in a searchable notes app
* Recommended people to follow to learn about how to impact the world through research -
  * Dan Hendrycks
  * Been Kim
  * Jacob Steinhardt
  * Nicolas Carlini
  * Paul Christiano
  * Ethan Perez
Recorded May 1, 2024