AI 2027: month-by-month model of intelligence explosion — Scott Alexander & Daniel Kokotajlo


Dwarkesh Podcast

2025-04-03

3 hours 4 minutes

Episode Description

Scott and Daniel break down every month from now until the 2027 intelligence explosion. Scott Alexander is author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel Kokotajlo resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking millions in equity to speak out about AI safety. We discuss misaligned hive minds, Xi and Trump waking up, and automated Ilyas researching AI progress. I came in skeptical, but I learned a tremendous amount by bouncing my objections off of them. I highly recommend checking out their new scenario planning document, AI 2027.

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

Sponsors

* WorkOS helps today's top AI companies get enterprise-ready. OpenAI, Cursor, Perplexity, Anthropic, and hundreds more use WorkOS to quickly integrate features required by enterprise buyers. To learn more about how you can make the leap to enterprise, visit workos.com
* Jane Street likes to know what's going on inside the neural nets they use. They just released a black-box challenge for Dwarkesh listeners, and I had a blast trying it out. See if you have the skills to crack it at janestreet.com/dwarkesh
* Scale's Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn about how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

----------

Timestamps

(00:00:00) - AI 2027
(00:06:56) - Forecasting 2025 and 2026
(00:14:41) - Why LLMs aren't making discoveries
(00:24:33) - Debating intelligence explosion
(00:49:45) - Can superintelligence actually transform science?
(01:16:54) - Cultural evolution vs superintelligence
(01:24:05) - Mid-2027 branch point
(01:32:30) - Race with China
(01:44:47) - Nationalization vs private anarchy
(02:03:22) - Misalignment
(02:14:52) - UBI, AI advisors, & human future
(02:23:00) - Factory farming for digital minds
(02:26:52) - Daniel leaving OpenAI
(02:35:15) - Scott's blogging advice

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Episode Transcript

  • Today, I have the great pleasure of chatting with Scott Alexander and Daniel Kokotajlo.

  • Scott is, of course, the author of the blog Slate Star Codex, now Astral Codex Ten.

  • It's actually been, as you know, a big bucket list item of mine to get you on the podcast.

  • So this is the first podcast you've ever done, right?

  • Yes.

  • And then Daniel is the director of the AI Futures Project.

  • And you have both just launched today something called AI 2027.

  • So, what is this?

  • Yeah, AI 2027 is our scenario trying to forecast the next few years of AI progress.

  • We're trying to do two things here.

  • First of all, we just want to have a concrete scenario at all.

  • So, you have all these people, Sam Altman, Dario Amodei,

  • Elon Musk saying we're gonna have AGI in three years, superintelligence in five years.

  • And people just think that's crazy

  • because right now we have chatbots that can do like a Google search,

  • not much more than that in a lot of ways.

  • And so people ask, how is there going to be AGI in three years?

  • What we wanted to do is provide a story, provide the transitional fossils.

  • So start right now, go up to 2027 when there's AGI,

  • 2028 when there's potentially superintelligence, and show on a month-by-month level what happens.