Andrej Karpathy — AGI is still a decade away


Dwarkesh Podcast

2025-10-18

2 hours 25 minutes

Episode description

The Andrej Karpathy episode. During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self driving took so long to crack, and what he sees as the future of education. It was a pleasure chatting with him. Watch on YouTube; read the transcript.

Sponsors
* Labelbox helps you get data that is more detailed, more accurate, and higher signal than you could get by default, no matter your domain or training paradigm. Reach out today at labelbox.com/dwarkesh
* Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our accounts, cash flows, AR, and AP all in one place. Apply online in minutes at mercury.com
* Google’s Veo 3.1 update is a notable improvement to an already great model. Veo 3.1’s generations are more coherent and the audio is even higher-quality. If you have a Google AI Pro or Ultra plan, you can try it in Gemini today by visiting https://gemini.google

Timestamps
(00:00:00) – AGI is still a decade away
(00:29:45) – LLM cognitive deficits
(00:40:05) – RL is terrible
(00:49:38) – How do humans learn?
(01:06:25) – AGI will blend into 2% GDP growth
(01:17:36) – ASI
(01:32:50) – Evolution of intelligence & culture
(01:42:55) – Why self driving took so long
(01:56:20) – Future of education

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Episode transcript

  • Today, I'm speaking with Andrej Karpathy.

  • Andrej, why do you say that this will be the decade of agents and not the year of agents?

  • Well, first of all, thank you for having me here.

  • I'm excited to be here.

  • So the quote that you just mentioned, it's the decade of agents.

  • That's actually a reaction to a pre-existing quote,

  • I should say, where I think some of the labs,

  • I'm not actually sure who said this,

  • but they were alluding to this being the year of agents with respect to LLMs and how they were going to evolve.

  • And I think I was triggered by that

  • because I feel like there's some over predictions going on in the industry.

  • And in my mind, this is really a lot more accurately described as the decade of agents.

  • And we have some very early agents that are actually extremely impressive and that I use daily,

  • you know, Claude and Codex and so on.

  • But I still feel like there's so much work to be done.

  • And so I think my reaction is like, we'll be working with these things for a decade.

  • They're going to get better and it's going to be wonderful.

  • But I think I was just reacting to the timelines, I suppose, of the implication.

  • What do you think will take a decade to accomplish?

  • What are the bottlenecks?