Will scaling work? [Narration]


Dwarkesh Podcast

2024-01-19

25 minutes

Episode description

This is a narration of my blog post, "Will scaling work?" You can read the full post here: https://www.dwarkeshpatel.com/p/will-scaling-work Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Episode transcript

  • Hey everyone, this is a narration of a blog post I wrote called Will Scaling Work.

  • You can find the full version on my website, dwarkeshpatel.com.

  • It was originally published December 26th, 2023.

  • Will Scaling Work.

  • When should we expect AGI?

  • If we can keep scaling LLMs++ and get better and more general performance as a result, then there's reason to expect powerful AIs by 2040 or much sooner, which can automate most cognitive labor and speed up further AI progress.

  • However, if scaling doesn't work, then the path to AGI seems much longer and more intractable, for reasons I explained in the post.

  • In order to think through both the pro and con arguments about scaling, I wrote the post as a debate between two characters I made up: Believer and Skeptic.

  • When will we run out of data?

  • Skeptic.

  • We're about to run out of high-quality language data next year.

  • Even taking hand-wavy scaling curves seriously implies that we'll need 1e35 FLOPs for an AI that is reliable and smart enough to write a scientific paper.

  • And that's table stakes for the abilities an AI would need to automate further AI research and continue progress when scaling becomes infeasible.

  • Which means we need 5 OOMs, that is, orders of magnitude, more data than we seem to have. [A worked version of this estimate follows the transcript.]

  • I'm worried that when people hear 5 OOMs off, how they register it is, oh,
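
[Editor's note: a minimal back-of-the-envelope sketch, in Python, of the Skeptic's "5 OOMs" arithmetic. The Chinchilla-style compute-optimal rule of thumb (training compute C ≈ 6·N·D FLOPs, with tokens D ≈ 20 × parameters N, so C ≈ 0.3·D²) and the ~1e13-token figure for the stock of high-quality text are assumptions of this sketch, not numbers stated in the transcript.]

    import math

    # Assumed target compute for a paper-writing AI (figure from the transcript): 1e35 FLOPs.
    TARGET_FLOPS = 1e35
    # Assumed stock of high-quality language data, ~1e13 tokens (rough outside estimate).
    AVAILABLE_TOKENS = 1e13

    # Chinchilla-style compute-optimal training: C ~= 6*N*D with D ~= 20*N,
    # so C ~= 0.3 * D**2 and the token demand is D = sqrt(C / 0.3).
    tokens_needed = math.sqrt(TARGET_FLOPS / 0.3)

    # Shortfall in orders of magnitude between demand and supply.
    shortfall_ooms = math.log10(tokens_needed / AVAILABLE_TOKENS)

    print(f"tokens needed:  {tokens_needed:.2e}")      # ~5.77e17
    print(f"tokens on hand: {AVAILABLE_TOKENS:.0e}")   # 1e13
    print(f"shortfall:      ~{shortfall_ooms:.1f} orders of magnitude")  # ~4.8

[Run as written, this prints a token demand of roughly 5.8e17 against 1e13 on hand, a shortfall of about 4.8, i.e. roughly 5, orders of magnitude, which matches the Skeptic's claim.]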