Carl Shulman (Pt 1) — Intelligence explosion, primate evolution, robot doublings, & alignment

Dwarkesh Podcast

2023-06-14

2 hours 44 minutes

Episode description

In terms of the depth and range of topics, this episode is the best I’ve done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.

This part is about Carl’s model of an intelligence explosion, which integrates everything from:

* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.

The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff. Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Read the full transcript here.

Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Intro
(00:01:32) - Intelligence Explosion
(00:18:03) - Can AIs do AI research?
(00:39:00) - Primate evolution
(01:03:30) - Forecasting AI progress
(01:34:20) - After human-level AGI
(02:08:39) - AI takeover scenarios

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Episode transcript

  • Human-level AI is deep, deep into an intelligence explosion.

  • Things like inventing the Transformer, or discovering Chinchilla scaling and doing your training runs more optimally, or creating FlashAttention.

  • That set of inputs probably would yield the kind of AI capabilities needed for an intelligence explosion.

  • You have a race between, on the one hand,

  • the project of getting strong interpretability and shaping motivations.

  • And on the other hand, these AIs, in ways that you don't perceive, making the AI takeover happen.

  • We spend more compute by having a larger brain than other animals,

  • and then we have a longer childhood.

  • That's like us having a bigger model and having more training time with it.

  • It seemed very implausible that we couldn't do better than completely brute force evolution.

  • How quickly are we running through those orders of magnitude?

  • Hey everybody, just wanted to give you a heads up.

  • So I ended up talking to Carl for like...

  • seven or eight hours.

  • So we ended up splitting this episode into two parts.

  • I don't want to put all of that on you at once.

  • In this part,

  • we get deep into Carl's model of an intelligence explosion and what that implies for alignment.

  • The next part, which we'll release next week,

  • is all about the specific mechanisms of an AI takeover.