2023-06-14
2 hours 44 minutes
Human-level AI is deep, deep into an intelligence explosion.
Things like inventing the Transformer, or discovering Chinchilla scaling and doing your training runs more optimally, or creating FlashAttention.
That set of inputs probably would yield the kind of AI capabilities needed for intelligence explosion.
You have a race between, on the one hand,
the project of getting strong interpretability and shaping motivations.
And on the other hand, these AIs, in ways that you don't perceive, making an AI takeover happen.
We spend more compute by having a larger brain than other animals,
and then we have a longer childhood.
For us, it's like having a bigger model and having more training time with it.
It seemed very implausible that we couldn't do better than completely brute force evolution.
How quickly are we running through those orders of magnitude?
Hey everybody, just wanted to give you a heads up.
So I ended up talking to Carl for like seven or eight hours.
So we ended up splitting this episode into two parts.
I don't want to put all of that on you at once.
In this part,
we get deep into Carl's model of an intelligence explosion and what that implies for alignment.
The next part, which we'll release next week,
is all about the specific mechanisms of an AI takeover.