2024-03-28
3 hours 12 minutes
Okay, today I have the pleasure to talk with two of my good friends, Sholto and Trenton.
Sholto.
Just make stuff.
I wasn't going to say anything.
Let's do this in reverse.
How am I going to start it with my good friends?
Yeah, Gemini 1.5, the context length, just wow.
Shit.
Anyways, Sholto. Noam Brown.
Noam Brown, the guy who wrote the diplomacy paper, he said this about Sholto.
He said, he's only been in the field for 1.5 years,
but people in AI know that he was one of the most important people behind Gemini's success.
And Trenton, who's at Anthropic,
works on mechanistic interpretability, and it was widely reported that he has solved alignment. So this will be a capabilities-only podcast. Alignment is already solved, so no need to discuss it further. Okay,
so let's start by talking about context lengths.
Yep.
It seemed to be underhyped, given how important it seems to me that you can just put a million tokens in the context. There's apparently some other news that, you know, got pushed to the front for some reason.
But yeah, tell me about how you see the future of long context lengths and what that implies for these models.
Yeah, so I think it's really underhyped, because until I started working on it,
I didn't really appreciate how much of a step-up in intelligence it was for the model to have the onboarding problem basically instantly solved.