Adam Marblestone – AI is missing something fundamental about the brain


Dwarkesh Podcast

2025-12-31

1 hr 49 min

Episode description

Adam Marblestone is CEO of Convergent Research. He’s had a very interesting past life: he was a research scientist at Google DeepMind on their neuroscience team and has worked on everything from brain-computer interfaces to quantum computing to nanotech and even formal mathematics. In this episode, we discuss how the brain learns so much from so little, what the AI field can learn from neuroscience, and the answer to Ilya’s question: how does the genome encode abstract reward functions? Turns out, they’re all the same question.

Watch on YouTube; read the transcript.

Sponsors

* Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn’t have investigated this question without it. Try Gemini 3 Pro today at gemini.google.com

* Labelbox helps you train agents to do economically valuable, real-world tasks. Labelbox’s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) – The brain’s secret sauce is the reward functions, not the architecture
(00:22:20) – Amortized inference and what the genome actually stores
(00:42:42) – Model-based vs model-free RL in the brain
(00:50:31) – Is biological hardware a limitation or an advantage?
(01:03:59) – Why a map of the human brain is important
(01:23:28) – What value will automating math have?
(01:38:18) – Architecture of the brain

Further reading

* Intro to Brain-Like-AGI Safety – Steven Byrnes’s theory of the learning vs. steering subsystem; referenced throughout the episode.
* A Brief History of Intelligence – Max Bennett’s great book on connections between neuroscience and AI.
* Adam’s blog, and Convergent Research’s blog on essential technologies.
* A Tutorial on Energy-Based Learning – Yann LeCun.
* What Does It Mean to Understand a Neural Network? – Kording & Lillicrap.
* E11 Bio and their brain connectomics approach.
* Sam Gershman on what dopamine is doing in the brain.
* Gwern’s proposal on training models on the brain’s hidden states.

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Episode transcript

  • The big million-dollar question that I’ve been trying to get the answer to through all these interviews with AI researchers: how does the brain do it?

  • We’re throwing way more data at these LLMs, and they still have only a small fraction of the total capabilities that a human does.

  • So what's going on?

  • Yeah.

  • I mean, this might be the quadrillion dollar question or something like that.

  • You could make an argument this is the most important question in science.

  • I don’t claim to know the answer.

  • I also don’t really think that the answer will necessarily come even from a lot of smart people thinking about it as much as they are.

  • My overall meta-level take is that we have to empower the field of neuroscience, to make it a more powerful field technologically and otherwise, so that it can actually crack a question like this.

  • But maybe the way that we would think about this now, with modern AI, neural nets, deep learning, is that there are certain key components of that.

  • There's the architecture.

  • There are maybe hyperparameters of the architecture.

  • How many layers do you have, or other properties of that architecture?

  • There is the learning algorithm itself.

  • How do you train it?

  • You know, backprop, gradient descent.

  • Is it something else?
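To make this decomposition concrete, here is a minimal sketch in Python using PyTorch (the toy MLP, its dimensions, and the random data are hypothetical, purely illustrative, not anything from the episode) that separates the three components just named: the architecture, its hyperparameters, and the learning algorithm.

    # Minimal sketch of the three components: the architecture,
    # hyperparameters of the architecture, and the learning algorithm.
    import torch
    import torch.nn as nn

    # Hyperparameters of the architecture ("how many layers do you have?")
    n_hidden_layers = 3
    hidden_dim = 64

    # The architecture itself: a plain multilayer perceptron.
    layers = [nn.Linear(10, hidden_dim), nn.ReLU()]
    for _ in range(n_hidden_layers - 1):
        layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
    layers.append(nn.Linear(hidden_dim, 1))
    model = nn.Sequential(*layers)

    # The learning algorithm: backprop plus stochastic gradient descent.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    x, y = torch.randn(32, 10), torch.randn(32, 1)  # toy training batch
    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # backprop computes the gradients...
        optimizer.step()  # ...gradient descent updates the weights

Swapping out any one of these pieces, a different architecture, different hyperparameters, or an update rule other than backprop, is exactly the kind of substitution the question “is it something else?” is pointing at.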