The Economist.
Modern artificial intelligence is seemingly unstoppable.
AI models can help you sort through your emails.
They can create images.
They can write code and even implement it.
You can use them to produce deeply researched reports on the topics that you might be interested in.
And if you answer a few questions,
they can even help you write a business plan for your next big idea.
The common factor in all these amazing skills is that
large language models do them all in the digital world.
What would it take for AIs to become useful in the physical space that we all live in?
Do LLMs even know or understand anything about the physics of the real world?
Well, to some extent, they do.
Because the language on which they've been trained contains information about how the world works.
But that knowledge is gappy.
It's massively incomplete.
And that's not good enough if you want AI models to, say, control a robotic body
that can interact in real time in a city, a factory or a classroom.
Or if you want your AI system to be able to give you instructions
on the best way to go from one part of a city to another.