2024-08-28
7 minutes | The Economist
Hello, Alok Jha here.
I host Babbage, our science and tech podcast.
Welcome to Editors' Picks.
Here's an article hand-picked from the latest edition of The Economist, read aloud.
I thought you might enjoy it.
HAL 9000,
the murderous computer in "2001: A Space Odyssey", is one of many examples in science fiction of an artificial intelligence, or AI, that outwits its human creators with deadly consequences.
Recent progress in AI, notably the release of ChatGPT,
has pushed the question of existential risk up the international agenda.
In March 2023, a host of tech luminaries, including Elon Musk,
called for a pause of at least six months in the development of AI over safety concerns.
At an AI safety summit in Britain last autumn,
politicians and boffins discussed how best to regulate this potentially dangerous technology.
Fast forward to today, though, and the mood has changed.
Fears that the technology was moving too quickly have been replaced by worries that AI may be less widely useful in its current form than expected,
and that tech firms may have overhyped it.
At the same time,
the process of drawing up rules has led policymakers to recognise the need to grapple with existing problems associated with AI,
such as bias, discrimination and violation of intellectual property rights.
As the final chapter in our schools briefs on AI explains,