2026-04-16
Most of us have interacted with an AI chatbot for something by now.
ChatGPT, Grok, Gemini, there are many.
But recently, Anthropic, a leading AI company and parent of the chatbot Claude,
announced they 've created a model that they say is too dangerous to be released to the public.
Obviously, a model with capabilities like this could do harm in the wrong hands.
And so we won't be releasing this model widely.
Anthropic says that Claude Mythos Preview is frighteningly good at hacking.
So good, in fact, that the likes of us can't be trusted to play with it.
Banks, including Goldman Sachs and JP Morgan, are warning of the risks.
And Anthropic's own researchers say they noticed Mythos was capable of being sneaky,
defying instructions and covering its tracks.
Has the moment arrived when we should all be terrified of an autonomous, sneaky hacking machine?
Or is this all a marketing trick from an industry built on hype and bluster?
I'm Tristan Redman in London.
And today on The Global Story, just how dangerous are AI models becoming?