The AI model that’s ‘too powerful’ to be released to the public


The Global Story

2026-04-16

26 minutes

Episode summary

Anthropic - one of Silicon Valley’s leading AI firms - recently announced that they have built a model which is too dangerous to be released to the public. Instead, they are only giving access to the model to a handful of big companies, to help them find security vulnerabilities. The company says the model has already found weak spots in “every major operating system and web browser”. Is this a genuine example of a company acting responsibly, or more of a carefully calibrated publicity move? We speak to the BBC’s North America tech correspondent, Lily Jamali, about whether this is a watershed moment.

Producers: Viv Jones and Aron Keller
Digital producer: Matt Pintus
Mix: Travis Evans
Executive producer: James Shield
Senior news editor: China Collins
Photo: Anthropic CEO Dario Amodei. Reuters/Denis Balibouse.

Episode transcript

  • This BBC podcast is supported by ads outside the UK.

  • Most of us have interacted with an AI chatbot for something by now.

  • ChatGPT, Grok, Gemini, there are many.

  • But recently, Anthropic, a leading AI company and parent of the chatbot Claude,

  • announced they've created a model that they say is too dangerous to be released to the public.

  • Obviously, capabilities in a model like this could do harm if in the wrong hands.

  • And so we won't be releasing this model widely.

  • Anthropic says that Claude Mythos Preview is frighteningly good at hacking.

  • So good, in fact, that the likes of us can't be trusted to play with it.

  • Banks, including Goldman Sachs and JP Morgan, are warning of the risks.

  • And Anthropic's own researchers say they noticed Mythos was capable of being sneaky,

  • defying instructions and covering its tracks.

  • Has the moment arrived when we should all be terrified of an autonomous, sneaky hacking machine?

  • Or is this all a marketing trick from an industry built on hype and bluster?

  • I'm Tristan Redman in London.

  • And today on The Global Story, just how dangerous are AI models becoming?