2026-03-04
19 minutes
For Scientific American's Science Quickly, I'm Kendra Pierre-Louis, in for Rachel Feltman.
AI is everywhere.
It's in your phones, in your internet searches, in defense software.
And it's expanding.
The big tech giants — Alphabet, Microsoft, Meta, and Amazon —
are planning on spending nearly $700 billion this year alone on building out AI infrastructure.
And yet, even as companies pour tremendous time and energy into AI,
there remain concerns about the safety and efficacy of such technologies.
There have been several lawsuits alleging suicides linked to AI chatbots.
And more recently, Thomas Germain, a tech reporter at the BBC,
conducted a personal experiment into how a motivated individual or business can get ChatGPT and Google Search's AI Overviews to spread lies.
We talked to Thomas to find out just how easy it is to hack these common AI tools and what the consequences of that could be.
Hi Thomas, thanks for taking the time to join us today.
Thanks for having me on.
So my understanding is you hacked ChatGPT?
That's right.
So I got a tip a couple of weeks ago that manipulating what AI tools like ChatGPT or Google Gemini or the little AI Overview at the top of Google Search say to other people can be as easy as publishing an article on your own website,
like a blog post.
Apparently people are doing this across the whole internet.