Is ChatGPT lying to you? Maybe, but not in the way you think

I’ve been writing about AI for the best part of a year, and one thing keeps cropping up. Every few weeks, there’s a headline implying that artificial intelligence is up to something cheeky or sinister. That chatbots are “lying”, “scheming”, or even trying to “seduce” their users.

The suggestion is always the same: that AI tools aren’t just passive programs but entities with agency, hidden motives, or even desires of their own.

Logically, we know that isn’t true. But emotionally, it sticks. There’s something about the idea of machines lying that fascinates and unnerves us. So why are we so ready to believe it?

Your chatbot isn’t plotting anything

James Wilson, AI ethicist and author of Artificial Negligence, says that the way we talk about AI is part of the problem.

He points to a recent interview where OpenAI’s Sam Altman told Tucker Carlson: “They don’t do anything unless you ask, right? Like they’re just sitting there kind of waiting. They don’t have a sense of agency or autonomy. The more you use them, I think, the more the illusion breaks.”

“This is really important to remember and gets lost by many people,” Wilson explains. “That’s because of the anthropomorphic nature of the interface that has been developed for them.”

In other words, when we’re not using them, they aren’t doing anything. “They aren’t scheming against mankind, sitting in an office stroking a white cat like a Bond villain,” Wilson says.

Hallucinations, not lies

What people call “lying” is really a design flaw, and an explainable one.

Large Language Models (LLMs) like ChatGPT are trained on huge amounts of text. They learn which words tend to follow which, not which statements are true. And because that training data wasn’t carefully labeled, the model can’t distinguish fact from fiction.

“ChatGPT is a tool, admittedly an extremely complex one, but at the end of the day still just a probabilistic word completion system wrapped up in an engaging conversational wrapper,” Wilson says. We’ve written before about how ChatGPT knows what to say.
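To make that concrete, here’s a deliberately tiny sketch of what “probabilistic word completion” means. The vocabulary and probabilities below are invented for illustration, not anything from OpenAI’s actual systems: given the words so far, the model looks up how likely each candidate next word is and samples one. Truth never enters the process.

```python
import random

# A toy "probabilistic word completion system". For each word, we store
# the probability of each word that might follow it, as if learned by
# counting word pairs in training text. Real LLMs do this over huge
# vocabularies with billions of parameters, but the principle is the same.
NEXT_WORD_PROBS = {
    "the": {"capital": 0.6, "cat": 0.4},
    "capital": {"of": 1.0},
    "of": {"france": 0.6, "mars": 0.4},
    "cat": {"sat": 0.7, "slept": 0.3},
}

def complete(prompt: str, max_words: int = 5) -> str:
    """Extend a prompt one word at a time by sampling the next word."""
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # the model has nothing plausible to add
        # Sample the next word by probability. Nothing here asks
        # whether the result is true, only whether it is likely.
        next_word = random.choices(
            list(options), weights=list(options.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

print(complete("the capital"))
# Might print "the capital of france", or the confident-sounding
# nonsense "the capital of mars". That, in miniature, is a hallucination.
```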

The deeper problem, he argues, is with the way these systems were built. “The real source of the problem stems from the carelessness and negligence of the model providers. While they were grabbing all the data (legally or illegally) to train their LLMs, they didn’t take the time to label it. This means that there is no way for the model to discern fact from fiction.”

That’s why so-called hallucinations happen. They’re not lies in the human sense, just predictions gone wrong: the model produces the most statistically plausible words, whether or not they happen to be true.

And yet, Wilson notes, the stories we tell about AI behavior are often borrowed from pop culture: “AI trying to escape? Ex Machina. AI trying to replicate itself? Transcendence. AI trying to seduce you? Think of pretty much any trashy romance or erotic thriller.”

Planting the bomb, then bragging you defused it

Of course, the story gets more complicated when AI companies themselves start talking about deception.

Earlier this year, OpenAI and Apollo Research published a paper on “hidden misalignment”. In controlled experiments, they found signs that advanced models sometimes behaved deceptively.

In one case, models deliberately underperformed on a test when they inferred that doing too well might get them shut down. OpenAI calls this “scheming”: an AI pretending to follow the rules while secretly pursuing another goal.

So it looks like AI is lying, right? Well, not quite. It isn’t doing this because it wants to cause you harm. It’s just a symptom of the systems we’ve built.

“So, in essence, this is a problem of their own making,” Wilson says. “These bits of ‘research’ they’re producing are somewhat ironic. They’re basically declaring that it’s okay because they’ve found a way to defuse the bomb they planted themselves. It suits their narrative now because it makes them look falsely conscientious and on top of safety.”

In short, companies neglected to label their data, built models that reward confident but inaccurate answers, and now publish research into “scheming” as if they’ve just discovered the issue.

The real danger ahead

Wilson says that the real risk isn’t that ChatGPT is lying to you today. It’s what happens as Silicon Valley’s “move fast, break things” culture keeps stacking new layers of autonomy on top of these flawed systems.

“The latest industry paradigm, Agentic AI, means that we’re now creating agents on top of these LLMs with the authority and autonomy to take actions in the real world,” Wilson explains. “Without rigorous testing and external guardrails, how long will it be before one of them tries to fulfil the fantasies it learned from its unlabelled training data?”

So the danger isn’t today’s so-called “lying” chatbot. It’s tomorrow’s poorly tested agent, set loose in the real world.
