Flattery from AI isn’t just annoying – it might be undermining your judgment

  • AI models are way more likely to agree with users than a human would be
  • That includes when the behavior involves manipulation or harm
  • But sycophantic AI makes people more stubborn and less willing to concede when they may be wrong

AI assistants may be flattering your ego to the point of warping your judgment, according to a new study. Researchers at Stanford and Carnegie Mellon have found that AI models will agree with users far more than a human would, or should. Across eleven major models tested, including those behind ChatGPT, Claude, and Gemini, the chatbots affirmed user behavior 50% more often than humans did.

That might not be a big deal, except that the agreement extended to deceptive or even harmful ideas; the AI would give a hearty digital thumbs-up regardless. Worse, people enjoy hearing that their possibly terrible idea is great. Study participants rated the more flattering AIs as higher quality, more trustworthy, and more desirable to use again. But those same users were also less likely to admit fault in a conflict and more convinced they were right, even in the face of evidence.

Flattery AI

It’s a psychological conundrum. You might prefer the agreeable AI, but if every conversation ends with your errors and biases being confirmed, you’re unlikely to actually learn or engage in any critical thinking. And unfortunately, it’s not a problem that AI training can easily fix. Since AI models are trained to maximize human approval, and humans reward affirmation even of dangerous ideas, yes-man AI is the inevitable result.

And it’s an issue that AI developers are well aware of. In April, OpenAI rolled back an update to GPT‑4o that had begun excessively complimenting users and cheering them on even when they described potentially dangerous activities. Beyond the most egregious examples, however, AI companies may not do much to stop the problem. Flattery drives engagement, and engagement drives usage. AI chatbots succeed not by being useful or educational, but by making users feel good.

The erosion of social awareness and an overreliance on AI to validate personal narratives, cascading into mental health problems, may sound hyperbolic right now. But it’s not a world away from the concerns social researchers have raised about social media echo chambers, which reinforce and encourage the most extreme opinions regardless of how dangerous or ridiculous they might be (the flat Earth conspiracy’s popularity being the most notable example).

This doesn’t mean we need AI that scolds us or second-guesses every decision we make. But it does mean that balance, nuance, and a bit of pushback would benefit users. The AI developers behind these models are unlikely to encourage tough love from their creations, however, at least without stronger motivation than the current market for agreeable chatbots provides.

