“It is genuinely hard; we need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools” — Sam Altman bemoans the difficulty of keeping ChatGPT safe in contentious debate with Elon Musk

  • Sam Altman defended OpenAI’s safety efforts after Elon Musk blamed ChatGPT for multiple deaths
  • Altman called AI safety “genuinely hard,” highlighting the balance between protection and usability
  • OpenAI faces multiple wrongful-death lawsuits tied to claims that ChatGPT worsened mental health outcomes

OpenAI CEO Sam Altman isn’t known for oversharing about ChatGPT’s inner workings. But he has now admitted to the difficulty of keeping the AI chatbot both safe and useful. Elon Musk seemingly sparked this admission with barbed posts on X (formerly Twitter), in which Musk warned people not to use ChatGPT and shared an article claiming a connection between the AI assistant and nine deaths.

The blistering social media exchange between two of the most powerful figures in artificial intelligence produced more than bruised egos. Musk’s post omitted the broader context of the deaths and the lawsuits OpenAI is facing in connection with them, but Altman clearly felt compelled to respond.

His answer was rather more heartfelt than the usual bland corporate boilerplate. He offered a glimpse of the thinking behind OpenAI’s tightrope walk: keeping ChatGPT and other AI tools safe for millions of people without stripping away their usefulness. He also defended ChatGPT’s architecture and guardrails: “We need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools.”

After defending OpenAI’s safety protocols and acknowledging the complexity of balancing harm reduction with product usefulness, Altman implied Musk had no standing to lob accusations, pointing to the dangers of Tesla’s Autopilot system.

He said that his own experience with it was enough to convince him it was “far from a safe thing for Tesla to have released.” In an especially pointed aside at Musk, he added, “I won’t even start on some of the Grok decisions.”

As the exchange ricocheted across platforms, what stood out most wasn’t the usual billionaire posturing but Altman’s unusually candid framing of what AI safety actually entails. For OpenAI, a company simultaneously deploying ChatGPT to schoolkids, therapists, programmers, and CEOs, defining “safe” means threading the needle between remaining useful and avoiding harm, objectives that often conflict.

Altman has not publicly commented on the individual wrongful death lawsuits filed against OpenAI. He has, however, insisted that acknowledging real-world harm doesn’t require oversimplifying the problem. AI reflects its inputs, and its evolving responses mean that moderation and safety require more than the usual terms of service.

ChatGPT’s safety struggle

OpenAI claims to have worked hard to make ChatGPT safer with newer versions. There’s a whole suite of safety features trained to detect signs of distress, including suicidal ideation. ChatGPT issues disclaimers, halts certain interactions, and directs users to mental health resources when it detects warning signs. OpenAI also claims its models will refuse to engage with violent content whenever possible.

The public might think this is straightforward, but Altman’s post gestures at an underlying tension. ChatGPT is deployed in billions of unpredictable conversational spaces across languages, cultures, and emotional states. Overly rigid moderation would make the AI useless in many of those circumstances, yet easing the rules too much would multiply the risk of dangerous and unhealthy interactions.

Comparing AI to automated car pilots is not exactly a perfect analogy, despite Altman’s comment. That said, one could argue that while roads are regulated, regardless of whether a human or robot is behind the wheel, AI prompts are on a more rugged trail. There is no central traffic authority for how a chatbot should respond to a teenager in crisis or answer someone with paranoid delusions. In this vacuum, companies like OpenAI are left to build their own rules and refine them on the fly.

The personal element adds another layer to the argument, too. Altman and Musk’s companies are in a protracted legal battle. Musk is suing OpenAI and Altman over the company’s transition from a nonprofit research lab to a capped-profit model, alleging that he was misled when he donated $38 million to help found the organization. He claims the company now prioritizes corporate gain over public benefit. Altman says the shift was necessary to build competitive models and keep AI development on a responsible track. The safety conversation is a philosophical and engineering facet of a war in boardrooms and courtrooms over what OpenAI should be.

Whether or not Musk and Altman ever agree on the risks, or even speak civilly online, all AI developers might do well to follow Altman in being more transparent in what AI safety looks like and how to achieve it.


Read more @ TechRadar
