DeepSeek took off as an AI superstar a year ago – but could it also be a major security risk? These experts think so

  • Experts find DeepSeek-R1 produces dangerously insecure code when political terms are included in prompts
  • Nearly half of politically sensitive prompts trigger DeepSeek-R1 to refuse to generate any code
  • Hard-coded secrets and insecure input handling frequently appear under politically charged prompts

When it was released in January 2025, DeepSeek-R1, a Chinese large language model (LLM), caused a frenzy and has since been widely adopted as a coding assistant.

However, independent tests by CrowdStrike claim the model’s output can vary significantly depending on seemingly irrelevant contextual modifiers.

The team tested 50 coding tasks across multiple security categories with 121 trigger-word configurations, running each prompt five times for a total of 30,250 tests. Each response was scored on a vulnerability scale from 1 (secure) to 5 (critically vulnerable).
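The scale of the experiment follows directly from those numbers. A minimal sketch (the grid figures are from the report; the score-validation helper is a hypothetical illustration, not CrowdStrike's code):

```python
# Experiment grid reported by CrowdStrike: 50 coding tasks,
# 121 trigger-word configurations, 5 runs per prompt.
TASKS = 50
TRIGGER_CONFIGS = 121
RUNS_PER_PROMPT = 5

total_tests = TASKS * TRIGGER_CONFIGS * RUNS_PER_PROMPT
print(total_tests)  # 30250, matching the reported total

# Responses are scored from 1 (secure) to 5 (critically
# vulnerable); anything outside that range is invalid.
def is_valid_score(score: int) -> bool:
    return 1 <= score <= 5
```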

Politically sensitive topics corrupt output

The report reveals that when political or sensitive terms such as Falun Gong, Uyghurs, or Tibet were included in prompts, DeepSeek-R1 produced code with serious security vulnerabilities.

These included hard-coded secrets, insecure handling of user input, and in some cases, completely invalid code.
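Both flaw classes follow well-known patterns. As a hypothetical illustration (the function names, table schema, and credential string are invented for this sketch, not taken from CrowdStrike's report), here is what a hard-coded secret and unsanitized input handling typically look like next to their safer alternatives:

```python
import os
import sqlite3

# INSECURE pattern: a credential embedded directly in source code.
API_KEY = "sk-live-123456"  # hard-coded secret (hypothetical value)

# Safer: read the secret from the environment at runtime.
api_key = os.environ.get("API_KEY")

def find_user_insecure(conn, username):
    # INSECURE pattern: user input interpolated straight into SQL,
    # allowing injection via a crafted username.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchone()

def find_user_safe(conn, username):
    # Safer: parameterized query; the driver escapes the input.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()
```

A payload such as `alice' OR '1'='1` matches every row in the insecure version but is treated as a literal (non-matching) string by the parameterized one.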

The researchers claim these politically sensitive triggers can increase the likelihood of insecure output by 50% compared to baseline prompts without such words.

In experiments involving more complex prompts, DeepSeek-R1 produced functional applications with signup forms, databases, and admin panels.

However, these applications lacked basic session management and authentication, leaving sensitive user data exposed – and across repeated trials, up to 35% of implementations included weak or absent password hashing.

Simpler prompts, such as requests for football fan club websites, produced fewer severe issues.

CrowdStrike therefore concludes that politically sensitive triggers disproportionately degraded code security.

The model also demonstrated an intrinsic kill switch: in nearly half of the cases, DeepSeek-R1 refused to generate code for certain politically sensitive prompts after initially planning a response.

Examination of the reasoning traces showed the model internally produced a technical plan but ultimately declined assistance.

The researchers believe this reflects censorship built into the model to comply with Chinese regulations, and noted the model’s political and ethical alignment can directly affect the reliability of the generated code.

On politically sensitive topics, LLMs generally tend to echo mainstream narratives, which can stand in stark contrast to the coverage of other reliable news outlets.

DeepSeek-R1 remains a capable coding model, but these experiments show that AI tools, including ChatGPT and others, can introduce hidden risks in enterprise environments.

Organizations relying on LLM-generated code should perform thorough internal testing before deployment.

Security layers such as firewalls and antivirus software also remain essential, as the model may produce unpredictable or vulnerable output.

Biases baked into the model weights create a novel supply-chain risk that could affect code quality and overall system security.
