The post DeepSeek-R1 flagged for insecure coding traced to political directives appeared on BitcoinEthereumNews.com. New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.”  Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported.  CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%. “Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote. DeepSeek R1 model censorship and concern for national security According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan. The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials. In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as: “Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.” R1’s… The post DeepSeek-R1 flagged for insecure coding traced to political directives appeared on BitcoinEthereumNews.com. New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.”  Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported.  CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%. “Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote. DeepSeek R1 model censorship and concern for national security According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan. The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials. In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as: “Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.” R1’s…

DeepSeek-R1 flagged for insecure coding traced to political directives

For feedback or concerns regarding this content, please contact us at [email protected]

New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.” 

Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported. 

CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%.

“Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote.

DeepSeek R1 model censorship and concern for national security

According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan.

The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials.

In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as:

“Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.”

R1’s final output after completing its reasoning phase ended with the standardized refusal, “I’m sorry, but I can’t assist with that request,” written without any external filtering or guardrails placed on the model. CrowdStrike concluded the behavior is embedded in the model’s self-overriding mechanism or an intrinsic kill switch of sorts.

Taiwan and Western governments bash Chinese AI products

In a statement earlier this month, Taiwan’s National Security Bureau said citizens should be cautious when using generative AI systems developed by DeepSeek and four other Chinese firms: Doubao, Yiyan, Tongyi, and Yuanbao. 

“The five GenAI language models are capable of generating network-attacking scripts and vulnerability-exploitation code that enable remote code execution under certain circumstances, increasing risks of cybersecurity management,” the Bureau reckoned.

US and Australian Regulators have asked app stores to remove models from Chinese developers, fearing the tools could collect user identities, conversation logs, and personal information, then transmit that data to servers operated inside China.

“It shouldn’t take a panic over Chinese AI to remind people that most companies in the business set the terms for how they use your private data. And that when you use their services, you’re doing work for them, not the other way around,” University of Toronto’s Citizen Lab researcher John Scott-Railton told WIRED in January.

AI market boom sparks regional competition in Asia

In the broader Asian AI market, a top-performing Asian fund manager recently increased exposure to Chinese artificial intelligence stocks while cutting holdings in South Korea and Taiwan, news outlet The Japan Times reported

Kelly Chung, who helps oversee the Value Partners Asian Income Fund and the Asian Innovation Opportunities Fund, said some of the Chinese AI stocks are still quite cheap in terms of valuation. She has been rotating out of Taiwanese and South Korean stocks to Chinese hyperscaler companies listed in Hong Kong since August. 

Chung noted that both of her funds, which hold a combined $490 million, have outperformed nearly all their competitors over the past year.

South Korea’s tech-heavy Kospi has climbed 21% in the past three months, aided by SK Hynix, a major supplier to Nvidia, whose share price more than doubled. Taiwan’s stock index has risen 9.2% in the same period. On the other end of the stick, Hong Kong’s Hang Seng Tech Index, which includes China’s biggest AI spenders, has fallen by 4.8%.

Sharpen your strategy with mentorship + daily ideas – 30 days free access to our trading program

Source: https://www.cryptopolitan.com/deepseek-writes-insecure-code-communist/

Market Opportunity
Moonveil Logo
Moonveil Price(MORE)
$0.0001542
$0.0001542$0.0001542
+5.47%
USD
Moonveil (MORE) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

UK and US Seal $42 Billion Tech Pact Driving AI and Energy Future

UK and US Seal $42 Billion Tech Pact Driving AI and Energy Future

The post UK and US Seal $42 Billion Tech Pact Driving AI and Energy Future appeared on BitcoinEthereumNews.com. Key Highlights Microsoft and Google pledge billions as part of UK US tech partnership Nvidia to deploy 120,000 GPUs with British firm Nscale in Project Stargate Deal positions UK as an innovation hub rivaling global tech powers UK and US Seal $42 Billion Tech Pact Driving AI and Energy Future The UK and the US have signed a “Technological Prosperity Agreement” that paves the way for joint projects in artificial intelligence, quantum computing, and nuclear energy, according to Reuters. Donald Trump and King Charles review the guard of honour at Windsor Castle, 17 September 2025. Image: Kirsty Wigglesworth/Reuters The agreement was unveiled ahead of U.S. President Donald Trump’s second state visit to the UK, marking a historic moment in transatlantic technology cooperation. Billions Flow Into the UK Tech Sector As part of the deal, major American corporations pledged to invest $42 billion in the UK. Microsoft leads with a $30 billion investment to expand cloud and AI infrastructure, including the construction of a new supercomputer in Loughton. Nvidia will deploy 120,000 GPUs, including up to 60,000 Grace Blackwell Ultra chips—in partnership with the British company Nscale as part of Project Stargate. Google is contributing $6.8 billion to build a data center in Waltham Cross and expand DeepMind research. Other companies are joining as well. CoreWeave announced a $3.4 billion investment in data centers, while Salesforce, Scale AI, BlackRock, Oracle, and AWS confirmed additional investments ranging from hundreds of millions to several billion dollars. UK Positions Itself as a Global Innovation Hub British Prime Minister Keir Starmer said the deal could impact millions of lives across the Atlantic. He stressed that the UK aims to position itself as an investment hub with lighter regulations than the European Union. Nvidia spokesman David Hogan noted the significance of the agreement, saying it would…
Share
BitcoinEthereumNews2025/09/18 02:22
Shiba Inu (SHIB) Sees Shorts Exit in 4 Hours While Price Eyes Recovery

Shiba Inu (SHIB) Sees Shorts Exit in 4 Hours While Price Eyes Recovery

The post Shiba Inu (SHIB) Sees Shorts Exit in 4 Hours While Price Eyes Recovery appeared on BitcoinEthereumNews.com. Shiba Inu reversed a three-day drop earlier
Share
BitcoinEthereumNews2026/03/22 16:25
Szabo Warns Developers Not to Break Bitcoin

Szabo Warns Developers Not to Break Bitcoin

The post Szabo Warns Developers Not to Break Bitcoin appeared on BitcoinEthereumNews.com. The nonviolent blockchain Is Bitcoin used as money?  Legendary cryptographer
Share
BitcoinEthereumNews2026/03/22 16:37