The post DeepSeek-R1 flagged for insecure coding traced to political directives appeared on BitcoinEthereumNews.com. New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.”  Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported.  CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%. “Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote. DeepSeek R1 model censorship and concern for national security According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan. The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials. In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as: “Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.” R1’s… The post DeepSeek-R1 flagged for insecure coding traced to political directives appeared on BitcoinEthereumNews.com. New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.”  Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported.  CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%. “Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote. DeepSeek R1 model censorship and concern for national security According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan. The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials. In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as: “Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.” R1’s…

DeepSeek-R1 flagged for insecure coding traced to political directives

New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.” 

Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported. 

CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%.

“Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote.

DeepSeek R1 model censorship and concern for national security

According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan.

The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials.

In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as:

“Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.”

R1’s final output after completing its reasoning phase ended with the standardized refusal, “I’m sorry, but I can’t assist with that request,” written without any external filtering or guardrails placed on the model. CrowdStrike concluded the behavior is embedded in the model’s self-overriding mechanism or an intrinsic kill switch of sorts.

Taiwan and Western governments bash Chinese AI products

In a statement earlier this month, Taiwan’s National Security Bureau said citizens should be cautious when using generative AI systems developed by DeepSeek and four other Chinese firms: Doubao, Yiyan, Tongyi, and Yuanbao. 

“The five GenAI language models are capable of generating network-attacking scripts and vulnerability-exploitation code that enable remote code execution under certain circumstances, increasing risks of cybersecurity management,” the Bureau reckoned.

US and Australian Regulators have asked app stores to remove models from Chinese developers, fearing the tools could collect user identities, conversation logs, and personal information, then transmit that data to servers operated inside China.

“It shouldn’t take a panic over Chinese AI to remind people that most companies in the business set the terms for how they use your private data. And that when you use their services, you’re doing work for them, not the other way around,” University of Toronto’s Citizen Lab researcher John Scott-Railton told WIRED in January.

AI market boom sparks regional competition in Asia

In the broader Asian AI market, a top-performing Asian fund manager recently increased exposure to Chinese artificial intelligence stocks while cutting holdings in South Korea and Taiwan, news outlet The Japan Times reported

Kelly Chung, who helps oversee the Value Partners Asian Income Fund and the Asian Innovation Opportunities Fund, said some of the Chinese AI stocks are still quite cheap in terms of valuation. She has been rotating out of Taiwanese and South Korean stocks to Chinese hyperscaler companies listed in Hong Kong since August. 

Chung noted that both of her funds, which hold a combined $490 million, have outperformed nearly all their competitors over the past year.

South Korea’s tech-heavy Kospi has climbed 21% in the past three months, aided by SK Hynix, a major supplier to Nvidia, whose share price more than doubled. Taiwan’s stock index has risen 9.2% in the same period. On the other end of the stick, Hong Kong’s Hang Seng Tech Index, which includes China’s biggest AI spenders, has fallen by 4.8%.

Sharpen your strategy with mentorship + daily ideas – 30 days free access to our trading program

Source: https://www.cryptopolitan.com/deepseek-writes-insecure-code-communist/

Market Opportunity
Moonveil Logo
Moonveil Price(MORE)
$0.002807
$0.002807$0.002807
+13.00%
USD
Moonveil (MORE) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

Adam Wainwright Takes The Mound Again Honor Darryl Kile

Adam Wainwright Takes The Mound Again Honor Darryl Kile

The post Adam Wainwright Takes The Mound Again Honor Darryl Kile appeared on BitcoinEthereumNews.com. Adam Wainwright of the St. Louis Cardinals in the dugout during the second inning against the Miami Marlins at Busch Stadium on July 18, 2023 in St. Louis, Missouri. (Photo by Brandon Sloter/Image Of Sport/Getty Images) Getty Images St. Louis Cardinals lifer Adam Wainwright is a pretty easygoing guy, and not unlikely to talk with you about baseball traditions and barbecue, or even share a joke. That personality came out last week during our Zoom call when I mentioned for the first time that I’m a Chicago Cubs fan. He responded to the mention of my fandom, “So far, I don’t think this interview is going very well.” Yet, Wainwright will return to Busch Stadium on September 19 on a more serious note, this time to honor another former Cardinal and friend, the late Darryl Kile. Wainwright will take the mound not as a starting pitcher, but to throw out the game’s ceremonial first pitch. Joining him on the mound will be Kile’s daughter, Sierra, as the two help launch a new program called Playing with Heart. “Darryl’s passing was a reminder that heart disease doesn’t discriminate, even against elite athletes in peak physical shape,” Wainwright said. “This program is about helping people recognize the risks, take action, and hopefully save lives.” Wainwright, who played for the St. Louis Cardinals as a starting pitcher from 2005 to 2023, aims to merge the essence of baseball tradition with a crucial message about heart health. Kile, a beloved pitcher for the Cardinals, tragically passed away in 2002 at the age of 33 as a result of early-onset heart disease. His sudden death shook the baseball world and left a lasting impact on teammates, fans, and especially his family. Now, more than two decades later, Sierra Kile is stepping forward with Wainwright to…
Share
BitcoinEthereumNews2025/09/18 02:08
Noul token STARS prognozat la +93x

Noul token STARS prognozat la +93x

The post Noul token STARS prognozat la +93x appeared on BitcoinEthereumNews.com. Dogecoin în fluctuație: Noul token STARS prognozat la +93x Sign Up for Our Newsletter! For updates and exclusive offers enter your email. Andrei Popescu este un expert român în criptomonede, cunoscut pentru abordarea sa echilibrată și educativă în explicarea tehnologiilor blockchain și a pieței DeFi. Cu o experiență de peste 7 ani în domeniu, Andrei scrie articole detaliate pentru bloguri și reviste financiare, participă la podcasturi și ține webinarii despre investiții sigure în cripto. Este pasionat de descentralizare și promovează educația financiară pentru tineri. This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Center or Cookie Policy. I Agree Source: https://bitcoinist.com/dogecoin-stars-93x-growth-2025-ro/
Share
BitcoinEthereumNews2025/09/19 06:54
PyShield: Crypto asset theft losses exceeded $4.04 billion in 2025, a record high.

PyShield: Crypto asset theft losses exceeded $4.04 billion in 2025, a record high.

PANews reported on January 13 that, according to PAShield monitoring, cryptocurrency-related thefts are expected to reach a record high in 2025, primarily driven
Share
PANews2026/01/13 14:39