
Chinese AI firm says its model cost just $294,000 to train

2025/09/19 04:20
4 min read

China’s DeepSeek has claimed its flagship AI system, known as R1, was trained for just $294,000, which is a fraction of the sums believed to be spent by US competitors.

The details were published in a peer-reviewed paper in Nature this week and are likely to fuel further debate over Beijing’s ambitions in the global artificial intelligence race. The Hangzhou-based company said the reasoning-focused model was trained using 512 Nvidia H800 chips. This hardware was designed specifically for China after the US prohibited sales of the more powerful H100 and A100 processors.

The paper, which was co-authored by founder Liang Wenfeng, marks the first time the firm has disclosed such costs.

DeepSeek uses a fraction of US models’ cost

In January, the release of DeepSeek’s cheaper AI tools destabilized global markets, triggering a sell-off in tech shares on fears that the company’s low-cost approach could undercut established giants such as Nvidia and OpenAI.

However, Liang and his team have since kept a low profile, surfacing only for sporadic product updates.

The reported $294,000 price tag stands in stark contrast to estimates from American firms.

In 2023, OpenAI chief executive Sam Altman said: “Training foundational models cost much more than $100 million.” However, he did not provide a specific breakdown.

Training large language models involves running banks of powerful chips for extended periods, consuming enormous amounts of electricity while processing text and code. Industry observers have long assumed the bill for such projects runs into the tens or even hundreds of millions.

That assumption is now being challenged. In a supplementary document, DeepSeek acknowledged that it owns A100 chips and used them in early development before moving full-scale training to its H800 cluster. According to the firm, the final training stage ran for 80 hours.
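Taken at face value, the reported figures allow a simple back-of-envelope check. The sketch below assumes the $294,000 covers rented GPU time for the 80-hour final stage on the 512-chip cluster; the article does not confirm that breakdown, so the implied hourly rate is purely illustrative.

```python
# Back-of-envelope check of the reported figures (assumption: the cost
# covers only rented GPU time for the final training stage).
CHIPS = 512          # Nvidia H800s, per the Nature paper
HOURS = 80           # duration of the final training stage
COST_USD = 294_000   # reported training cost

gpu_hours = CHIPS * HOURS    # total GPU-hours consumed
rate = COST_USD / gpu_hours  # implied cost per GPU-hour

print(f"{gpu_hours:,} GPU-hours -> ${rate:.2f}/GPU-hour")
```

Under those assumptions, the numbers work out to roughly 41,000 GPU-hours at a little over $7 per GPU-hour.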

Even though Nvidia has insisted that the Chinese startup has access only to its H800 processors, American officials remain sceptical. Earlier this year, US sources told Reuters that DeepSeek had illegally obtained large volumes of H100 chips, which are banned from export to China.

Putting innovation under the microscope

R1 has drawn attention not only for its low training costs but also because it may be the first major model to undergo formal peer review.

“This is a very welcome precedent, and if we don’t have this norm of sharing, it becomes very hard to evaluate risks,” said Lewis Tunstall, a machine-learning engineer at Hugging Face who reviewed the Nature paper.

The review process prompted DeepSeek to clarify technical details, including how its model was trained and what safeguards were in place.

“Going through a rigorous peer-review process certainly helps verify the validity and usefulness of the model,” said Huan Sun, an AI researcher at Ohio State University.

DeepSeek’s key breakthrough was a pure reinforcement learning approach. According to the paper, instead of relying on human-curated reasoning examples, the model was rewarded for solving problems correctly and gradually developed its own problem-solving strategies.

The firm says this trial-and-error system allowed R1 to verify its workings without copying human tactics.
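The trial-and-error idea can be illustrated with a toy sketch. This is a generic illustration of outcome-based reinforcement learning, not DeepSeek's actual training code, and all names in it are hypothetical: candidate strategies are sampled, only the final answer is verified, and whichever strategy earns reward is reinforced.

```python
import random

def verify(answer, target):
    """Reward 1.0 for a correct final answer, 0.0 otherwise --
    no human-written reasoning traces are imitated."""
    return 1.0 if answer == target else 0.0

# Hypothetical strategies for computing 2*x + 1; only one is correct.
strategies = {
    "double_then_add": lambda x: 2 * x + 1,
    "add_then_double": lambda x: (x + 1) * 2,
}
weights = {name: 1.0 for name in strategies}

random.seed(0)
for _ in range(200):
    x = random.randint(0, 9)
    # Sample a strategy in proportion to its current weight.
    name = random.choices(list(weights), weights=weights.values())[0]
    reward = verify(strategies[name](x), 2 * x + 1)
    weights[name] *= 1.1 if reward else 0.9  # reinforce or penalize

best = max(weights, key=weights.get)
print(best)
```

Because the only signal is whether the final answer checks out, the correct strategy's weight grows while the incorrect one's shrinks, so the system converges on a working approach without ever being shown one.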

“This model has been quite influential,” Sun added. “Almost all reinforcement learning work in 2025 may have been inspired by R1 one way or another.”

DeepSeek denies copying claims

Soon after R1’s release, speculation swirled that DeepSeek had leaned on rival outputs, particularly from OpenAI, to accelerate training; however, the company has now flatly denied that charge.

In correspondence with referees, DeepSeek insisted that R1 did not copy reasoning examples generated by OpenAI. However, like most large language models, it was trained on internet text, which inevitably includes some AI-generated content. That explanation has convinced some reviewers.

“I cannot be 100% sure R1 was not trained on OpenAI examples. However, replication attempts by other labs suggest reinforcement learning is good enough on its own,” Tunstall said.

DeepSeek says R1 is built to excel at reasoning-heavy tasks such as coding and mathematics. Unlike most closed systems developed by US firms, it was released as an open-weight model, freely downloadable by researchers. On the AI community site Hugging Face, it has already been downloaded more than 10 million times.

The firm spent around $6 million developing the base model that R1 is built upon, but even with that added, its costs fall well short of the sums associated with rivals. For many in the field, that makes R1 attractive.

Sun and colleagues recently tested the system on scientific data tasks and found it was not the most accurate, but among the best in terms of cost-to-performance.

 
