Tsinghua and Microsoft trained a full AI coding model using only synthetic data, with no real-world inputs at any stage.

Chinese AI model trained entirely on synthetic data runs on Nvidia H20 and H200 chips

Tsinghua University and Microsoft Research Asia trained a full AI model using only fake data. No real-world samples at all.

The entire dataset was artificially generated through a new pipeline called SynthSmith, and the system ran on Nvidia chips from start to finish. The team didn’t just pull off a novelty test. They built a working model with 7 billion parameters that beat much bigger models trained on human data.

Their paper, posted January 11 on arXiv, claims that X-Coder, the model they trained, outperformed coding models with 14 billion parameters, even though it never saw real-world text.

“In-depth analysis reveals that scaling laws hold on our synthetic dataset,” the researchers wrote. The team included researchers from Tsinghua University, Microsoft Research Asia, and Wuhan University.

Researchers use Nvidia chips to skip real-world data entirely

The training setup leaned hard on Nvidia hardware. For supervised fine-tuning, they used 128 Nvidia H20 chips for 220 hours straight. After that, they switched to 32 H200 chips for another seven full days to handle the reinforcement learning phase. These weren’t random choices. The H20 is tuned for inference, and the H200 is built for high-end training. These are the most powerful chips available to Chinese firms right now, thanks to export control exemptions the Trump administration approved after Nvidia lobbied hard to make them available in China.
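For a sense of scale, the quoted figures work out to tens of thousands of accelerator-hours. A quick back-of-the-envelope tally, using only the numbers in the article:

```python
# Chip-hour totals implied by the article's training figures.
sft_chip_hours = 128 * 220    # 128 H20 chips for 220 hours (fine-tuning)
rl_chip_hours = 32 * 7 * 24   # 32 H200 chips for seven days (RL phase)

print(sft_chip_hours)  # 28160
print(rl_chip_hours)   # 5376
```

Roughly 28,000 H20 chip-hours for fine-tuning and another 5,400 H200 chip-hours for reinforcement learning, which helps explain why the team cites compute, not the pipeline, as the bottleneck.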

The researchers said the pipeline itself wasn’t the problem when it came to scaling. It was all about compute power.

Wu Jie, the lead author and a master’s student at Tsinghua, said the real reason they hadn’t taken the pipeline to 100 billion or trillion-parameter models was simply, “computational constraints, rather than limitations of the pipeline itself.”

By releasing the code publicly, they hope others can build on the project without paying massive training costs.

The paper also points to a broader trend in AI: models are now expected to “think” over longer timeframes and handle complex reasoning, which has pushed the need for far more compute during inference, not just training.

Chinese team builds faster chip using old fabrication tech

Separately, a new chip called ACCEL was built by Chinese scientists using light particles, not electricity. The chip (short for All-Analogue Chip Combining Electronics and Light) was tested in a lab and hit 4.6 PFLOPS.

That’s 3,000 times faster than Nvidia’s A100, and the Chinese chip used 4 million times less energy. This makes it one of the most efficient AI chips ever made for specific tasks like image recognition or autonomous driving.

It won’t replace CPUs or smartphone chips yet, but the team thinks it could work in wearables, electric vehicles, or smart factories.

The chip was built by Semiconductor Manufacturing International Corporation (SMIC) using a 20-year-old fabrication process. That sidestepped the need for advanced lithography machines that China still can’t access.

“Deployment of photonic computing systems used to be a challenge due to complicated structural design and vulnerability to noise and system errors,” Tsinghua said in an article.

The chip avoids this by combining photonic and analog electronics in a new framework. It doesn’t handle general computing tasks like file compression, but it’s great for AI vision and low-light sensing.

One crazy detail: the energy it takes to run modern chips for an hour could keep ACCEL running for 500 years. That low power demand also makes it easier to deal with heat issues, which limit how small chips can get.
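That 500-year figure is consistent with the “4 million times less energy” claim above, assuming the comparison holds the workload constant (the article doesn’t spell this out). A quick sanity check:

```python
# Sanity check: if one hour of a conventional chip's energy budget powers
# ACCEL for 500 years, the implied energy ratio is 500 years in hours.
hours_per_year = 365 * 24
ratio = 500 * hours_per_year

print(ratio)  # 4380000, i.e. roughly 4 million
```

So the two numbers in the article line up: 500 years is about 4.4 million hours.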

The chip’s functions include traffic identification, low-light imaging, and real-time vision, using ambient light directly in the sensing process. The team said it’s not a general-purpose chip, but it fills a very specific need.

Funding came from the National Key R&D Programme and the National Natural Science Foundation of China. A Beijing chip company called MakeSens, co-founded by one of the researchers, was involved and recently launched a low-power analog chip too.

Tsinghua’s Dai Qionghai, one of the project leads, said building a new computing architecture was just the first step.

“The more important challenge is to bring this new architecture to practical applications, solving major national and public needs, which is our responsibility.”

The team hasn’t said anything about when this chip might hit the market.


