
America’s Open Source AI Gambit: Two Labs, One Question—Can the US Compete?

Two American AI labs released open-source models this week, each taking dramatically different approaches to the same problem: how to compete with China’s dominance in publicly accessible AI systems.

Deep Cogito dropped Cogito v2.1, a massive 671-billion-parameter model that its founder, Drishan Arora, calls “the best open-weight LLM by a U.S. company.”

Not so fast, countered the Allen Institute for AI, which just dropped Olmo 3, billing it as “the best fully open base model.” Olmo 3 boasts complete transparency, including its training data and code.

Ironically, Deep Cogito’s flagship model is built on a Chinese foundation. Arora acknowledged on X that Cogito v2.1 “forks off the open-licensed Deepseek base model from November 2024.”

That sparked criticism and debate about whether fine-tuning a Chinese model counts as American AI advancement, or whether it just proves how far U.S. labs have fallen behind.

> best open-weight LLM by a US company
>
> this is cool but i’m not sure about emphasizing the “US” part since the base model is deepseek V3 https://t.co/SfD3dR5OOy
>
> — elie (@eliebakouch) November 19, 2025

Regardless, the efficiency gains Cogito shows over DeepSeek are real.

Deep Cogito claims Cogito v2.1 produces 60% shorter reasoning chains than DeepSeek R1 while maintaining competitive performance.

Using what Arora calls “Iterated Distillation and Amplification”—teaching models to develop better intuition through self-improvement loops—the startup trained its model in a mere 75 days on infrastructure from RunPod and Nebius.
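Arora hasn’t published the training code, but IDA-style loops are usually described as alternating two phases: spend extra inference compute to get better answers (amplification), then train the model to produce those answers directly (distillation). Here is a minimal toy sketch of that shape; every name in it is a hypothetical illustration, not Deep Cogito’s actual implementation:

```python
# Toy sketch of an Iterated Distillation and Amplification loop.
# All names here are hypothetical illustrations, not Deep Cogito's code.
from dataclasses import dataclass

@dataclass
class ToyModel:
    skill: float = 0.5  # stand-in for model capability

    def solve_slow(self, problem: str) -> str:
        # Amplification: spend extra inference compute (long reasoning
        # chains, self-critique) to get a better-than-baseline answer.
        return f"careful answer to {problem} (quality={self.skill + 0.1:.2f})"

    def fine_tune(self, examples: list[str]) -> "ToyModel":
        # Distillation: train the model to reproduce the amplified answers
        # directly, internalizing them as faster "intuition".
        return ToyModel(skill=min(1.0, self.skill + 0.1))

def ida(model: ToyModel, problems: list[str], rounds: int = 3) -> ToyModel:
    for _ in range(rounds):
        amplified = [model.solve_slow(p) for p in problems]  # slow, high quality
        model = model.fine_tune(amplified)                   # fast, internalized
    return model

print(ida(ToyModel(), ["2+2", "capital of France"]).skill)
```

Each round bakes the slow, deliberate behavior into the model’s fast path, which is consistent with the shorter reasoning chains Cogito reports.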

If the benchmarks hold up, this would make it the most powerful open-source LLM currently maintained by a U.S. team.

Why it matters

So far, China has been setting the pace in open-source AI, and U.S. companies increasingly rely—quietly or openly—on Chinese base models to stay competitive.

That dynamic is risky. If Chinese labs become the default plumbing for open AI worldwide, U.S. startups lose technical independence, bargaining power, and the ability to shape industry standards.

Open-weight AI determines who controls the raw models that every downstream product depends on.

Right now, Chinese open-source models (DeepSeek, Qwen, Kimi, MiniMax) dominate global adoption because they are cheap, fast, highly efficient, and constantly updated.

Image: Artificialanalysis.ai

Many U.S. startups already build on them, even when they publicly avoid admitting it.

That means U.S. firms are building businesses on top of foreign intellectual property, foreign training pipelines, and foreign hardware optimizations. Strategically, that puts America in the same position it once faced with semiconductor fabrication: increasingly dependent on someone else’s supply chain.

Deep Cogito’s approach—starting from a DeepSeek fork—shows the upside (rapid iteration) and the downside (dependency).

The Allen Institute’s approach—building Olmo 3 with full transparency—shows the alternative: if the U.S. wants open AI leadership, it has to rebuild the stack itself, from data to training recipes to checkpoints. That’s labor-intensive and slow, but it preserves sovereignty over the underlying technology.

In theory, if you already like DeepSeek and use it online, Cogito should give you better answers most of the time. If you use it via API, the gains are doubled: shorter reasoning chains mean you pay for fewer output tokens to get replies of comparable quality.
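As a back-of-the-envelope illustration, shorter reasoning chains translate almost linearly into lower output-token bills. The token counts and price below are made up for illustration, not actual DeepSeek or Cogito pricing:

```python
# Back-of-the-envelope: why 60% shorter reasoning chains cut API costs.
# The token counts and per-token price are hypothetical, not real pricing.
PRICE_PER_M_OUTPUT_TOKENS = 2.00   # assumed $ per 1M output tokens
baseline_reasoning_tokens = 10_000  # assumed chain length for DeepSeek R1
cogito_reasoning_tokens = int(baseline_reasoning_tokens * (1 - 0.60))  # 60% shorter

def cost(tokens: int) -> float:
    return tokens / 1_000_000 * PRICE_PER_M_OUTPUT_TOKENS

print(f"R1:     ${cost(baseline_reasoning_tokens):.4f} per query")
print(f"Cogito: ${cost(cogito_reasoning_tokens):.4f} per query")  # ~60% cheaper
```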

The Allen Institute took the opposite tack. The whole family of Olmo 3 models arrives with Dolma 3, a 5.9-trillion-token training dataset built from scratch, plus complete code, recipes, and checkpoints from every training stage.

The nonprofit released three model variants—Base, Think, and Instruct—with 7 billion and 32 billion parameters.

“True openness in AI isn’t just about access—it’s about trust, accountability, and shared progress,” the institute wrote.

Olmo 3-Think 32B is the first fully open reasoning model at that scale, trained on roughly one-sixth the tokens of comparable models like Qwen 3, while achieving competitive performance.

Image: Ai2

Deep Cogito secured $13 million in seed funding led by Benchmark in August. The startup plans to release frontier models up to 671 billion parameters trained on “significantly more compute with better datasets.”

Meanwhile, Nvidia backed Olmo 3’s development, with vice president Kari Briski calling it essential for “developers to scale AI with open, U.S.-built models.”

The institute trained on Google Cloud’s H100 GPU clusters, using roughly 2.5 times less compute than Meta’s Llama 3.1 8B required.

Cogito v2.1 is available for free online testing here. The model can be downloaded here, but be warned: at 671 billion parameters, it needs far more than a single consumer GPU to run.

Olmo is available for testing here. The models can be downloaded here; these are more consumer-friendly, depending on which variant you choose.
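For a rough sense of what running one of the smaller Olmo variants locally looks like, here is a minimal Hugging Face transformers sketch. The model id is an assumption; check Ai2’s Hugging Face organization (huggingface.co/allenai) for the exact published names:

```python
# Minimal sketch of loading an Olmo model with Hugging Face transformers.
# The model id below is an assumption -- check Ai2's Hugging Face org
# (huggingface.co/allenai) for the exact published name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Olmo-3-7B"  # hypothetical id for the 7B base variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Open-weight models matter because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```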


Source: https://decrypt.co/349466/americas-open-source-ai-gambit-two-labs-one-question-can-the-us-compete

