
What Italy’s DeepSeek decision tells us about why businesses still don’t trust AI

2026/02/15 22:10
6 min read

Italy’s DeepSeek ruling reflects a wider AI pattern 

Italy’s decision to close its antitrust probe into DeepSeek after the company agreed to improve warnings about AI “hallucinations” has been widely framed as a pragmatic regulatory outcome. Since then, DeepSeek has revealed plans to launch a country-specific Italian version of its chatbot, with the major change being more pronounced disclosures about hallucinations. Within 120 days, DeepSeek must also report back to regulators on its technical efforts to reduce hallucination rates. 

On paper, this looks like progress. Italy’s AI law was billed as the first comprehensive framework of its kind, and the intervention shows regulators are serious about enforcing it. Italy’s antitrust authority has stepped in, hallucination disclosure has been agreed, and DeepSeek has committed to technical improvements. 

But the ruling also exposes a deeper, unresolved issue that goes far beyond this one company. While DeepSeek has been asked to show it is trying to reduce hallucination rates, disclosure is being framed as more important than structural change. This signals regulatory comfort with warnings and caveats, even when the underlying accuracy problem remains. Disclosure does not create trust or increase productivity – it merely makes the problem more visible. 

Transparency is becoming a substitute for safety 

Across jurisdictions, regulators are increasingly encouraging generative AI companies to explain hallucination risks to users. It’s easy to see how regulators reach this conclusion. If AI systems can generate false or misleading information, users need to be warned. 

While the intention addresses a real concern, in practice a warning simply shifts responsibility downstream, onto the person using the AI. 

This creates a nonsensical dynamic: AI providers acknowledge their systems can be wrong, regulators accept warnings as mitigation, and consumers and enterprises are left with tools officially labelled as unreliable. Yet the pressure remains to use AI to drive productivity, efficiency, and growth; this is especially problematic in high-stakes, regulated environments. 

Why enterprises still don’t trust AI at scale 

The majority of businesses experimenting with AI are not trying to build chatbots for casual use. They are looking to deploy AI in areas like decision-support, claims-handling, legal analysis, compliance workflows, and customer communications. In these contexts, “this output might be wrong” is not a tolerable risk position. 

Organisations need to be able to answer basic questions about AI behaviour: 

  • Why did the system produce this output? 
  • What data did it rely on? 
  • What rules or constraints were applied, and what happens when it is uncertain? 

Businesses need AI that can show how it’s working, and prove its output is correct. If the only safeguard is a warning banner, their questions remain unaddressed. 
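To make that concrete, here is a minimal sketch of the kind of auditable output record that would let an organisation answer those three questions after the fact. The field names and example values are illustrative assumptions, not a standard schema or any vendor’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative, assumed schema: each AI answer is stored with enough context to
# explain why it was produced, what it relied on, and how uncertainty was handled.
@dataclass
class AuditedOutput:
    question: str
    answer: str
    sources: list[str]            # what data the system relied on
    rules_applied: list[str]      # what rules or constraints were applied
    confidence: float             # how certain the system was
    escalated_to_human: bool      # what happened when it was uncertain
    produced_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditedOutput(
    question="Is this claim covered?",
    answer="Covered under section 4.2, subject to the policy excess.",
    sources=["policy_doc_v3.pdf#section-4.2"],
    rules_applied=["cite_a_source_or_abstain", "no_advice_without_policy_match"],
    confidence=0.93,
    escalated_to_human=False,
)
print(record)
```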

Without these answers, many organisations hit what can be described as an ‘AI trust ceiling’: a point where pilots stall, use cases stop expanding, and return on investment plateaus because outputs can’t be confidently relied on, audited, or defended. 

This is why AI regulation must prioritise accuracy over disclosure. A study by the Massachusetts Institute of Technology (MIT) found that 95% of organisations that have integrated AI into their operations have seen zero return. This means the technology that was supposed to be our economy’s saving grace is potentially stalling productivity rather than aiding it. 

The trust ceiling is not just a regulatory problem 

It’s tempting for AI companies to frame the trust ceiling as a side effect of regulation – something caused by cautious regulators or complex compliance requirements – but that’s not the case. The trust ceiling exists because of how most AI systems are built. 

Mistakes are built into large language models because of the engineering that underpins them. While they’ve improved dramatically over the past year, they are still probabilistic systems, meaning they are always predicting the next word rather than checking whether something is true. They’re optimised to sound convincing, not to guarantee correctness or to explain how an answer was reached. 
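A toy sketch makes the point. Assuming a hand-written probability table in place of a real model, sampling-based decoding simply picks whichever continuation looks most likely; nothing in the loop checks whether the resulting sentence is true.

```python
import random

# Hypothetical next-token distribution a model might assign after the prompt
# "The Eiffel Tower was completed in". Only one continuation is correct, but
# decoding is driven by probability, not truth.
next_word_probs = {
    "1889": 0.60,  # correct
    "1887": 0.25,  # year construction began – plausible but wrong here
    "1879": 0.15,  # wrong
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick a word in proportion to its probability, as sampling-based decoding does."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print("The Eiffel Tower was completed in", sample_next_word(next_word_probs))
```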

Warnings acknowledge this limitation rather than addressing it. They normalise the idea that hallucinations are an unavoidable feature of AI, rather than a design constraint that can be managed. 

That is why transparency alone will not help businesses get their AI chatbots out of the pilot phase and integrated into everyday workflows. It simply makes the limits more explicit, meaning that workers using these tools must check every single output manually. 

DeepSeek’s technical commitments are encouraging – but incomplete 

DeepSeek’s commitment to lowering hallucination rates through technical fixes is a positive step. Acknowledging that hallucinations are a global challenge and investing in mitigation is much better than ignoring the problem. 

However, even the Italian regulator noted that hallucinations “cannot be entirely eliminated.” The statement reads as the end of the conversation, but it needs to be the start of a more nuanced one about how we structurally constrain hallucinations to increase reliability.  

Designing systems that can say when they are uncertain, defer decisions, or be audited after the fact is transformative. This is achievable through reasoning models that combine probabilistic and deterministic approaches, such as neurosymbolic AI. 
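As a rough illustration of that pattern – a probabilistic model proposing, a deterministic layer checking – here is a minimal sketch. The generate() call, the facts table, and the confidence threshold are assumptions for the example, not a description of any particular product or neurosymbolic framework.

```python
from dataclasses import dataclass
from typing import Optional

FACTS = {"policy_excess_gbp": 250}   # authoritative system of record
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Draft:
    answer: str
    cited_key: Optional[str]         # the record the model claims to rely on
    confidence: float

def generate(question: str) -> Draft:
    # Placeholder for a language-model call (assumed, not a real API).
    return Draft(answer="Your excess is £250.", cited_key="policy_excess_gbp", confidence=0.94)

def answer_with_checks(question: str) -> str:
    draft = generate(question)
    if draft.confidence < CONFIDENCE_THRESHOLD:
        return "DEFERRED: routed to a human reviewer because the model is uncertain."
    if draft.cited_key not in FACTS or str(FACTS[draft.cited_key]) not in draft.answer:
        return "BLOCKED: the draft answer does not match the system of record."
    return f"{draft.answer} [source: {draft.cited_key}]"

print(answer_with_checks("What is my policy excess?"))
```

In this sketch the deterministic layer never invents information; it only approves, blocks, or defers, which is what makes the behaviour auditable after the fact.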

Some regulators and AI companies worry this will slow innovation, but in reality it will propel it. Building AI systems that are fit for everyday use beyond demos and low-risk experimentation is the key to unlocking growth. 

Why disclosure-first regulation is limiting AI’s potential 

The DeepSeek case highlights a broader regulatory challenge. Disclosure is one of the few levers regulators can pull quickly, especially when dealing with fast-moving technologies. But disclosure is a blunt instrument.  

It treats all use cases as equal and assumes users can absorb and manage risk themselves. For enterprises operating under regimes like the EU AI Act, the FCA’s Consumer Duty, or sector-specific compliance rules, that assumption breaks down. These organisations cannot simply warn end users and move on. They remain accountable for outcomes, so many will choose not to deploy AI at all. 

Moving beyond the trust ceiling 

If AI is to move from experimentation to infrastructure, the industry needs to shift its focus. Instead of asking whether users have been warned, we should be asking whether systems are designed to be constrained, explainable, and auditable by default. 

That means prioritising architectures that combine probabilistic models with deterministic checks, provenance tracking, and explicit reasoning steps. It means treating explainability as a core requirement, not an add-on. Most importantly, it means recognising that trust is not built through disclaimers, but through systems that can consistently justify their behaviour. 

What the DeepSeek case should really signal 

Italy’s handling of the DeepSeek probe is not a failure of regulation. It is a signal that we are reaching the limits of what transparency-only approaches can achieve. Warnings may reduce legal exposure in the short term, but they do not raise the trust ceiling for businesses trying to deploy AI responsibly. 

If we want AI to deliver on its economic and societal promises, we need to move past the idea that informing users of risk is enough. The next phase of AI adoption will be defined not by who discloses the most, but by who designs systems that can be trusted with no warning required.  
