
Deepfakes Are Entering U.S. Courtrooms—Judges Say They’re ‘Not Ready’

2025/12/09 09:21


A California judge dismissed a housing dispute case in September after discovering that plaintiffs had submitted what appeared to be an AI-generated deepfake of a real witness. The case may be among the first documented instances of fabricated synthetic media being passed off as authentic evidence in an American courtroom — and judges say the legal system is unprepared for what’s coming.

In Mendones v. Cushman & Wakefield, Alameda County Superior Court Judge Victoria Kolakowski noticed something wrong with a video exhibit. The witness’s voice was disjointed and monotone, her face fuzzy and emotionless. Every few seconds, she would twitch and repeat her expressions. The video claimed to feature a real person who had appeared in other, authentic evidence — but Exhibit 6C was a deepfake.

Kolakowski dismissed the case on September 9. The plaintiffs sought reconsideration, arguing that the judge had suspected, but never proved, that the evidence was AI-generated. She denied their request in November.

The incident has alarmed judges who see it as a harbinger.

“I think there are a lot of judges in fear that they’re going to make a decision based on something that’s not real, something AI-generated, and it’s going to have real impacts on someone’s life,” Judge Stoney Hiljus, chair of Minnesota’s Judicial Branch AI Response Committee, told NBC News. Hiljus is currently surveying state judges to understand how often AI-generated evidence is appearing in their courtrooms.

The vulnerability is not hypothetical. Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal, a leading advocate for judicial AI adoption who nonetheless worries about its risks, described the problem in personal terms. His wife could easily clone his voice using free or inexpensive software to fabricate a threatening message, he said. Any judge presented with such a recording would grant a restraining order.

“They will sign every single time,” Schlegel said. “So you lose your cat, dog, guns, house, you lose everything.”

Judge Erica Yew of California’s Santa Clara County Superior Court raised another concern: AI could corrupt traditionally reliable sources of evidence. Someone could generate a false vehicle title record and bring it to a county clerk’s office, she said. The clerk would likely lack the expertise to verify it and would enter it into the official record. A litigant could then obtain a certified copy and present it in court.

“Now do I, as a judge, have to question a source of evidence that has traditionally been reliable?” Yew said. “We’re in a whole new frontier.”

Courts are beginning to respond, but slowly. The U.S. Judicial Conference’s Advisory Committee on Evidence Rules has proposed a new Federal Rule of Evidence 707, which would subject “machine-generated evidence” to the same admissibility standards as expert testimony. Under the proposed rule, AI-generated evidence would need to be based on sufficient facts, produced through reliable methods, and reflect a reliable application of those methods — the same Daubert framework applied to expert witnesses.

The rule is open for public comment through February 2026. But the rulemaking process moves at a pace ill-suited to rapidly evolving technology. According to retired federal Judge Paul Grimm, who helped draft one of the proposed amendments, it takes a minimum of three years for a new federal evidence rule to be adopted.

In the meantime, some states are acting independently. Louisiana’s Act 250, passed earlier this year, requires attorneys to exercise “reasonable diligence” to determine whether evidence they submit has been generated by AI.

“The courts can’t do it all by themselves,” Schlegel said. “When your client walks in the door and hands you 10 photographs, you should ask them questions. Where did you get these photographs? Did you take them on your phone or a camera?”

Detection technology offers limited help. Current tools designed to identify AI-generated content remain unreliable, with false positive rates that vary widely depending on the platform and content type. In the Mendones case, metadata analysis helped expose the fabrication — the video’s embedded data indicated it was captured on an iPhone 6, which lacked capabilities the plaintiffs’ story required. But such forensic tells grow harder to find as generation tools improve.
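The kind of consistency check that caught Exhibit 6C can be sketched in a few lines. The following is a hypothetical illustration, not a real forensic tool: the field names, the device capability table, and the specific numbers are assumptions chosen to mirror the iPhone 6 example, where a file's embedded metadata contradicts what the claimed recording device can actually do.

```python
# Hypothetical sketch of a metadata consistency check on video evidence.
# Field names ("device_model", "video_height") and the capability table
# are illustrative assumptions, not output of any real forensic toolkit.

# Minimal capability table: maximum video capture height (pixels) by device.
# The iPhone 6, for example, tops out at 1080p and cannot record 4K.
DEVICE_MAX_VIDEO_HEIGHT = {
    "iPhone 6": 1080,
    "iPhone 15": 2160,
}

def flag_inconsistencies(metadata: dict) -> list[str]:
    """Return human-readable flags where a file's embedded metadata
    contradicts the capabilities of the device it claims to come from."""
    flags = []
    device = metadata.get("device_model")
    height = metadata.get("video_height")
    if device and height:
        max_height = DEVICE_MAX_VIDEO_HEIGHT.get(device)
        if max_height is not None and height > max_height:
            flags.append(
                f"{device} cannot record {height}p video "
                f"(device maximum is {max_height}p)"
            )
    if "creation_date" not in metadata:
        flags.append("missing creation date, common in re-encoded files")
    return flags

# Example: a file claiming iPhone 6 origin but carrying a 4K video stream.
suspect = {"device_model": "iPhone 6", "video_height": 2160}
for flag in flag_inconsistencies(suspect):
    print(flag)
```

In practice the metadata would come from a forensic extraction tool rather than a hand-built dictionary, and, as the article notes, such tells grow scarcer as generators learn to write plausible metadata of their own.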

A small group of judges is working to raise awareness. The National Center for State Courts and Thomson Reuters Institute have created resources distinguishing “unacknowledged AI evidence” — deepfakes passed off as real — from “acknowledged AI evidence” like AI-generated accident reconstructions that all parties recognize as synthetic.

The Trump administration’s AI Action Plan, released in July, acknowledged the problem, calling for efforts to “combat synthetic media in the court system.”

But for now, the burden falls on judges who may lack the technical training to spot fabrications — and on a legal framework built on assumptions that no longer hold.

“Instead of trust but verify, we should be saying: Don’t trust and verify,” said Maura Grossman, a research professor at the University of Waterloo and practicing lawyer who has studied AI evidence issues.

The question facing courts is whether verification remains possible when the tools to detect fabrication are themselves unreliable, and when the consequences of failure range from fraudulent restraining orders to wrongful convictions.

Source: https://www.forbes.com/sites/larsdaniel/2025/12/08/deepfakes-are-entering-us-courtrooms-judges-say-theyre-not-ready/

