
Deepfakes Are Entering U.S. Courtrooms—Judges Say They’re ‘Not Ready’

2025/12/09 09:21

Image: 3D illustration of deepfake and AI manipulation concepts displayed on mobile devices. (Getty)

A California judge dismissed a housing dispute case in September after discovering that plaintiffs had submitted what appeared to be an AI-generated deepfake of a real witness. The case may be among the first documented instances of fabricated synthetic media being passed off as authentic evidence in an American courtroom — and judges say the legal system is unprepared for what’s coming.

In Mendones v. Cushman & Wakefield, Alameda County Superior Court Judge Victoria Kolakowski noticed something wrong with a video exhibit. The witness’s voice was disjointed and monotone, her face fuzzy and emotionless. Every few seconds, she would twitch and repeat her expressions. The video claimed to feature a real person who had appeared in other, authentic evidence — but Exhibit 6C was a deepfake.

Kolakowski dismissed the case on September 9. The plaintiffs sought reconsideration, arguing that the judge had suspected, but not proven, that the evidence was AI-generated. She denied their request in November.

The incident has alarmed judges who see it as a harbinger.

“I think there are a lot of judges in fear that they’re going to make a decision based on something that’s not real, something AI-generated, and it’s going to have real impacts on someone’s life,” Judge Stoney Hiljus, chair of Minnesota’s Judicial Branch AI Response Committee, told NBC News. Hiljus is currently surveying state judges to understand how often AI-generated evidence is appearing in their courtrooms.

The vulnerability is not hypothetical. Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal, a leading advocate for judicial AI adoption who nonetheless worries about its risks, described the problem in personal terms. His wife could easily clone his voice using free or inexpensive software to fabricate a threatening message, he said. Any judge presented with such a recording would grant a restraining order.

“They will sign every single time,” Schlegel said. “So you lose your cat, dog, guns, house, you lose everything.”

Judge Erica Yew of California’s Santa Clara County Superior Court raised another concern: AI could corrupt traditionally reliable sources of evidence. Someone could generate a false vehicle title record and bring it to a county clerk’s office, she said. The clerk would likely lack the expertise to verify it and would enter it into the official record. A litigant could then obtain a certified copy and present it in court.

“Now do I, as a judge, have to question a source of evidence that has traditionally been reliable?” Yew said. “We’re in a whole new frontier.”

Courts are beginning to respond, but slowly. The U.S. Judicial Conference’s Advisory Committee on Evidence Rules has proposed a new Federal Rule of Evidence 707, which would subject “machine-generated evidence” to the same admissibility standards as expert testimony. Under the proposed rule, AI-generated evidence would need to be based on sufficient facts, produced through reliable methods, and reflect a reliable application of those methods — the same Daubert framework applied to expert witnesses.

The rule is open for public comment through February 2026. But the rulemaking process moves at a pace ill-suited to rapidly evolving technology. According to retired federal Judge Paul Grimm, who helped draft one of the proposed amendments, it takes a minimum of three years for a new federal evidence rule to be adopted.

In the meantime, some states are acting independently. Louisiana’s Act 250, passed earlier this year, requires attorneys to exercise “reasonable diligence” to determine whether evidence they submit has been generated by AI.

“The courts can’t do it all by themselves,” Schlegel said. “When your client walks in the door and hands you 10 photographs, you should ask them questions. Where did you get these photographs? Did you take them on your phone or a camera?”

Detection technology offers limited help. Current tools designed to identify AI-generated content remain unreliable, with false positive rates that vary widely depending on the platform and content type. In the Mendones case, metadata analysis helped expose the fabrication — the video’s embedded data indicated it was captured on an iPhone 6, which lacked capabilities the plaintiffs’ story required. But such forensic tells grow harder to find as generation tools improve.
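The kind of cross-check that exposed Exhibit 6C — comparing a file's claimed capture device against what that device can actually do — can be sketched in a few lines. The following is an illustrative Python sketch, not a real forensic tool: the device capability table and the metadata fields (`device_model`, `resolution`) are hypothetical examples standing in for values a tool like ExifTool would extract.

```python
# Illustrative sketch: flag a video whose technical properties contradict
# the capture device named in its metadata. The spec table below is a
# hypothetical example, not authoritative device data.

DEVICE_SPECS = {
    # maximum video resolution (width, height) each device can record
    "iPhone 6": {"max_resolution": (1920, 1080)},
    "iPhone 15": {"max_resolution": (3840, 2160)},
}

def flag_inconsistencies(metadata):
    """Return a list of reasons the metadata contradicts the claimed device."""
    issues = []
    device = metadata.get("device_model")
    spec = DEVICE_SPECS.get(device)
    if spec is None:
        return [f"unknown device model: {device!r}"]
    width, height = metadata["resolution"]
    max_w, max_h = spec["max_resolution"]
    if width > max_w or height > max_h:
        issues.append(f"{device} cannot record at {width}x{height}")
    return issues

# Example: a clip claiming to come from an iPhone 6 but recorded in 4K
suspect = {"device_model": "iPhone 6", "resolution": (3840, 2160)}
print(flag_inconsistencies(suspect))  # → ['iPhone 6 cannot record at 3840x2160']
```

Checks like this only catch careless fabrications; a sophisticated forger can rewrite metadata to match the story, which is why such tells grow harder to find as tools improve.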

A small group of judges is working to raise awareness. The National Center for State Courts and Thomson Reuters Institute have created resources distinguishing “unacknowledged AI evidence” — deepfakes passed off as real — from “acknowledged AI evidence” like AI-generated accident reconstructions that all parties recognize as synthetic.

The Trump administration’s AI Action Plan, released in July, acknowledged the problem, calling for efforts to “combat synthetic media in the court system.”

But for now, the burden falls on judges who may lack the technical training to spot fabrications — and on a legal framework built on assumptions that no longer hold.

“Instead of trust but verify, we should be saying: Don’t trust and verify,” said Maura Grossman, a research professor at the University of Waterloo and practicing lawyer who has studied AI evidence issues.

The question facing courts is whether verification remains possible when the tools to detect fabrication are themselves unreliable, and when the consequences of failure range from fraudulent restraining orders to wrongful convictions.

Source: https://www.forbes.com/sites/larsdaniel/2025/12/08/deepfakes-are-entering-us-courtrooms-judges-say-theyre-not-ready/

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.
