
ArXiv Blocks AI-Generated Survey Papers After ‘Flood’ of Trashy Submissions

2025/11/04 08:48

In brief

  • ArXiv changed its policy after AI tools made it easy to mass-generate survey papers.
  • Only peer-reviewed review or position papers will now be accepted in the Computer Science category.
  • Researchers are divided, with some warning the rule hurts early-career authors while others call it necessary to stop AI spam.

ArXiv, a free repository founded at Cornell University that has become the go-to hub for thousands of scientists and technologists worldwide to publish early research papers, will no longer accept review articles or position papers in its Computer Science category unless they’ve already passed peer review at a journal or conference.

The policy shift, announced October 31, comes after a “flood” of AI-generated survey papers that moderators describe as “little more than annotated bibliographies.” The repository now receives hundreds of these submissions monthly, up from a small trickle of high-quality reviews historically written by senior researchers.

“In the past few years, arXiv has been flooded with papers,” an official statement on the site explained. “Generative AI/large language models have added to this flood by making papers—especially papers not introducing new research results—fast and easy to write.”

“The Computer Science section of @arxiv is now requiring prior peer review for Literature Surveys and Position Papers. Details in a new blog post,” Thomas G. Dietterich (@tdietterich) posted on X on October 31, 2025.

“We were driven to this decision by a big increase in LLM-assisted survey papers,” added Thomas G. Dietterich, an arXiv moderator and former president of the Association for the Advancement of Artificial Intelligence, on X. “We don’t have the moderator resources to examine these submissions and identify the good surveys from the bad ones.”

Research published in Nature Human Behaviour found that nearly a quarter of all computer science abstracts showed evidence of large language model modification by September 2024. A separate study in Science Advances showed that the use of AI in research papers published in 2024 skyrocketed since the launch of ChatGPT.


ArXiv’s volunteer moderators have always filtered submissions for scholarly value and topical relevance, but they don’t conduct peer review. Review articles and position papers were never officially accepted content types, though moderators made exceptions for work from established researchers or scientific societies. That discretionary system broke under the weight of AI-generated submissions.

The platform now handles a submission volume that’s multiplied several times over in recent years, with generative AI making it trivially easy to produce superficial survey papers.

The response from the research community has been mixed. Stephen Casper, an AI safety researcher, raised concerns that the policy might disproportionately affect early-career researchers and those working on ethics and governance topics.

“Review/position papers are disproportionately written by young people, people without access to lots of compute, and people who are not at institutions that have lots of publishing experience,” he wrote in a critique.

Others simply dismissed arXiv’s stance as wrong (or even dumb), while some suggested using AI itself to detect AI-generated papers.

One problem is that AI detection tools have proven unreliable, with high false-positive rates that can unfairly flag legitimate work. Humans fare little better: a recent study found that researchers failed to identify one-third of ChatGPT-generated medical abstracts as machine-written. The American Association for Cancer Research reported that fewer than 25% of authors disclosed AI use despite mandatory disclosure policies.
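The false-positive problem is largely a matter of base rates. A quick sketch with hypothetical numbers (the detector accuracy figures below are illustrative, not drawn from any real tool or from the studies cited above) shows why a seemingly accurate detector still produces many wrongful flags when most submissions are human-written:

```python
def flag_precision(tpr: float, fpr: float, ai_rate: float) -> float:
    """Fraction of flagged papers that are actually AI-generated.

    tpr     -- true-positive rate: share of AI papers the detector flags
    fpr     -- false-positive rate: share of human papers wrongly flagged
    ai_rate -- base rate: share of all submissions that are AI-generated
    """
    true_flags = tpr * ai_rate            # AI papers correctly flagged
    false_flags = fpr * (1.0 - ai_rate)   # human papers wrongly flagged
    return true_flags / (true_flags + false_flags)


if __name__ == "__main__":
    # Hypothetical detector: catches 95% of AI text, wrongly flags 5% of
    # human text; assume 10% of submissions are AI-generated.
    p = flag_precision(tpr=0.95, fpr=0.05, ai_rate=0.10)
    print(f"Share of flags that are correct: {p:.1%}")
```

Under these illustrative assumptions, roughly one in three flagged papers is a false alarm, which is why moderators cannot simply outsource the triage to a detector.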

The new requirement means authors must submit documentation of successful peer review, including journal references and DOIs. Workshop reviews won’t meet the standard. ArXiv emphasized that the change affects only the Computer Science category for now, though other sections may adopt similar policies if they face comparable surges in AI-generated submissions.

The move reflects a broader reckoning in academic publishing. Major conferences like CVPR 2025 have implemented policies to desk-reject papers from reviewers flagged for irresponsible conduct. Publishers are grappling with papers that contain obvious AI tells, like one that began, “Certainly, here is a possible introduction for your topic.”


Source: https://decrypt.co/347196/arxiv-blocks-ai-generated-survey-papers-flood-trashy-submissions

