OpenAI has collaborated with Paradigm to introduce EVMbench, a new benchmarking framework designed to assess how artificial intelligence agents perform on smart contract security tasks. The initiative measures the ability of AI systems to analyze, modify, and exploit smart contracts within controlled environments, reflecting the growing importance of automated security tooling in decentralized finance.
Open-source smart contracts currently underpin more than $100 billion in digital assets, making their reliability a critical component of the global crypto financial infrastructure. As these contracts increasingly manage high-value transactions, the role of AI in reading, writing, and auditing code has become more significant. EVMbench is intended to evaluate AI performance in economically relevant scenarios while encouraging the defensive application of AI to strengthen deployed contracts against potential vulnerabilities.
The EVMbench framework is built using a dataset that includes 120 carefully selected high-severity vulnerabilities. These weaknesses were drawn from 40 separate security audits and open code competitions, ensuring that the benchmark reflects real-world threat patterns rather than theoretical flaws. In addition, the dataset incorporates specific vulnerability scenarios identified during a security audit of the Tempo blockchain, further grounding the framework in practical security challenges.
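The dataset's schema has not been published; purely as an illustration, a single benchmark case might be represented by a record like the Rust sketch below, where every field name and value is hypothetical:

```rust
/// Hypothetical record for one EVMbench case. Field names are
/// illustrative only; the real dataset schema is not public.
#[derive(Debug)]
enum Severity {
    High,
    Critical,
}

struct VulnerabilityCase {
    source_audit: String,  // audit or competition the flaw was drawn from
    contract_path: String, // vulnerable contract in the pinned repo snapshot
    severity: Severity,    // benchmark entries are all high severity
    ground_truth: String,  // description used to grade Detect-mode findings
}

fn main() {
    // Invented example values, for shape only.
    let case = VulnerabilityCase {
        source_audit: "example-audit-2024".into(),
        contract_path: "src/Vault.sol".into(),
        severity: Severity::High,
        ground_truth: "Reentrancy in withdraw() allows draining funds".into(),
    };
    println!("{} ({:?})", case.contract_path, case.severity);
}
```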
To maintain safety and reproducibility, the system relies on a Rust-based testing harness. This setup restricts unsafe remote procedure call (RPC) methods and executes all exploit-related tasks against a local Anvil instance, the ephemeral Ethereum test node from the Foundry toolchain, rather than on live blockchain networks. By isolating tests from production systems, the framework allows for rigorous experimentation without risking real assets or disrupting network operations.
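The harness internals have not been released, but the isolation pattern it describes is easy to illustrate. The Rust sketch below spawns a throwaway Anvil node and screens RPC calls against an allowlist; the specific method list and function names are assumptions, not published details of the benchmark:

```rust
use std::process::{Child, Command};

/// Hypothetical allowlist; the harness's actual RPC restrictions are
/// not public. State-tampering cheat methods such as anvil_setBalance
/// are deliberately absent.
const ALLOWED_RPC_METHODS: &[&str] = &[
    "eth_call",
    "eth_sendRawTransaction",
    "eth_getBalance",
    "eth_getTransactionReceipt",
];

/// Screen an RPC method name before forwarding it to the local node.
fn is_allowed(method: &str) -> bool {
    ALLOWED_RPC_METHODS.contains(&method)
}

/// Launch a disposable Anvil instance on a local port. The chain
/// exists only for this process; no live network is ever touched.
fn spawn_sandbox(port: u16) -> std::io::Result<Child> {
    Command::new("anvil").arg("--port").arg(port.to_string()).spawn()
}

fn main() -> std::io::Result<()> {
    let mut node = spawn_sandbox(8545)?;
    assert!(is_allowed("eth_call"));
    assert!(!is_allowed("anvil_setBalance")); // unsafe cheat method: rejected
    node.kill()
}
```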
EVMbench evaluates AI agents across three distinct capability modes, each designed to simulate a real-world smart contract security task. The Detect mode assesses whether an agent can audit a smart contract repository and surface the vulnerabilities documented in its source audit. Performance is measured by how many ground-truth vulnerabilities the agent recalls and by the audit rewards it earns.
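OpenAI has not disclosed the exact scoring formula, but recall against a ground-truth list is the standard measure for this kind of task. A minimal sketch, assuming each ground-truth vulnerability is matched by at most one finding:

```rust
/// Recall: the fraction of ground-truth vulnerabilities the agent
/// surfaced. How a finding is matched to a ground-truth entry
/// (location overlap, an LLM judge, etc.) is an assumption here,
/// not a published detail of the benchmark.
fn recall(matched: usize, ground_truth_total: usize) -> f64 {
    if ground_truth_total == 0 {
        return 1.0; // nothing to find
    }
    matched as f64 / ground_truth_total as f64
}

fn main() {
    // An agent that surfaces 9 of 12 seeded vulnerabilities in a
    // repository scores 0.75 recall on that task.
    println!("recall = {:.2}", recall(9, 12));
}
```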
The Patch mode shifts focus to remediation, requiring agents to modify vulnerable contracts to remove exploits while preserving intended functionality. Success is verified through automated testing that confirms the exploit has been eliminated and the code compiles correctly. This mode reflects the practical challenges faced by security engineers who must fix flaws without introducing new issues.
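That verification step maps naturally onto Foundry's command-line tooling. The sketch below is one plausible shape for it, assuming the repository builds with forge and the original exploit is encoded as a named forge test; both the workflow and the test name are assumptions:

```rust
use std::process::Command;

/// A patch passes if (1) the project still compiles and (2) the
/// recorded exploit test no longer succeeds against it. A fuller
/// harness would also re-run the project's functional test suite
/// to confirm intended behavior is preserved.
fn patch_is_valid(repo: &str, exploit_test: &str) -> std::io::Result<bool> {
    // 1. The patched code must still compile.
    let build = Command::new("forge")
        .arg("build")
        .current_dir(repo)
        .status()?;
    if !build.success() {
        return Ok(false);
    }

    // 2. Replay the exploit; if it still passes, the patch failed.
    let exploit = Command::new("forge")
        .args(["test", "--match-test", exploit_test])
        .current_dir(repo)
        .status()?;
    Ok(!exploit.success())
}

fn main() -> std::io::Result<()> {
    // "test_exploit_drain" is a hypothetical test name.
    let ok = patch_is_valid(".", "test_exploit_drain")?;
    println!("patch valid: {ok}");
    Ok(())
}
```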
The Exploit mode evaluates offensive capabilities by testing whether an agent can execute a full fund-draining attack against a deployed contract in a sandboxed blockchain environment. Results are graded programmatically through transaction replay, offering a clear metric of exploit effectiveness that defensive systems must be able to counter.
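In its simplest form, grading by replay comes down to re-executing the agent's transactions on the sandboxed chain and checking whether the target's balance fell while the attacker's rose. A minimal sketch of that final check, with all names and numbers hypothetical:

```rust
/// Wei balances observed on the sandboxed chain before and after the
/// agent's transactions are replayed. Fetching them over RPC (e.g.,
/// eth_getBalance against the local Anvil node) is omitted here.
struct ReplayResult {
    attacker_before: u128,
    attacker_after: u128,
    target_before: u128,
    target_after: u128,
}

/// Hypothetical grading rule: the exploit counts as successful if
/// the target contract was drained and the attacker captured funds.
fn exploit_succeeded(r: &ReplayResult) -> bool {
    r.target_after < r.target_before && r.attacker_after > r.attacker_before
}

fn main() {
    // Invented numbers, for shape only.
    let r = ReplayResult {
        attacker_before: 1_000,
        attacker_after: 499_000,
        target_before: 500_000,
        target_after: 0,
    };
    assert!(exploit_succeeded(&r));
    println!("exploit graded: success");
}
```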
Initial results from EVMbench indicate substantial progress in AI performance on certain cybersecurity tasks. In exploit testing, OpenAI’s GPT-5.3-Codex model achieved a success rate exceeding 70 percent, representing a notable improvement compared with earlier model versions evaluated roughly six months prior. However, the findings also indicate that detection and patching remain more challenging areas.
Agents frequently struggled to preserve contract functionality while resolving subtle vulnerabilities, underscoring the continued importance of human oversight in smart contract auditing. These limitations show that while AI can augment security workflows, it has not yet replaced expert review.
Given the dual-use nature of cybersecurity tools, OpenAI has emphasized a defense-oriented approach. The company has expanded its security research agent, Aardvark, and committed $10 million in API credits through its Cybersecurity Grant Program. These efforts are intended to accelerate defensive research for open-source software and critical infrastructure.
Although EVMbench does not yet support advanced features such as complex timing mechanics or mainnet forks, it represents a meaningful step toward standardizing how AI systems are evaluated in blockchain security contexts. By providing a controlled, reproducible framework, the benchmark offers researchers and developers a clearer view of both the strengths and limitations of AI in securing smart contracts, contributing to a more resilient decentralized ecosystem.