Explore how generative AI is transforming cybersecurity: its dual-use risks, defense tools, and what teams must do to stay ahead.Explore how generative AI is transforming cybersecurity: its dual-use risks, defense tools, and what teams must do to stay ahead.

How Generative AI Can Be Used in Cybersecurity

2025/09/24 14:53
8 min read
For feedback or concerns regarding this content, please contact us at [email protected]

Generative AI has entered cybersecurity with full force, and like every powerful technology, it comes with its pros and cons.

On one side, attackers are already experimenting with AI to generate malware, craft phishing campaigns, and create deepfakes that erode trust. On the other hand, defenders are beginning to use AI to scale penetration testing, accelerate application security, and reduce the pain of compliance.

The stakes are high. A recent ForeScout Vedere Labs 2025 report showed zero-day exploits have risen 46% year over year — a clear signal that attackers are accelerating. At the same time, Gartner predicts that by 2028, 70% of enterprises will adopt AI for security operations.

The reality sits in between: AI is already changing penetration testing, application security, and compliance — but it’s not a replacement for human expertise. Instead, it’s a force multiplier, reshaping how quickly and effectively security teams can discover weaknesses, meet regulatory obligations, and prepare for adversaries that are also harnessing AI.

\

The Dual-Use Nature of Generative AI

Generative AI in cybersecurity is best understood as a dual-use technology — it amplifies both attack and defense capabilities.

GenAI for Attackers

AI lowers barriers by generating sophisticated phishing emails, fake personas, malicious code, and even automated exploit chains. Tools like CAI (Cognitive Autonomous Intelligence) demonstrate how autonomous agents can be tasked with scanning, exploiting, and pivoting through systems — blurring the line between proof-of-concept research and adversary capability. BlackMamba (an AI-generated polymorphic keylogger) and WormGPT (marketed on underground forums as “ChatGPT for cybercrime”) have already shown what’s possible.

GenAI for Defenders

AI provides scale, speed, and intelligence. Beyond SOC copilots, AI is being embedded directly into the software development lifecycle (SDLC) via AI security code reviewers and AI-powered vulnerability scanners. GitHub Copilot (with secure coding checks), CodiumAI, and Snyk Code AI catch issues earlier, reducing downstream remediation costs. Microsoft’s Security Copilot helps analysts triage alerts and accelerate investigations.

This duality is why many experts warn of an “AI arms race” between security teams and cybercriminals — where speed, automation, and adaptability may decide outcomes.

\

Offensive Security & Penetration Testing

Penetration testing has traditionally been time-intensive, relying on skilled specialists to probe for vulnerabilities in networks, applications, and infrastructure. AI is shifting the tempo.

Large language models and autonomous agents can now:

  • Generate custom exploits and payloads on demand.
  • Mimic phishing and social engineering campaigns at scale.
  • Run fuzzing routines to simulate zero-day vulnerabilities before attackers do.

A striking proof point is XBOW, the autonomous AI pentester that recently climbed to #1 on HackerOne’s U.S. leaderboard. In controlled benchmarks, XBOW solved 88 out of 104 challenges in just 28 minutes — a task that took a seasoned human tester over 40 hours. In live programs, it has already submitted over a thousand vulnerability reports, including a zero-day in Palo Alto’s GlobalProtect VPN.

Other examples include:

  • AutoSploit, an early attempt at AI-assisted exploitation pairing Shodan with Metasploit.
  • Bug bounty hunters using LLMs as copilots for reconnaissance and payload generation.
  • MITRE ATLAS, a framework mapping how adversaries might use AI in cyberattacks.

Yet despite its speed and precision, tools like XBOW still require human oversight. Automated results must be validated, prioritized, and — critically — mapped to regulatory and business risk. Without that layer, organizations risk drowning in noise or overlooking vulnerabilities that matter most for compliance and trust.

This is the shape of penetration testing to come: faster, AI-augmented discovery coupled with expert judgment to make results meaningful for businesses under pressure from regulators and partners.

\

How Can Generative AI Be Used in Application Security

Application security (AppSec) is another area seeing rapid AI adoption. The software supply chain has grown too vast and complex for purely manual testing, and generative AI is stepping in as a copilot.

Key applications include:

  • Code analysis and secure SDLC copilots: GitHub Copilot and CodiumAI spot insecure patterns before code reaches production.
  • AI-powered security scanners: Snyk Code AI and ShiftLeft Scan continuously crawl apps and APIs, flagging vulnerabilities in real time.
  • Auto-patching suggestions: GitHub now generates AI-driven pull requests suggesting secure fixes.
  • Testing LLM-based apps: The rise of AI-powered chatbots introduces new risks. Prompt injection attacks are already in the wild. OWASP responded with the first Top 10 for LLM Applications in 2023.
  • API fuzzing and zero-day simulations: Tools like Peach Fuzzer and AI-driven agents autonomously generate malformed inputs at scale.

The promise is efficiency — but the challenge is trust. An AI-generated patch may fix one issue while creating another. That’s why AI is best deployed as an accelerator in AppSec, with humans validating its findings and ensuring fixes align with compliance frameworks like ISO 27001, HIPAA, or FDA MDR/IVDR for medical software.

\

How Can Generative AI Be Used in Compliance & Governance

Beyond pentesting and AppSec, AI is finding a role in the often overlooked world of compliance. For companies in healthtech, biotech, or fintech, compliance can make or break growth — and AI is beginning to reduce the heavy lift.

Emerging applications include:

  • Automating evidence collection for ISO 27001, SOC 2, HIPAA, and GDPR.
  • Mapping vulnerabilities to controls: Linking pentest findings directly to FDA SPDF or ISO clauses.
  • Generating audit-ready reports: Platforms like Vendict, Scrut, and Thoropass use AI to translate security posture into regulator-friendly documentation.

This is particularly powerful in genomics or diagnostics, where startups face heavy regulatory burden and need to show both security and compliance maturity to win partnerships or funding.

\

Industry Examples

The use of AI in cybersecurity isn’t hypothetical — it’s playing out across industries today:

  • IBM, NVIDIA, Accenture: AI copilots for SOC operations and threat detection.
  • Vendict, Scrut, Thoropass: Embedding AI in GRC workflows.
  • Governments and defense sectors: DARPA’s AI Cyber Challenge (AIxCC) uses AI for red-teaming resilience.
  • Adversaries: North Korean APT groups and organized fraud rings are already using AI for smishing, phishing, and deepfake scams.
  • Case study: In 2019, a UK energy firm lost $240,000 after a CEO voice deepfake tricked staff into wiring money.

\

Emerging Risks of Generative AI in Cybersecurity

With opportunity comes risk. AI introduces new attack vectors and amplifies existing ones:

  • AI-powered phishing and social engineering: Deepfake audio scams are growing in sophistication.
  • Prompt injection and model manipulation: OWASP’s LLM Top 10 highlights prompt injection as the #1 risk.
  • Bias and privacy: Training models on sensitive datasets risks compliance violations under GDPR.
  • Over-reliance: Treating AI outputs as gospel risks blind spots and false positives.
  • Hallucinations: Studies show AI copilots fabricate vulnerabilities or fixes.
  • Dependency risk: SaaS outages or API shifts in AI platforms can disrupt pipelines.

\

Best Practice Strategy for Secure AI Adoption

To adopt AI in pentesting, AppSec, or compliance responsibly, organizations should:

  • Keep humans in the loop: Validate AI findings before action.
  • Govern “shadow AI”: Prevent unsanctioned AI tool use (e.g., Samsung’s data leak into ChatGPT).
  • Run continuous simulations: Microsoft’s AI Red Team tests copilots for adversarial risks.
  • Integrate into secure SDLC: Deploy AI reviewers and scanners directly in dev pipelines.
  • Apply governance frameworks: NIST AI Risk Management Framework and ENISA’s AI Security Guidelines help ensure ethical and safe AI use.

\

Conclusion & Outlook

So, how can generative AI be used in cybersecurity? It won’t replace penetration testers, application security engineers, or compliance leads. But it will accelerate their work, expand their coverage, and reshape how vulnerabilities are found and reported.

The winners won’t be those who adopt AI blindly, nor those who ignore it. They’ll be the organizations that harness AI as a trusted copilot — combining speed with human judgment, technical depth with regulatory alignment, and automation with accountability.

By 2030, AI-driven pentesting and compliance automation may become table stakes. The deciding factor will not be whether companies use AI, but how responsibly, strategically, and securely they use it — especially in regulated sectors where compliance and trust are non-negotiable.

\

Further Reading & References

  1. ForeScout Vedere Labs H1 2025 Threat Review

  2. Gartner – The Future of AI in Cybersecurity

  3. CAI – Cognitive Autonomous Intelligence

  4. BlackMamba AI Keylogger

  5. WormGPT Underground Tool

  6. GitHub Copilot

  7. CodiumAI

  8. Snyk Code AI

  9. Microsoft Security Copilot

  10. XBOW Autonomous Pentester

  11. Palo Alto GlobalProtect VPN Vulnerability

  12. AutoSploit

  13. AI in Bug Bounties – PortSwigger

  14. MITRE ATLAS

  15. OWASP Top 10 for LLM Apps

  16. ISO 27001 Standard

  17. HIPAA Security Rule

  18. FDA Medical Device Regulation

  19. FDA SPDF Guidance

  20. Vendict

  21. Scrut

  22. Thoropass

  23. IBM Security AI

  24. NVIDIA AI for Security

  25. Accenture Security

  26. DARPA AIxCC

  27. North Korean APT Attacks – Mandiant

  28. WSJ – Deepfake CEO Fraud Case

  29. FT – Deepfake Audio Scams

  30. GDPR Text

  31. Samsung ChatGPT Data Leak – The Register

  32. Microsoft – AI Red Teaming

  33. NIST AI Risk Management Framework

  34. ENISA AI Security Guidelines

    \

\

Market Opportunity
null Logo
null Price(null)
--
----
USD
null (null) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

Why is the Crypto Market Rising Today? Top Factors Impacting BTC, ETH & XRP Prices

Why is the Crypto Market Rising Today? Top Factors Impacting BTC, ETH & XRP Prices

The post Why is the Crypto Market Rising Today? Top Factors Impacting BTC, ETH & XRP Prices  appeared first on Coinpedia Fintech News Selling pressure across the
Share
CoinPedia2026/03/05 13:30
Google's AP2 protocol has been released. Does encrypted AI still have a chance?

Google's AP2 protocol has been released. Does encrypted AI still have a chance?

Following the MCP and A2A protocols, the AI Agent market has seen another blockbuster arrival: the Agent Payments Protocol (AP2), developed by Google. This will clearly further enhance AI Agents' autonomous multi-tasking capabilities, but the unfortunate reality is that it has little to do with web3AI. Let's take a closer look: What problem does AP2 solve? Simply put, the MCP protocol is like a universal hook, enabling AI agents to connect to various external tools and data sources; A2A is a team collaboration communication protocol that allows multiple AI agents to cooperate with each other to complete complex tasks; AP2 completes the last piece of the puzzle - payment capability. In other words, MCP opens up connectivity, A2A promotes collaboration efficiency, and AP2 achieves value exchange. The arrival of AP2 truly injects "soul" into the autonomous collaboration and task execution of Multi-Agents. Imagine AI Agents connecting Qunar, Meituan, and Didi to complete the booking of flights, hotels, and car rentals, but then getting stuck at the point of "self-payment." What's the point of all that multitasking? So, remember this: AP2 is an extension of MCP+A2A, solving the last mile problem of AI Agent automated execution. What are the technical highlights of AP2? The core innovation of AP2 is the Mandates mechanism, which is divided into real-time authorization mode and delegated authorization mode. Real-time authorization is easy to understand. The AI Agent finds the product and shows it to you. The operation can only be performed after the user signs. Delegated authorization requires the user to set rules in advance, such as only buying the iPhone 17 when the price drops to 5,000. The AI Agent monitors the trigger conditions and executes automatically. The implementation logic is cryptographically signed using Verifiable Credentials (VCs). Users can set complex commission conditions, including price ranges, time limits, and payment method priorities, forming a tamper-proof digital contract. Once signed, the AI Agent executes according to the conditions, with VCs ensuring auditability and security at every step. Of particular note is the "A2A x402" extension, a technical component developed by Google specifically for crypto payments, developed in collaboration with Coinbase and the Ethereum Foundation. This extension enables AI Agents to seamlessly process stablecoins, ETH, and other blockchain assets, supporting native payment scenarios within the Web3 ecosystem. What kind of imagination space can AP2 bring? After analyzing the technical principles, do you think that's it? Yes, in fact, the AP2 is boring when it is disassembled alone. Its real charm lies in connecting and opening up the "MCP+A2A+AP2" technology stack, completely opening up the complete link of AI Agent's autonomous analysis+execution+payment. From now on, AI Agents can open up many application scenarios. For example, AI Agents for stock investment and financial management can help us monitor the market 24/7 and conduct independent transactions. Enterprise procurement AI Agents can automatically replenish and renew without human intervention. AP2's complementary payment capabilities will further expand the penetration of the Agent-to-Agent economy into more scenarios. Google obviously understands that after the technical framework is established, the ecological implementation must be relied upon, so it has brought in more than 60 partners to develop it, almost covering the entire payment and business ecosystem. Interestingly, it also involves major Crypto players such as Ethereum, Coinbase, MetaMask, and Sui. Combined with the current trend of currency and stock integration, the imagination space has been doubled. Is web3 AI really dead? Not entirely. Google's AP2 looks complete, but it only achieves technical compatibility with Crypto payments. It can only be regarded as an extension of the traditional authorization framework and belongs to the category of automated execution. There is a "paradigm" difference between it and the autonomous asset management pursued by pure Crypto native solutions. The Crypto-native solutions under exploration are taking the "decentralized custody + on-chain verification" route, including AI Agent autonomous asset management, AI Agent autonomous transactions (DeFAI), AI Agent digital identity and on-chain reputation system (ERC-8004...), AI Agent on-chain governance DAO framework, AI Agent NPC and digital avatars, and many other interesting and fun directions. Ultimately, once users get used to AI Agent payments in traditional fields, their acceptance of AI Agents autonomously owning digital assets will also increase. And for those scenarios that AP2 cannot reach, such as anonymous transactions, censorship-resistant payments, and decentralized asset management, there will always be a time for crypto-native solutions to show their strength? The two are more likely to be complementary rather than competitive, but to be honest, the key technological advancements behind AI Agents currently all come from web2AI, and web3AI still needs to keep up the good work!
Share
PANews2025/09/18 07:00
Xhavic Showcases Layer-2 Vision at Dubai Web3 Event

Xhavic Showcases Layer-2 Vision at Dubai Web3 Event

Xhavic Blockchain positioned itself at the center of global Web3 discussions during a major pre-launch event held in Dubai. The gathering also featured the soft
Share
CoinTrust2026/03/05 13:33