Explore how generative AI is transforming cybersecurity: its dual-use risks, defense tools, and what teams must do to stay ahead.Explore how generative AI is transforming cybersecurity: its dual-use risks, defense tools, and what teams must do to stay ahead.

How Generative AI Can Be Used in Cybersecurity

2025/09/24 14:53
8 min read
For feedback or concerns regarding this content, please contact us at [email protected]

Generative AI has entered cybersecurity with full force, and like every powerful technology, it comes with its pros and cons.

On one side, attackers are already experimenting with AI to generate malware, craft phishing campaigns, and create deepfakes that erode trust. On the other hand, defenders are beginning to use AI to scale penetration testing, accelerate application security, and reduce the pain of compliance.

The stakes are high. A recent ForeScout Vedere Labs 2025 report showed zero-day exploits have risen 46% year over year — a clear signal that attackers are accelerating. At the same time, Gartner predicts that by 2028, 70% of enterprises will adopt AI for security operations.

The reality sits in between: AI is already changing penetration testing, application security, and compliance — but it’s not a replacement for human expertise. Instead, it’s a force multiplier, reshaping how quickly and effectively security teams can discover weaknesses, meet regulatory obligations, and prepare for adversaries that are also harnessing AI.

\

The Dual-Use Nature of Generative AI

Generative AI in cybersecurity is best understood as a dual-use technology — it amplifies both attack and defense capabilities.

GenAI for Attackers

AI lowers barriers by generating sophisticated phishing emails, fake personas, malicious code, and even automated exploit chains. Tools like CAI (Cognitive Autonomous Intelligence) demonstrate how autonomous agents can be tasked with scanning, exploiting, and pivoting through systems — blurring the line between proof-of-concept research and adversary capability. BlackMamba (an AI-generated polymorphic keylogger) and WormGPT (marketed on underground forums as “ChatGPT for cybercrime”) have already shown what’s possible.

GenAI for Defenders

AI provides scale, speed, and intelligence. Beyond SOC copilots, AI is being embedded directly into the software development lifecycle (SDLC) via AI security code reviewers and AI-powered vulnerability scanners. GitHub Copilot (with secure coding checks), CodiumAI, and Snyk Code AI catch issues earlier, reducing downstream remediation costs. Microsoft’s Security Copilot helps analysts triage alerts and accelerate investigations.

This duality is why many experts warn of an “AI arms race” between security teams and cybercriminals — where speed, automation, and adaptability may decide outcomes.

\

Offensive Security & Penetration Testing

Penetration testing has traditionally been time-intensive, relying on skilled specialists to probe for vulnerabilities in networks, applications, and infrastructure. AI is shifting the tempo.

Large language models and autonomous agents can now:

  • Generate custom exploits and payloads on demand.
  • Mimic phishing and social engineering campaigns at scale.
  • Run fuzzing routines to simulate zero-day vulnerabilities before attackers do.

A striking proof point is XBOW, the autonomous AI pentester that recently climbed to #1 on HackerOne’s U.S. leaderboard. In controlled benchmarks, XBOW solved 88 out of 104 challenges in just 28 minutes — a task that took a seasoned human tester over 40 hours. In live programs, it has already submitted over a thousand vulnerability reports, including a zero-day in Palo Alto’s GlobalProtect VPN.

Other examples include:

  • AutoSploit, an early attempt at AI-assisted exploitation pairing Shodan with Metasploit.
  • Bug bounty hunters using LLMs as copilots for reconnaissance and payload generation.
  • MITRE ATLAS, a framework mapping how adversaries might use AI in cyberattacks.

Yet despite its speed and precision, tools like XBOW still require human oversight. Automated results must be validated, prioritized, and — critically — mapped to regulatory and business risk. Without that layer, organizations risk drowning in noise or overlooking vulnerabilities that matter most for compliance and trust.

This is the shape of penetration testing to come: faster, AI-augmented discovery coupled with expert judgment to make results meaningful for businesses under pressure from regulators and partners.

\

How Can Generative AI Be Used in Application Security

Application security (AppSec) is another area seeing rapid AI adoption. The software supply chain has grown too vast and complex for purely manual testing, and generative AI is stepping in as a copilot.

Key applications include:

  • Code analysis and secure SDLC copilots: GitHub Copilot and CodiumAI spot insecure patterns before code reaches production.
  • AI-powered security scanners: Snyk Code AI and ShiftLeft Scan continuously crawl apps and APIs, flagging vulnerabilities in real time.
  • Auto-patching suggestions: GitHub now generates AI-driven pull requests suggesting secure fixes.
  • Testing LLM-based apps: The rise of AI-powered chatbots introduces new risks. Prompt injection attacks are already in the wild. OWASP responded with the first Top 10 for LLM Applications in 2023.
  • API fuzzing and zero-day simulations: Tools like Peach Fuzzer and AI-driven agents autonomously generate malformed inputs at scale.

The promise is efficiency — but the challenge is trust. An AI-generated patch may fix one issue while creating another. That’s why AI is best deployed as an accelerator in AppSec, with humans validating its findings and ensuring fixes align with compliance frameworks like ISO 27001, HIPAA, or FDA MDR/IVDR for medical software.

\

How Can Generative AI Be Used in Compliance & Governance

Beyond pentesting and AppSec, AI is finding a role in the often overlooked world of compliance. For companies in healthtech, biotech, or fintech, compliance can make or break growth — and AI is beginning to reduce the heavy lift.

Emerging applications include:

  • Automating evidence collection for ISO 27001, SOC 2, HIPAA, and GDPR.
  • Mapping vulnerabilities to controls: Linking pentest findings directly to FDA SPDF or ISO clauses.
  • Generating audit-ready reports: Platforms like Vendict, Scrut, and Thoropass use AI to translate security posture into regulator-friendly documentation.

This is particularly powerful in genomics or diagnostics, where startups face heavy regulatory burden and need to show both security and compliance maturity to win partnerships or funding.

\

Industry Examples

The use of AI in cybersecurity isn’t hypothetical — it’s playing out across industries today:

  • IBM, NVIDIA, Accenture: AI copilots for SOC operations and threat detection.
  • Vendict, Scrut, Thoropass: Embedding AI in GRC workflows.
  • Governments and defense sectors: DARPA’s AI Cyber Challenge (AIxCC) uses AI for red-teaming resilience.
  • Adversaries: North Korean APT groups and organized fraud rings are already using AI for smishing, phishing, and deepfake scams.
  • Case study: In 2019, a UK energy firm lost $240,000 after a CEO voice deepfake tricked staff into wiring money.

\

Emerging Risks of Generative AI in Cybersecurity

With opportunity comes risk. AI introduces new attack vectors and amplifies existing ones:

  • AI-powered phishing and social engineering: Deepfake audio scams are growing in sophistication.
  • Prompt injection and model manipulation: OWASP’s LLM Top 10 highlights prompt injection as the #1 risk.
  • Bias and privacy: Training models on sensitive datasets risks compliance violations under GDPR.
  • Over-reliance: Treating AI outputs as gospel risks blind spots and false positives.
  • Hallucinations: Studies show AI copilots fabricate vulnerabilities or fixes.
  • Dependency risk: SaaS outages or API shifts in AI platforms can disrupt pipelines.

\

Best Practice Strategy for Secure AI Adoption

To adopt AI in pentesting, AppSec, or compliance responsibly, organizations should:

  • Keep humans in the loop: Validate AI findings before action.
  • Govern “shadow AI”: Prevent unsanctioned AI tool use (e.g., Samsung’s data leak into ChatGPT).
  • Run continuous simulations: Microsoft’s AI Red Team tests copilots for adversarial risks.
  • Integrate into secure SDLC: Deploy AI reviewers and scanners directly in dev pipelines.
  • Apply governance frameworks: NIST AI Risk Management Framework and ENISA’s AI Security Guidelines help ensure ethical and safe AI use.

\

Conclusion & Outlook

So, how can generative AI be used in cybersecurity? It won’t replace penetration testers, application security engineers, or compliance leads. But it will accelerate their work, expand their coverage, and reshape how vulnerabilities are found and reported.

The winners won’t be those who adopt AI blindly, nor those who ignore it. They’ll be the organizations that harness AI as a trusted copilot — combining speed with human judgment, technical depth with regulatory alignment, and automation with accountability.

By 2030, AI-driven pentesting and compliance automation may become table stakes. The deciding factor will not be whether companies use AI, but how responsibly, strategically, and securely they use it — especially in regulated sectors where compliance and trust are non-negotiable.

\

Further Reading & References

  1. ForeScout Vedere Labs H1 2025 Threat Review

  2. Gartner – The Future of AI in Cybersecurity

  3. CAI – Cognitive Autonomous Intelligence

  4. BlackMamba AI Keylogger

  5. WormGPT Underground Tool

  6. GitHub Copilot

  7. CodiumAI

  8. Snyk Code AI

  9. Microsoft Security Copilot

  10. XBOW Autonomous Pentester

  11. Palo Alto GlobalProtect VPN Vulnerability

  12. AutoSploit

  13. AI in Bug Bounties – PortSwigger

  14. MITRE ATLAS

  15. OWASP Top 10 for LLM Apps

  16. ISO 27001 Standard

  17. HIPAA Security Rule

  18. FDA Medical Device Regulation

  19. FDA SPDF Guidance

  20. Vendict

  21. Scrut

  22. Thoropass

  23. IBM Security AI

  24. NVIDIA AI for Security

  25. Accenture Security

  26. DARPA AIxCC

  27. North Korean APT Attacks – Mandiant

  28. WSJ – Deepfake CEO Fraud Case

  29. FT – Deepfake Audio Scams

  30. GDPR Text

  31. Samsung ChatGPT Data Leak – The Register

  32. Microsoft – AI Red Teaming

  33. NIST AI Risk Management Framework

  34. ENISA AI Security Guidelines

    \

\

Market Opportunity
null Logo
null Price(null)
--
----
USD
null (null) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

CEO Sandeep Nailwal Shared Highlights About RWA on Polygon

CEO Sandeep Nailwal Shared Highlights About RWA on Polygon

The post CEO Sandeep Nailwal Shared Highlights About RWA on Polygon appeared on BitcoinEthereumNews.com. Polygon CEO Sandeep Nailwal highlighted Polygon’s lead in global bonds, Spiko US T-Bill, and Spiko Euro T-Bill. Polygon published an X post to share that its roadmap to GigaGas was still scaling. Sentiments around POL price were last seen to be bearish. Polygon CEO Sandeep Nailwal shared key pointers from the Dune and RWA.xyz report. These pertain to highlights about RWA on Polygon. Simultaneously, Polygon underlined its roadmap towards GigaGas. Sentiments around POL price were last seen fumbling under bearish emotions. Polygon CEO Sandeep Nailwal on Polygon RWA CEO Sandeep Nailwal highlighted three key points from the Dune and RWA.xyz report. The Chief Executive of Polygon maintained that Polygon PoS was hosting RWA TVL worth $1.13 billion across 269 assets plus 2,900 holders. Nailwal confirmed from the report that RWA was happening on Polygon. The Dune and https://t.co/W6WSFlHoQF report on RWA is out and it shows that RWA is happening on Polygon. Here are a few highlights: – Leading in Global Bonds: Polygon holds 62% share of tokenized global bonds (driven by Spiko’s euro MMF and Cashlink euro issues) – Spiko U.S.… — Sandeep | CEO, Polygon Foundation (※,※) (@sandeepnailwal) September 17, 2025 The X post published by Polygon CEO Sandeep Nailwal underlined that the ecosystem was leading in global bonds by holding a 62% share of tokenized global bonds. He further highlighted that Polygon was leading with Spiko US T-Bill at approximately 29% share of TVL along with Ethereum, adding that the ecosystem had more than 50% share in the number of holders. Finally, Sandeep highlighted from the report that there was a strong adoption for Spiko Euro T-Bill with 38% share of TVL. He added that 68% of returns were on Polygon across all the chains. Polygon Roadmap to GigaGas In a different update from Polygon, the community…
Share
BitcoinEthereumNews2025/09/18 01:10
BlackRock Increases U.S. Stock Exposure Amid AI Surge

BlackRock Increases U.S. Stock Exposure Amid AI Surge

The post BlackRock Increases U.S. Stock Exposure Amid AI Surge appeared on BitcoinEthereumNews.com. Key Points: BlackRock significantly increased U.S. stock exposure. AI sector driven gains boost S&P 500 to historic highs. Shift may set a precedent for other major asset managers. BlackRock, the largest asset manager, significantly increased U.S. stock and AI sector exposure, adjusting its $185 billion investment portfolios, according to a recent investment outlook report.. This strategic shift signals strong confidence in U.S. market growth, driven by AI and anticipated Federal Reserve moves, influencing significant fund flows into BlackRock’s ETFs. The reallocation increases U.S. stocks by 2% while reducing holdings in international developed markets. BlackRock’s move reflects confidence in the U.S. stock market’s trajectory, driven by robust earnings and the anticipation of Federal Reserve rate cuts. As a result, billions of dollars have flowed into BlackRock’s ETFs following the portfolio adjustment. “Our increased allocation to U.S. stocks, particularly in the AI sector, is a testament to our confidence in the growth potential of these technologies.” — Larry Fink, CEO, BlackRock The financial markets have responded favorably to this adjustment. The S&P 500 Index recently reached a historic high this year, supported by AI-driven investment enthusiasm. BlackRock’s decision aligns with widespread market speculation on the Federal Reserve’s next moves, further amplifying investor interest and confidence. AI Surge Propels S&P 500 to Historic Highs At no other time in history has the S&P 500 seen such dramatic gains driven by a single sector as the recent surge spurred by AI investments in 2023. Experts suggest that the strategic increase in U.S. stock exposure by BlackRock may set a precedent for other major asset managers. Historically, shifts of this magnitude have influenced broader market behaviors as others follow suit. Market analysts point to the favorable economic environment and technological advancements that are propelling the AI sector’s momentum. The continued growth of AI technologies is…
Share
BitcoinEthereumNews2025/09/18 02:49
Israel is losing close to $3 billion a week since fighting broke out with Iran, and markets are barely flinching

Israel is losing close to $3 billion a week since fighting broke out with Iran, and markets are barely flinching

Israel is losing close to $3 billion a week since fighting broke out with Iran, and markets are barely flinching. That figure comes from Israel’s Finance Ministry
Share
Cryptopolitan2026/03/05 05:20