AI in AppSec · Cross-industry · Published February 5, 2026

The State of AI in AppSec 2026: Proven, Promising, and Emerging

Executive summary

What this whitepaper covers

AI is now part of every AppSec conversation, but adoption is uneven. Between 2023 and 2024 most large organizations ran pilots and proofs of concept; a few succeeded, most stalled. Teams that built narrow, well-defined use cases have measurable results; teams that tried to replace entire workflows with a model have struggled. As LLM costs drop, context windows grow, and reinforcement learning starts to deliver, 2026 is the year AppSec programs move from experimentation to operationalization. This whitepaper sorts the landscape into three buckets (proven, promising, and emerging) and gives practical guidance on where AI delivers real value across security design reviews, triage, pen testing, and static analysis.

Key findings

What you'll take away

  • Proven today: AI-powered security design reviews (one finserv client moved from 10% to 80% feature coverage), LLM triage for SAST and SCA findings, and AI productivity tooling (ChatGPT, Cursor, Claude Code) for engineer workflows
  • Fastly's 2025 survey: 77% of organizations actively use AI in AppSec workflows and 81% plan to expand, but a third act on AI-identified issues without human review, so oversight is the gap
  • Promising but still emerging: LLM penetration testing (reproducibility and business-logic gaps) and LLM-powered static analysis (explainability, consistency, and compute cost remain open problems)
  • Three kinds of vendors to watch in LLM static analysis: code-generation platforms adding security (Claude Code), SAST incumbents layering AI (Snyk, Semgrep), and LLM-native startups rethinking the category (Dryrun, Corgea)
  • The three failure modes to avoid: removing humans from the loop, letting invisible AI costs balloon on every PR, and overpromising AI as a fix for broken processes instead of an accelerator inside a healthy program

Download

Get the full whitepaper

Enter your details and we'll email you the PDF right away.

FAQ

Frequently asked questions

What should my AppSec team adopt first?
Start with the three proven categories: AI-powered security design reviews, LLM triage for SAST and SCA findings, and AI productivity tooling (ChatGPT, Cursor, Claude Code) for engineer workflows. All three reduce manual load without asking you to replace human judgment.
Is AI-powered static analysis production-ready?
Not yet. Established SAST vendors like Snyk and Semgrep are layering AI onto rule-based scanning, and LLM-native startups like Dryrun and Corgea are rethinking the category from scratch. Expect major movement in 2026, but explainability, consistency across runs, and compute cost remain unsolved. Treat it as a pilot to augment human review, not a scanner replacement.
How should I deploy AI tools around sensitive code?
Run the model where your data already lives. If code and architecture diagrams are inside your network, the LLM should be too (for example, Cursor with Claude via AWS Bedrock on your own deployment and keys). Treat outputs as drafts or suggestions; engineers still own accuracy, completeness, and the final risk decision.
How do I keep AI costs from ballooning?
Monitor token usage per request and per tool integration. Running a large model on every pull request or pipeline step can quietly add up to thousands of dollars a month if left unchecked. AppSec leaders are accountable for predictable spend, not just risk reduction, so instrument AI usage the same way you instrument security signal.
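As a back-of-envelope illustration of how per-PR token usage compounds into monthly spend, the sketch below projects costs from a few assumed inputs. The per-token prices, PR volume, and token counts are hypothetical placeholders, not figures from the whitepaper; substitute your own vendor's published rates and your real pipeline metrics.

```python
# Assumed large-model rates in USD per 1,000 tokens (hypothetical; check
# your vendor's current pricing page before relying on these numbers).
PRICE_PER_1K_INPUT = 0.003
PRICE_PER_1K_OUTPUT = 0.015

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single model call, given token counts for each direction."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

def monthly_projection(prs_per_day: int, avg_input: int, avg_output: int,
                       workdays: int = 22) -> float:
    """Project monthly spend for one AI integration that runs on every PR."""
    return prs_per_day * workdays * request_cost(avg_input, avg_output)

# Hypothetical pipeline: 300 PRs/day, each review sending ~80k tokens of
# diff + context and getting back ~4k tokens of findings.
per_pr = request_cost(80_000, 4_000)        # ≈ $0.30 per PR
per_month = monthly_projection(300, 80_000, 4_000)  # ≈ $1,980 per month
```

Even at modest per-request cost, a single always-on integration lands in the low thousands per month under these assumptions; several integrations multiply that, which is why per-integration instrumentation matters.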