Threat Modeling
10 resources
Surveys & Meta
AI-specific threat models, attack taxonomies, and kill chains
OWASP Top 10 for Large Language Model Applications
Steve Wilson, OWASP LLM AI Security Team — OWASP Foundation
The definitive OWASP guide identifying the top 10 most critical security risks in LLM applications, with descriptions, examples, and mitigation strategies.
OWASP Top 10 for Agentic AI Applications
OWASP Foundation — OWASP Foundation
Identifies the top 10 security risks specific to agentic AI applications including excessive agency, unsafe tool execution, and inadequate oversight.
A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models
Aysan Esmradi, Daniel Wankit Yip, Chun Fai Chan — arXiv preprint
Surveys attack techniques across the LLM lifecycle (training, fine-tuning, and inference) and pairs each attack class with mitigation strategies.
MITRE ATLAS: Adversarial Threat Landscape for AI Systems
MITRE Corporation — MITRE
Knowledge base of adversarial tactics, techniques, and case studies for AI systems, modeled on the ATT&CK framework for cybersecurity.
The AI Security Pyramid of Pain
Daniel Miessler — Blog / Industry
Adapts David Bianco's Pyramid of Pain framework to AI security, ranking AI threat indicators by how much pain adversaries suffer when defenders detect and deny them.
Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2e2025)
Apostol Vassilev, Alina Oprea, Alie Fordyce + 1 more — NIST
NIST's authoritative taxonomy of adversarial ML attacks and mitigations covering evasion, poisoning, privacy, and abuse attacks against AI systems.
AI Security: A Comprehensive Guide to Threats, Defenses, and Best Practices
Gary McGraw, Harold Figueroa, Victor Shepardson + 1 more — Berryville Institute of Machine Learning
Practitioner guide to AI/ML security from an architectural risk analysis perspective, with practical defense patterns.
OWASP Threat Dragon: AI-Aware Threat Modeling Tool
OWASP Foundation — OWASP / GitHub
Open-source threat modeling tool supporting AI/ML system threat models, data flow diagrams, and STRIDE methodology for GenAI applications.
Do Anything Now: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models
Xinyue Shen, Zeyuan Chen, Michael Backes + 2 more — CCS 2024
Collects and analyzes 6,387 jailbreak prompts from the wild, builds a taxonomy of jailbreak techniques, and evaluates their effectiveness.
Identifying and Mitigating the Security Risks of Generative AI
Clark Barrett, Brad Boyd, Elie Bursztein + 20 more — Foundations and Trends in Privacy and Security
Comprehensive treatment of generative AI security risks across the ML lifecycle with a focus on practical mitigations and deployment considerations.