Risk Frameworks
Governance & Compliance (16 resources)
NIST AI RMF, ISO 42001, EU AI Act, and regulatory frameworks
OWASP Top 10 for Large Language Model Applications
Steve Wilson, OWASP LLM AI Security Team — OWASP Foundation
The definitive OWASP guide identifying the ten most critical security risks in LLM applications, with descriptions, examples, and mitigation strategies.
OWASP Top 10 for Agentic AI Applications
OWASP Foundation — OWASP Foundation
Identifies the top 10 security risks specific to agentic AI applications including excessive agency, unsafe tool execution, and inadequate oversight.
MITRE ATLAS: Adversarial Threat Landscape for AI Systems
MITRE Corporation — MITRE
Knowledge base of adversarial tactics, techniques, and case studies for AI systems, modeled on the ATT&CK framework for cybersecurity.
OWASP AI Security and Privacy Guide
Rob van der Veer, OWASP AI Exchange Team — OWASP Foundation
Comprehensive guide to AI security and privacy, including threat analysis, security controls, and regulatory mapping for AI systems.
OWASP LLM AI Security & Governance Checklist
OWASP Foundation, Sandy Dunn, Jackie McGuire — OWASP Foundation
Practical checklist for organizations deploying LLMs covering security, governance, legal, and regulatory considerations with actionable steps.
Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2e2025)
Apostol Vassilev, Alina Oprea, Alie Fordyce + 1 more — NIST
NIST's authoritative taxonomy of adversarial ML attacks and mitigations covering evasion, poisoning, privacy, and abuse attacks against AI systems.
AI Security: A Comprehensive Guide to Threats, Defenses, and Best Practices
Gary McGraw, Harold Figueroa, Victor Shepardson + 1 more — Berryville Institute of Machine Learning
Comprehensive practitioner guide covering AI/ML security from an architectural risk analysis perspective, with practical defense patterns.
Anthropic's Responsible Scaling Policy
Anthropic — Anthropic Blog
Framework defining AI Safety Levels (ASL) for evaluating and managing risks from increasingly capable AI systems.
EU AI Act: Regulation on Artificial Intelligence
European Parliament — Official Journal of the European Union
The EU's comprehensive AI regulation establishing risk-based categories, conformity assessments, and requirements for high-risk AI systems.
OWASP Threat Dragon: AI-Aware Threat Modeling Tool
OWASP Foundation — OWASP / GitHub
Open-source threat modeling tool supporting AI/ML system threat models, data flow diagrams, and STRIDE methodology for GenAI applications.
CISA: Roadmap for Artificial Intelligence
Cybersecurity and Infrastructure Security Agency — CISA
CISA's strategic roadmap for AI covering responsible use, assuring AI systems, securing AI adoption, and collaborating on AI governance.
Identifying and Mitigating the Security Risks of Generative AI
Clark Barrett, Brad Boyd, Elie Bursztein + 20 more — Foundations and Trends in Privacy and Security
Comprehensive treatment of generative AI security risks across the ML lifecycle with a focus on practical mitigations and deployment considerations.
NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)
National Institute of Standards and Technology — NIST
Voluntary framework for managing risks in AI systems across the lifecycle, organized into Govern, Map, Measure, and Manage functions.
ISO/IEC 42001:2023 - Artificial Intelligence Management System
International Organization for Standardization — ISO
International standard specifying requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations.
OpenAI: Preparedness Framework (Beta)
OpenAI — OpenAI Blog
OpenAI's approach to tracking, evaluating, forecasting, and protecting against catastrophic risks from frontier AI models.
Google: Secure AI Framework (SAIF)
Google — Google Security Blog
Google's conceptual framework for secure AI systems with six core elements covering security foundations, detection, automation, and contextualization.