Risk Frameworks

16 resources

Governance & Compliance

NIST AI RMF, ISO 42001, EU AI Act, and regulatory frameworks

standard reviewed open access 2025

OWASP Top 10 for Large Language Model Applications

Steve Wilson, OWASP LLM AI Security Team — OWASP Foundation

The definitive OWASP guide to the ten most critical security risks in LLM applications, with descriptions, examples, and mitigation strategies for each.

standard reviewed open access 2025

OWASP Top 10 for Agentic AI Applications

OWASP Foundation — OWASP Foundation

Identifies the top 10 security risks specific to agentic AI applications, including excessive agency, unsafe tool execution, and inadequate oversight.

standard reviewed open access 2024

MITRE ATLAS: Adversarial Threat Landscape for AI Systems

MITRE Corporation — MITRE

Knowledge base of adversarial tactics, techniques, and case studies for AI systems, modeled on the ATT&CK framework for cybersecurity.

standard reviewed open access 2024

OWASP AI Security and Privacy Guide

Rob van der Veer, OWASP AI Exchange Team — OWASP Foundation

Comprehensive guide to AI security and privacy, covering threat analysis, controls, and regulatory mapping for AI systems.

paper reviewed open access 2024

OWASP LLM AI Security & Governance Checklist

OWASP Foundation, Sandy Dunn, Jackie McGuire — OWASP Foundation

Practical checklist for organizations deploying LLMs, covering security, governance, legal, and regulatory considerations with actionable steps.

paper reviewed open access 2024

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2e2025)

Apostol Vassilev, Alina Oprea, Alie Fordyce + 1 more — NIST

NIST's authoritative taxonomy of adversarial ML attacks and mitigations, covering evasion, poisoning, privacy, and abuse attacks against AI systems.

book reviewed 2024

AI Security: A Comprehensive Guide to Threats, Defenses, and Best Practices

Gary McGraw, Harold Figueroa, Victor Shepardson + 1 more — Berryville Institute of Machine Learning

Practitioner guide to AI/ML security from an architectural risk analysis perspective, with practical defense patterns.

report reviewed open access 2024

Anthropic's Responsible Scaling Policy

Anthropic — Anthropic Blog

Framework defining AI Safety Levels (ASLs) for evaluating and managing risks from increasingly capable AI systems.

standard reviewed open access 2024

EU AI Act: Regulation on Artificial Intelligence

European Parliament — Official Journal of the European Union

The EU's comprehensive AI regulation establishing risk-based categories, conformity assessments, and requirements for high-risk AI systems.

tool reviewed open access 2024

OWASP Threat Dragon: AI-Aware Threat Modeling Tool

OWASP Foundation — OWASP / GitHub

Open-source threat modeling tool supporting AI/ML system threat models, data flow diagrams, and STRIDE methodology for GenAI applications.

paper reviewed open access 2024

CISA: Roadmap for Artificial Intelligence

Cybersecurity and Infrastructure Security Agency — CISA

CISA's strategic roadmap for AI, covering responsible use, assuring AI systems, securing AI adoption, and collaborating on AI governance.

paper reviewed open access 2023

Identifying and Mitigating the Security Risks of Generative AI

Clark Barrett, Brad Boyd, Elie Bursztein + 20 more — Foundations and Trends in Privacy and Security

Comprehensive treatment of generative AI security risks across the ML lifecycle, with a focus on practical mitigations and deployment considerations.

standard reviewed open access 2023

NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)

National Institute of Standards and Technology — NIST

Voluntary framework for managing risks in AI systems across the lifecycle, organized into four functions: Govern, Map, Measure, and Manage.

standard reviewed 2023

ISO/IEC 42001:2023 - Artificial Intelligence Management System

International Organization for Standardization — ISO

International standard specifying requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations.

report reviewed open access 2023

OpenAI: Preparedness Framework (Beta)

OpenAI — OpenAI Blog

OpenAI's framework for tracking, evaluating, forecasting, and protecting against catastrophic risks from frontier AI models.

report reviewed open access 2023

Google: Secure AI Framework (SAIF)

Google — Google Security Blog

Google's conceptual framework for building secure AI systems, built on six core elements spanning security foundations, detection and response, automated defenses, and risk contextualization.