Tags: tool, reviewed, open access · ID: llmsec-2024-00057
Guardrails AI: Input/Output Guards for LLM Applications
Guardrails AI
2024 — GitHub
Abstract
A framework for adding structural, type, and quality guarantees to LLM outputs, with validators covering PII, toxicity, code security, and factual accuracy.
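The validator pattern the abstract describes can be sketched in plain Python. This is an illustrative sketch only, not the Guardrails AI library's actual API: the function names, the regex-based PII check, and the guard interface are all assumptions for illustration.

```python
import re

def pii_validator(text):
    """Flag a simple form of PII (email addresses) via regex.
    Assumed example check, far cruder than a real PII validator."""
    if re.search(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", text):
        return "PII detected: email address"
    return None

def length_validator(text, max_chars=500):
    """Reject outputs that exceed a structural length limit."""
    if len(text) > max_chars:
        return f"Output exceeds {max_chars} characters"
    return None

def guard(text, validators):
    """Run each validator over an LLM output and collect failure messages."""
    failures = [msg for v in validators if (msg := v(text)) is not None]
    return {"valid": not failures, "failures": failures}

result = guard("Contact me at alice@example.com",
               [pii_validator, length_validator])
```

Here `result["valid"]` is `False` because the PII check fires; a real guard would then block, redact, or re-ask the model rather than just report.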
Framework Mappings
- OWASP LLM: LLM01
- OWASP LLM: LLM05
Cite This Resource
@article{llmsec202400057,
  title   = {Guardrails AI: Input/Output Guards for LLM Applications},
  author  = {Guardrails AI},
  year    = {2024},
  journal = {GitHub},
  url     = {https://github.com/guardrails-ai/guardrails},
}
Metadata
- Added: 2026-04-14
- Added by: manual
- Source: manual