← Back to search
tool reviewed open access llmsec-2024-00058
LLM Guard: Security Toolkit for LLM Interactions
Protect AI
2024 — GitHub
Abstract
Comprehensive toolkit for sanitizing LLM prompts and outputs, detecting prompt injection, PII leakage, toxic content, and code vulnerabilities.
Categories
Tags
toolsanitizationPII-detectionopen-source
Framework Mappings
OWASP LLM: LLM01 OWASP LLM: LLM02 OWASP LLM: LLM05
Cite This Resource
@article{llmsec202400058,
title = {LLM Guard: Security Toolkit for LLM Interactions},
author = {Protect AI},
year = {2024},
journal = {GitHub},
url = {https://github.com/protectai/llm-guard},
} Metadata
- Added
- 2026-04-14
- Added by
- manual
- Source
- manual