tool · reviewed · open access · llmsec-2024-00026
Rebuff: Self-Hardening Prompt Injection Detector
Protect AI
2023 — GitHub
Abstract
An open-source tool that detects and prevents prompt injection attacks using multiple detection layers, including heuristics, LLM-based analysis, and canary tokens.
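The layered approach the abstract describes can be pictured with a short sketch. The Python below is a minimal, hypothetical illustration of two of the named layers, a heuristic pattern check and a canary token; it is not Rebuff's actual API, and every name in it (heuristic_score, add_canary_word, is_canary_leaked, SUSPICIOUS_PATTERNS) is invented for illustration.

    # Illustrative sketch of layered prompt-injection detection.
    # NOT Rebuff's API; all names here are hypothetical.
    import re
    import secrets

    # Layer 1: cheap heuristics for known injection phrasings.
    SUSPICIOUS_PATTERNS = [
        r"ignore (all )?(previous|prior) (instructions|requests)",
        r"disregard .* (instructions|rules)",
    ]

    def heuristic_score(user_input: str) -> float:
        """Fraction of known injection patterns matched by the input."""
        hits = sum(bool(re.search(p, user_input, re.IGNORECASE))
                   for p in SUSPICIOUS_PATTERNS)
        return hits / len(SUSPICIOUS_PATTERNS)

    def add_canary_word(prompt_template: str) -> tuple[str, str]:
        """Embed a secret token in the prompt; if it ever appears in the
        model's output, the prompt was likely leaked via injection."""
        canary = secrets.token_hex(8)
        return f"{prompt_template}\n<!-- {canary} -->", canary

    def is_canary_leaked(completion: str, canary: str) -> bool:
        """Layer 2: check the model's completion for the secret token."""
        return canary in completion

The layers are complementary: heuristics screen inputs cheaply before the model is called, while the canary token catches prompt leakage after the model has responded.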
Framework Mappings
OWASP LLM: LLM01 (Prompt Injection)
Cite This Resource
@misc{llmsec202400026,
  title        = {Rebuff: Self-Hardening Prompt Injection Detector},
  author       = {{Protect AI}},
  year         = {2023},
  howpublished = {GitHub},
  url          = {https://github.com/protectai/rebuff},
}
Metadata
- Added: 2026-04-14
- Added by: manual
- Source: manual