← Back to search
paper reviewed open access llmsec-2024-00047
Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models
Jingwei Yi, Yueqi Xie, Bin Zhu, Keegan Hines, Emre Kiciman, Guangzhong Sun, Xing Xie, Fangzhao Wu
2024-01 — arXiv preprint 60 citations
Abstract
Provides a benchmark for indirect prompt injection attacks and evaluates several defense strategies including perplexity-based detection and sandwich defense.
Categories
Tags
indirect-injectionbenchmarkdefense-evaluation
Framework Mappings
OWASP LLM: LLM01 MITRE ATLAS: AML.T0051
Cite This Resource
@article{llmsec202400047,
title = {Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models},
author = {Jingwei Yi and Yueqi Xie and Bin Zhu and Keegan Hines and Emre Kiciman and Guangzhong Sun and Xing Xie and Fangzhao Wu},
year = {2024},
journal = {arXiv preprint},
url = {https://arxiv.org/abs/2312.14197},
} Metadata
- Added
- 2026-04-14
- Added by
- manual
- Source
- manual
- arxiv_id
- 2312.14197