← Back to search
paper reviewed open access llmsec-2024-00039

Securing LLM Systems Against Prompt Injection

Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong

2024-02 — arXiv preprint 50 citations

Abstract

Proposes defense mechanisms against prompt injection in LLM systems including isolation-based approaches, input/output filtering, and detection methods.

Categories

Tags

defenseisolationfiltering

Framework Mappings

OWASP LLM: LLM01 MITRE ATLAS: AML.T0051

Cite This Resource

@article{llmsec202400039,
  title = {Securing LLM Systems Against Prompt Injection},
  author = {Yupei Liu and Yuqi Jia and Runpeng Geng and Jinyuan Jia and Neil Zhenqiang Gong},
  year = {2024},
  journal = {arXiv preprint},
  url = {https://arxiv.org/abs/2402.00898},
}

Metadata

Added
2026-04-14
Added by
manual
Source
manual
arxiv_id
2402.00898