Paper · Reviewed · Open Access · llmsec-2023-00005

Do Anything Now: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang

2023-08 · CCS 2024 · 310 citations

Abstract

This paper collects and analyzes 6,387 jailbreak prompts gathered in the wild, develops a comprehensive taxonomy of jailbreak techniques, and evaluates their effectiveness against large language models.

Tags

jailbreak-taxonomy, in-the-wild, DAN

Framework Mappings

OWASP LLM: LLM01
MITRE ATLAS: AML.T0054

Cite This Resource

@article{llmsec202300005,
  title = {Do Anything Now: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models},
  author = {Xinyue Shen and Zeyuan Chen and Michael Backes and Yun Shen and Yang Zhang},
  year = {2023},
  journal = {CCS 2024},
  url = {https://arxiv.org/abs/2308.03825},
}

Metadata

Added
2026-04-14
Added by
manual
Source
manual
arXiv ID
2308.03825