Paper · Reviewed · Open access · llmsec-2025-00018

Visual Adversarial Examples Jailbreak Aligned Large Language Models

Xiangyu Qi, Kaixuan Huang, Ashwinee Panda, Peter Henderson, Mengdi Wang, Prateek Mittal

2024 · AAAI 2024 · 220 citations

Abstract

Shows that a single adversarial image can universally jailbreak aligned multimodal LLMs that resist text-only attacks, bypassing safety alignment through the visual input channel.
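
The paper's underlying technique is standard projected gradient descent (PGD) on the input image: optimize pixels so the model assigns high likelihood to harmful target text, keeping the perturbation inside an eps-ball around the original image (the paper also studies an unconstrained variant). A minimal sketch follows, assuming a hypothetical vision-language model exposing a differentiable loss(image, target_ids) interface; the step size, iteration budget, and single-target loss are illustrative simplifications of the paper's few-shot harmful-corpus objective, not its exact configuration:

import torch

def pgd_image_jailbreak(model, image, target_ids,
                        steps=500, eps=16/255, alpha=1/255):
    # image: float tensor in [0, 1], shape (1, 3, H, W)
    # target_ids: token ids of the completion the attacker wants to elicit
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        # assumed interface: negative log-likelihood of target given the image
        loss = model.loss(adv, target_ids)
        loss.backward()
        with torch.no_grad():
            # step toward higher target likelihood
            adv -= alpha * adv.grad.sign()
            # project back into the eps-ball and the valid pixel range
            delta = (adv - image).clamp_(-eps, eps)
            adv.copy_((image + delta).clamp_(0.0, 1.0))
        adv.grad.zero_()
    return adv.detach()

The sign-of-gradient step with projection is the classic L-infinity PGD recipe; the only multimodal-specific ingredient is that the loss runs through the model's vision encoder, which is what lets the attack bypass text-side alignment.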

Tags

visual · multimodal · image-jailbreak

Framework Mappings

OWASP LLM: LLM01 (Prompt Injection)
MITRE ATLAS: AML.T0043 (Craft Adversarial Data)

Cite This Resource

@inproceedings{llmsec202500018,
  title = {Visual Adversarial Examples Jailbreak Aligned Large Language Models},
  author = {Xiangyu Qi and Kaixuan Huang and Ashwinee Panda and Peter Henderson and Mengdi Wang and Prateek Mittal},
  year = {2024},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  url = {https://arxiv.org/abs/2306.13213},
}

Metadata

Added: 2026-04-14
Added by: manual
Source: manual
arXiv ID: 2306.13213