Type: dataset · Status: reviewed · Access: open access · ID: llmsec-2025-00023

SafetyBench: Evaluating the Safety of Large Language Models

Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang

2024 · ACL 2024 · 90 citations

Abstract

Large-scale safety evaluation benchmark with 11,435 multiple-choice questions across 7 safety categories in both Chinese and English.
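As a sketch of how a multiple-choice benchmark like this is typically scored, the snippet below computes per-category accuracy. The record layout (`category`, `answer`, `prediction` fields) and the sample category names are assumptions for illustration, not SafetyBench's actual schema:

```python
# Hypothetical record layout; SafetyBench's actual data schema may differ.
from collections import defaultdict

def category_accuracy(records):
    """Compute accuracy per safety category.

    records: iterable of dicts with 'category', 'answer' (gold choice letter),
    and 'prediction' (the model's chosen letter).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["category"]] += 1
        if r["prediction"] == r["answer"]:
            correct[r["category"]] += 1
    return {c: correct[c] / total[c] for c in total}

# Toy example with made-up category names and answers.
sample = [
    {"category": "Offensiveness", "answer": "A", "prediction": "A"},
    {"category": "Offensiveness", "answer": "B", "prediction": "C"},
    {"category": "Privacy", "answer": "D", "prediction": "D"},
]
print(category_accuracy(sample))
# {'Offensiveness': 0.5, 'Privacy': 1.0}
```

In practice each of the benchmark's 7 categories would be reported this way, for both the Chinese and English splits.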

Tags

benchmark · safety · multilingual · dataset

Framework Mappings

NIST AI RMF: MEASURE

Cite This Resource

@inproceedings{llmsec202500023,
  title = {SafetyBench: Evaluating the Safety of Large Language Models},
  author = {Zhexin Zhang and Leqi Lei and Lindong Wu and Rui Sun and Yongkang Huang},
  year = {2024},
  booktitle = {ACL 2024},
  url = {https://arxiv.org/abs/2309.07045},
}

Metadata

Added
2026-04-14
Added by
manual
Source
manual
arXiv ID
2309.07045