← Back to all categories
Fuzzing
2 resourcesRed Teaming & Evaluation
LLM fuzzing, automated prompt testing, and input generation
paper reviewed open access 2024
Garak: A Framework for Security Probing Large Language Models
Leon Derczynski, Erick Galinkin, Jeffrey Martin + 2 more — arXiv preprint
Presents garak, an open-source framework for systematically probing LLM vulnerabilities including prompt injection, data leakage, and toxicity generation.
tool reviewed open access 2024
PyRIT: Python Risk Identification Toolkit for Generative AI
Microsoft AI Red Team — GitHub / Microsoft
Microsoft's open-source framework for red teaming generative AI systems, supporting automated prompt generation, attack strategies, and scoring of AI responses.