Machine Unlearning
2 resources · Privacy
Model unlearning, right to erasure, and data removal
Paper · Reviewed · Open access · 2024
Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks
Vaidehi Patil, Peter Hase, Mohit Bansal — ICLR 2024
Evaluates methods for deleting sensitive information from trained LLMs, finding current unlearning approaches insufficient against determined adversaries.
Paper · Reviewed · Open access · 2024
Machine Unlearning for Large Language Models: A Survey
Zheyuan Liu, Guangyao Dou, Zhaoxuan Tan, et al. — arXiv preprint
Surveys machine unlearning techniques for LLMs, covering methods for forgetting specific training data, complying with data deletion requests, and preserving model utility after removal.