
Selective Memory: Unlearning Sensitive Content in LLMs
Parameter-Efficient Techniques for Enhanced AI Privacy
This research introduces parameter-efficient methods for selectively removing sensitive information from large language models while preserving their general capabilities.
- Combines LoRA adaptation with layer-focused fine-tuning for efficient unlearning (see the first sketch after this list)
- Splits the forget data into fixed-size chunks rather than processing it in a single pass
- Merges each forget chunk with retain samples drawn cyclically at a predefined forget-to-retain ratio (see the second sketch below)
- Achieves targeted unlearning with minimal impact on model performance
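
A minimal sketch of the layer-focused LoRA setup, assuming the Hugging Face `transformers` and `peft` libraries; the model name, rank, target modules, and layer indices are illustrative assumptions, not the paper's reported configuration:

```python
# Sketch only: model name, rank, and layer band are placeholder assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Layer-focused LoRA: adapters are injected only into a chosen band of
# transformer blocks (here, blocks 24-31), so the unlearning update stays
# parameter-efficient and localized to a subset of layers.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],       # attention projections only
    layers_to_transform=list(range(24, 32)),   # restrict to upper layers
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirm only LoRA weights are trainable
```

Restricting `layers_to_transform` is one way to realize "layer-focused" fine-tuning; it keeps the base weights frozen everywhere and the trainable adapters concentrated where the targeted knowledge is assumed to live.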
These techniques address critical security and privacy concerns by enabling precise control over what LLMs can and cannot recall, creating safer AI systems that respect data privacy while maintaining utility.
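
A second minimal sketch, in plain Python, of the chunk-and-merge data pipeline described in the list above; the chunk size and retain ratio are hypothetical placeholders, not values from the paper:

```python
# Sketch only: chunk_size and retain_per_forget are illustrative defaults.
from itertools import cycle

def build_unlearning_batches(forget_data, retain_data,
                             chunk_size=64, retain_per_forget=3):
    """Split the forget set into fixed-size chunks, then merge each chunk
    with retain samples drawn cyclically at a fixed forget-to-retain ratio."""
    retain_stream = cycle(retain_data)  # wraps around when retain data runs out
    for start in range(0, len(forget_data), chunk_size):
        chunk = forget_data[start:start + chunk_size]
        retain = [next(retain_stream)
                  for _ in range(len(chunk) * retain_per_forget)]
        yield chunk + retain  # one mixed training partition

# Example: 10 forget samples, 4 retain samples, chunks of 4, ratio 1:2
forget = [f"f{i}" for i in range(10)]
retain = [f"r{i}" for i in range(4)]
batches = list(build_unlearning_batches(forget, retain,
                                        chunk_size=4, retain_per_forget=2))
```

Cycling the retain set keeps every mixed partition at the same predefined ratio even when retain data is much scarcer than forget data, which is what anchors the model's general capabilities during unlearning.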