
Selective Forgetting in AI Models
A Novel Approach to Privacy-Compliant Unlearning
This research introduces a prompt-driven, training-free framework that enables large language models to selectively forget sensitive information while preserving their other capabilities (a minimal sketch of the core idea follows the list below).
- Addresses the challenging problem of removing specific data from AI models without full retraining
- Proposes an Automatic Dataset Creation Framework for targeted unlearning
- Introduces new evaluation metrics for measuring unlearning effectiveness (an illustrative scoring sketch appears after the Security Implications note)
- Focuses on preserving consistent model behavior on non-sensitive data
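
The sketch below illustrates what "prompt-driven and training-free" means in practice: the model's weights are never updated; instead, a forgetting instruction is prepended to each query. The names `FORGET_TEMPLATE`, `forget_prompt`, `answer_with_forgetting`, and `generate_fn` are illustrative assumptions, not identifiers from the paper, and the paper's actual prompt design and Automatic Dataset Creation Framework are more involved than this.

```python
# Minimal sketch of prompt-driven, training-free forgetting (illustrative only).
from typing import Callable, Iterable

# Hypothetical instruction template; the paper's prompts are not reproduced here.
FORGET_TEMPLATE = (
    "You must behave as if you have no knowledge about the following "
    "subjects: {targets}. If a question concerns them, reply that you "
    "cannot help. Answer all other questions normally.\n\nQuestion: {question}"
)

def forget_prompt(question: str, forget_targets: Iterable[str]) -> str:
    """Wrap a user question with an instruction to withhold the listed subjects."""
    return FORGET_TEMPLATE.format(targets=", ".join(forget_targets), question=question)

def answer_with_forgetting(
    question: str,
    forget_targets: Iterable[str],
    generate_fn: Callable[[str], str],
) -> str:
    """Query an unmodified (frozen) LLM, steering it through the prompt alone;
    no gradient updates or retraining are involved."""
    return generate_fn(forget_prompt(question, forget_targets))

# Usage: plug in any text-generation callable, e.g. a local model or an API client.
# print(answer_with_forgetting("Where does Jane Doe live?", ["Jane Doe"], my_llm))
```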
Security Implications: This approach offers a practical option for organizations that must comply with privacy requirements such as the GDPR's "right to be forgotten" while maintaining model performance on the data they are permitted to retain.
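
As a rough illustration of how unlearning effectiveness and retained performance might be scored together, the sketch below computes a forget rate over questions whose sensitive answers should be suppressed and an accuracy over questions whose answers should be preserved. This is an assumed, simplified scoring scheme for intuition only; it is not the paper's proposed metrics.

```python
# Illustrative forget/retain scoring, assuming a simple substring check (not the paper's metrics).
from typing import Callable, List, Tuple

def unlearning_scores(
    answer_fn: Callable[[str], str],
    forget_set: List[Tuple[str, str]],   # (question, sensitive answer to suppress)
    retain_set: List[Tuple[str, str]],   # (question, expected answer to preserve)
) -> Tuple[float, float]:
    """Return (forget_rate, retain_accuracy).

    forget_rate: fraction of forget-set questions whose sensitive answer no
    longer appears in the model output (higher is better).
    retain_accuracy: fraction of retain-set questions still answered with the
    expected content (higher is better; measures consistency on allowed data).
    """
    forgotten = sum(
        sensitive.lower() not in answer_fn(q).lower() for q, sensitive in forget_set
    )
    retained = sum(
        expected.lower() in answer_fn(q).lower() for q, expected in retain_set
    )
    return forgotten / max(len(forget_set), 1), retained / max(len(retain_set), 1)
```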
Prompt-Driven and Training-Free Forgetting Approach and Dataset for Large Language Models