Protecting Privacy in LLMs

Achieving robust PII protection without sacrificing model performance

This research introduces Proactive Privacy Amnesia (PPA), a novel approach that removes personally identifiable information (PII) from large language models while preserving their utility.

  • Applies concepts of amnesia from cognitive science to selectively remove PII from model knowledge (sketched below)
  • Achieves over 90% reduction in PII leakage while preserving core model capabilities
  • Demonstrates negligible impact on general performance metrics and downstream tasks
  • Provides a practical solution for balancing privacy protection with model functionality
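
The summary gives no implementation details, so the following is only a rough sketch of the general selective-forgetting idea that approaches like PPA build on: raise the model's loss on the memorized PII tokens while leaving every other token out of the objective. Everything here (the HuggingFace model, the `forget_pii` helper, the learning rate) is an illustrative assumption, not the authors' code.

```python
# Minimal sketch of selective PII forgetting (illustrative, not the PPA
# authors' implementation). Assumes a HuggingFace causal LM; the names
# below (forget_pii, the gpt2 checkpoint, the learning rate) are
# placeholder choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def forget_pii(text: str, pii: str) -> None:
    """One gradient-ascent step that lowers the model's likelihood of the
    PII tokens inside `text`; non-PII tokens are excluded from the loss."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    labels = torch.full_like(input_ids, -100)  # -100 = ignored by LM loss
    seq = input_ids[0].tolist()

    # Naive token-level span match; BPE tokenizers often encode a word
    # differently after a space, so try both variants.
    for variant in (pii, " " + pii):
        pii_ids = tokenizer(variant, add_special_tokens=False)["input_ids"]
        for i in range(len(seq) - len(pii_ids) + 1):
            if seq[i:i + len(pii_ids)] == pii_ids:
                labels[0, i:i + len(pii_ids)] = input_ids[0, i:i + len(pii_ids)]

    if (labels == -100).all():
        return  # PII span not found at the token level; nothing to forget

    # Negating the cross-entropy turns the update into gradient ascent,
    # pushing probability mass away from the memorized PII tokens.
    loss = -model(input_ids, labels=labels).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example: unlearn a memorized phone number from a training sentence.
forget_pii("Contact John Doe at 555-0123 for details.", "555-0123")
```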

For security teams, this research offers an implementable framework to protect sensitive user information in deployed LLM systems without the performance degradation that existing unlearning methods incur.

Proactive Privacy Amnesia for Large Language Models: Safeguarding PII with Negligible Impact on Model Utility
