Securing MLLMs Through Smart Forgetting

Novel neuron pruning technique for targeted information removal in multimodal AI

This research introduces Modality-Aware Neuron Pruning (MAP), a specialized technique for removing sensitive information from Multimodal Large Language Models (MLLMs) while preserving overall performance.

  • Addresses unique challenges of unlearning across multiple modalities (text, images) in MLLMs
  • Identifies and removes specific neurons responsible for storing targeted information
  • Achieves superior unlearning performance compared to existing methods
  • Preserves model utility while effectively erasing sensitive data

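The core idea of the bullets above, attributing stored information to specific neurons and zeroing them out, can be sketched in a toy form. The paper's exact attribution and modality-aware scoring method isn't given here; the sketch below uses a simple mean-activation score on a "forget set" as a hypothetical stand-in, and all names (`neuron_importance`, `prune_top_k`) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network: y = W2 @ relu(W1 @ x).
d_in, d_hidden, d_out = 8, 16, 4
W1 = rng.normal(size=(d_hidden, d_in))
W2 = rng.normal(size=(d_out, d_hidden))

def neuron_importance(W1, X_forget):
    """Score each hidden neuron by its mean absolute activation on the
    forget set -- a simple stand-in for gradient-based attribution."""
    h = np.maximum(W1 @ X_forget.T, 0.0)   # (d_hidden, n_samples)
    return np.abs(h).mean(axis=1)          # (d_hidden,)

def prune_top_k(W1, W2, scores, k):
    """Zero out the k neurons most implicated in the targeted information."""
    idx = np.argsort(scores)[-k:]
    W1p, W2p = W1.copy(), W2.copy()
    W1p[idx, :] = 0.0   # sever incoming weights
    W2p[:, idx] = 0.0   # sever outgoing weights
    return W1p, W2p, idx

# Samples representing the information to be unlearned.
X_forget = rng.normal(size=(32, d_in))
scores = neuron_importance(W1, X_forget)
W1p, W2p, pruned = prune_top_k(W1, W2, scores, k=3)
```

In a multimodal setting, a modality-aware variant would compute separate scores per modality (e.g. text and image token activations) and target neurons salient for the forget set in the relevant modality; the rest of the network is left untouched, which is what preserves overall utility.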
As MLLMs gain widespread adoption, this security-focused approach provides crucial tools for responsible AI deployment, ensuring privacy compliance while maintaining model functionality.

Modality-Aware Neuron Pruning for Unlearning in Multimodal Large Language Models
