Protecting Emotional Privacy in Voice Data

Audio Editing as User-Friendly Defense Against LLM Inference Attacks

This research introduces a practical approach to safeguarding emotional privacy in speech by using common audio editing techniques that balance security and usability.

  • Familiar tools as defenses: Leverages accessible audio edits such as pitch shifting and spectral filtering to conceal emotional cues in speech (see the sketch after this list)
  • Effective protection: Demonstrates a significant reduction in emotion detection accuracy across multiple LLM attack scenarios
  • User-centric approach: Prioritizes solutions that users can apply easily without specialized knowledge
  • Balanced security: Maintains speech intelligibility while blocking emotional inference

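To make the defense concrete, the sketch below shows how a user-side tool might apply the two edits named above (pitch shifting and spectral filtering) to a recording before it is shared. This is a minimal illustration, not the paper's pipeline: the function name protect_voice, the semitone shift, and the cutoff frequency are assumptions chosen for readability, and it relies on the widely used librosa, SciPy, and soundfile libraries.

```python
# Minimal sketch: pitch-shift and low-pass filter a voice recording
# before sharing it, to perturb emotional cues while keeping speech
# intelligible. Parameter values are illustrative, not the paper's.
import librosa
import soundfile as sf
from scipy.signal import butter, sosfiltfilt


def protect_voice(in_path: str, out_path: str,
                  semitones: float = 3.0, cutoff_hz: float = 4000.0) -> None:
    # Load the recording at its native sample rate.
    y, sr = librosa.load(in_path, sr=None)

    # Pitch shifting: raise the pitch by a few semitones, perturbing
    # prosodic cues that emotion classifiers rely on.
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)

    # Spectral filtering: a low-pass Butterworth filter removes
    # high-frequency detail while preserving intelligibility.
    sos = butter(6, cutoff_hz, btype="low", fs=sr, output="sos")
    y = sosfiltfilt(sos, y)

    sf.write(out_path, y, sr)


if __name__ == "__main__":
    protect_voice("original.wav", "protected.wav")
```

Both operations are available in everyday audio editors, which is the point of the user-centric framing: no specialized infrastructure is needed on the user's side.
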
For security professionals, this work offers implementable privacy protections for voice-enabled technologies without requiring complex infrastructure changes or degrading user experience.

Exploring Audio Editing Features as User-Centric Privacy Defenses Against Large Language Model (LLM) Based Emotion Inference Attacks