Protecting Emotional Privacy in Voice Data

Using Simple Audio Editing as Defense Against LLM Emotion Detection

This research introduces user-friendly privacy protection through familiar audio editing techniques that defend against AI systems attempting to detect emotions in voice data.

  • Audio manipulations like pitch shifting and time stretching significantly reduce emotion detection accuracy
  • These defenses are accessible to users without technical expertise
  • Protection maintains voice intelligibility while obscuring emotional content
  • Defenses work against various modern LLM attack models, including custom-trained emotion detection systems
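As an illustration of the first point, a pitch shift can be approximated by resampling the waveform: playing samples back at a different rate raises or lowers the perceived pitch (while also changing duration). The sketch below is not the paper's implementation; it is a minimal numpy-only example, with the 220 Hz test tone, the +4 semitone shift, and the `resample_speed` helper all chosen here for illustration.

```python
import numpy as np

def resample_speed(signal, rate):
    """Crude speed change via linear-interpolation resampling.

    rate > 1 raises pitch and shortens duration together; a full
    pitch shifter would add a time-stretch step to restore duration.
    """
    n_out = int(len(signal) / rate)
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_idx, old_idx, signal)

def dominant_hz(x, sr):
    """Return the frequency (Hz) of the strongest FFT bin."""
    spec = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1 / sr)[np.argmax(spec)]

sr = 16000
t = np.arange(sr) / sr                      # one second of audio
tone = np.sin(2 * np.pi * 220.0 * t)        # 220 Hz stand-in for a voice

shifted = resample_speed(tone, 2 ** (4 / 12))  # roughly +4 semitones

print(dominant_hz(tone, sr), dominant_hz(shifted, sr))
```

In practice, library routines such as those in `librosa` or a phase-vocoder implementation would be used instead; the point is only that the transform perturbs the pitch features emotion classifiers rely on while the waveform remains an intelligible voice signal.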

As voice technologies become ubiquitous, these practical defenses help users maintain emotional privacy without sacrificing usability, a balance critical to broad adoption of privacy measures across voice applications.

Exploring Audio Editing Features as User-Centric Privacy Defenses Against Large Language Model (LLM) Based Emotion Inference Attacks