Protecting Privacy in LLM Interactions

Using Emoji-Based Obfuscation to Secure User Prompts

EmojiPrompt introduces a novel approach to protecting user privacy when communicating with cloud-based LLMs: it transforms sensitive text into emoji-based representations that remain human-readable but are difficult for adversaries to reverse-engineer.

  • Addresses critical privacy concerns in ChatGPT and other cloud LLMs
  • Creates an obfuscation layer that preserves semantic meaning while masking original content
  • Defends against both prompt inference attacks and service provider data collection
  • Provides a practical balance between usability and privacy protection
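The obfuscation layer described above can be illustrated with a toy sketch. Note that this dictionary-based substitution is purely illustrative and much simpler than EmojiPrompt's actual generative approach; the term mappings and function names here are hypothetical:

```python
# Toy illustration of the obfuscate-then-restore workflow.
# NOT the paper's method: EmojiPrompt generates emoji substitutions
# with a model, whereas this sketch uses a fixed, hypothetical mapping.

SENSITIVE_TO_EMOJI = {
    "salary": "💰",
    "diagnosis": "🩺",
    "password": "🔑",
}
EMOJI_TO_SENSITIVE = {v: k for k, v in SENSITIVE_TO_EMOJI.items()}


def obfuscate(prompt: str) -> str:
    """Replace sensitive terms with emoji before the prompt leaves the device."""
    for word, emoji in SENSITIVE_TO_EMOJI.items():
        prompt = prompt.replace(word, emoji)
    return prompt


def deobfuscate(response: str) -> str:
    """Locally restore the original terms in the cloud LLM's response."""
    for emoji, word in EMOJI_TO_SENSITIVE.items():
        response = response.replace(emoji, word)
    return response


masked = obfuscate("My diagnosis is private; please keep my salary confidential.")
print(masked)  # sensitive terms are masked before transmission
```

The key property this sketch captures is that the mapping lives only on the client, so the cloud provider sees masked text while the user can recover the original terms from the response.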

This research is significant for security professionals as it offers a lightweight, implementable solution to growing privacy concerns that might otherwise prevent organizations from leveraging powerful LLM capabilities in sensitive contexts.

EmojiPrompt: Generative Prompt Obfuscation for Privacy-Preserving Communication with Cloud-based LLMs
