Protecting User Privacy in Cloud LLMs

A framework for pseudonymizing sensitive information in LLM prompts

This research introduces a general pseudonymization framework that preserves privacy when users interact with cloud-based LLMs like ChatGPT, without requiring model modifications.

  • Replaces sensitive information in user prompts with pseudonyms before transmission to LLM providers
  • Maintains context and semantic meaning while hiding personally identifiable information
  • Automatically restores original information in the model's response
  • Works with various cloud LLMs without requiring access to model internals
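The replace-query-restore flow above can be sketched in a few lines. This is a minimal illustration, not the paper's actual framework: detection here uses a toy regex for email addresses, whereas a real deployment would use a proper PII detector, and all function names are hypothetical.

```python
import re

# Toy detector: matches email addresses only (illustrative assumption).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(prompt):
    """Replace each detected email with a placeholder token.

    Returns the masked prompt and a token -> original mapping,
    which is kept client-side and never sent to the LLM provider.
    """
    mapping = {}
    def repl(match):
        token = f"<EMAIL_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    masked = EMAIL_RE.sub(repl, prompt)
    return masked, mapping

def restore(response, mapping):
    """Substitute the original values back into the model's response."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

masked, mapping = pseudonymize("Contact alice@example.com about the invoice.")
# The masked prompt is what gets transmitted to the cloud LLM;
# its response is then de-pseudonymized locally before display.
print(masked)
print(restore("I will reply to <EMAIL_0> today.", mapping))
```

Because the mapping never leaves the client, the provider sees only placeholders, yet the user sees a fully restored response.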

This approach addresses a critical security concern for organizations using third-party LLMs: it creates a protective layer between users and model providers, helping businesses comply with data privacy regulations while still leveraging cloud AI capabilities.

Original Paper: A General Pseudonymization Framework for Cloud-Based LLMs: Replacing Privacy Information in Controlled Text Generation
