Protecting User Prompts in Cloud LLMs

New security framework balances privacy and performance

This research introduces Secure Partitioned Decoding (SPD), a novel approach that protects user prompts in cloud LLM services while preserving the provider's model confidentiality and serving efficiency.

  • Uses trusted execution environments (TEEs) to isolate sensitive user prompts from the untrusted host
  • Adds prompt obfuscation to further reduce exposure of user data
  • Achieves output invariance, meaning decoding yields exactly the same tokens as an unpartitioned deployment, while keeping token generation efficient (see the sketch after this list)
  • Demonstrates a practical balance between prompt privacy and compute efficiency
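
The output-invariance bullet is the key technical idea: attention over the full sequence can be split into a private partition (the prompt's KV cache, kept inside the TEE) and a public partition (generated tokens, visible to the host), with the two partial results recombined exactly. Below is a minimal NumPy sketch of that split using the standard log-sum-exp merge; the function names, shapes, and partitioning are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def partial_attention(q, K, V):
    """Attention over one KV partition: returns the unnormalized weighted
    sum of values plus the statistics (normalizer, max score) needed to
    merge partitions exactly."""
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)        # (n,) dot-product scores
    m = scores.max()                   # local max, for numerical stability
    w = np.exp(scores - m)             # (n,) unnormalized softmax weights
    return w @ V, w.sum(), m

def merge(partials):
    """Recombine per-partition partials into the full-sequence softmax
    attention output via a log-sum-exp merge."""
    m = max(p[2] for p in partials)    # global max score
    num = sum(np.exp(p[2] - m) * p[0] for p in partials)
    den = sum(np.exp(p[2] - m) * p[1] for p in partials)
    return num / den

rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=d)                 # query for the current decoding step
K_prompt, V_prompt = rng.normal(size=(5, d)), rng.normal(size=(5, d))  # private: stays in the TEE
K_gen, V_gen = rng.normal(size=(3, d)), rng.normal(size=(3, d))        # public: host-visible

inside_tee = partial_attention(q, K_prompt, V_prompt)  # computed inside the enclave
on_host = partial_attention(q, K_gen, V_gen)           # computed by the untrusted host
merged = merge([inside_tee, on_host])

# Output invariance: the merged result equals attention over the
# concatenated sequence (up to float rounding), yet the host never
# sees the prompt's keys or values.
acc, den, _ = partial_attention(q, np.concatenate([K_prompt, K_gen]),
                                np.concatenate([V_prompt, V_gen]))
assert np.allclose(merged, acc / den)
```

This only illustrates the arithmetic of the partition; the actual protocol, batching, and TEE attestation details are specified in the paper.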

For security teams, the work offers actionable methods to shield sensitive user data from the cloud provider itself while preserving the performance benefits of cloud-based LLMs.

Confidential Prompting: Protecting User Prompts from Cloud LLM Providers
