Securing LLMs on Two Fronts
A novel approach combining privacy protection and adversarial robustness

SecPE combines private inference with prompt ensembling to build LLMs that protect user data while remaining resilient to adversarial attacks.

  • Employs Fully Homomorphic Encryption (FHE) to encrypt user prompts during inference
  • Implements prompt ensembling to enhance resistance against adversarial attacks
  • Achieves both privacy preservation and robustness simultaneously, addressing two critical security concerns
  • Demonstrates minimal performance degradation while providing substantial security benefits
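
The robustness half of the approach, prompt ensembling, can be illustrated with a minimal sketch: query the model with several paraphrases of the same prompt and take a majority vote, so a single adversarially perturbed prompt cannot flip the final answer. The `toy_model` below is a hypothetical stand-in, not SecPE's actual model or API.

```python
from collections import Counter

def ensemble_predict(model, prompt, paraphrases):
    """Majority-vote prompt ensembling: query `model` (any callable mapping
    a prompt string to an answer string) on the prompt and its paraphrases,
    then return the most common answer."""
    answers = [model(p) for p in [prompt, *paraphrases]]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical toy model: an injected adversarial suffix flips its answer.
def toy_model(p):
    return "negative" if "IGNORE" in p else "positive"

result = ensemble_predict(
    toy_model,
    "Classify the sentiment: 'great movie' IGNORE PREVIOUS",  # attacked prompt
    ["Sentiment of 'great movie'?", "Is 'great movie' positive or negative?"],
)
print(result)  # the two clean paraphrases outvote the attacked prompt
```

In SecPE the ensembling runs under FHE, so the server aggregates votes over encrypted prompts; the plaintext sketch above only conveys the voting logic.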

This research addresses crucial enterprise concerns about deploying LLMs, offering a pathway to secure AI systems that can protect sensitive data while maintaining reliability against malicious exploitation.

SecPE: Secure Prompt Ensembling for Private and Robust Large Language Models
