
Securing Edge-Cloud LLM Systems
Joint optimization for prompt security and system performance
This research introduces a novel framework that simultaneously optimizes prompt security and system performance in edge-cloud LLM systems, addressing the rising threat of prompt engineering-based attacks.
- Employs a two-phase detection mechanism to identify malicious prompts before they cause harm
- Creates a security-aware scheduler that balances robust security with minimal latency
- Achieves 99.4% attack detection rate while maintaining high system efficiency
- Demonstrates practical implementation across diverse edge-cloud configurations
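The two-phase detection and security-aware scheduling described above can be sketched as follows. This is an illustrative outline, not the paper's implementation: the patterns, thresholds, load cutoff, and function names (`phase1_screen`, `phase2_score`, `schedule`) are all assumptions made for the example.

```python
import re
from dataclasses import dataclass

# Hypothetical phase-1 patterns: a cheap screen that could run at the edge.
# These regexes are illustrative, not taken from the paper.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal .*system prompt",
    r"disable .*safety",
]

def phase1_screen(prompt: str) -> bool:
    """Phase 1: lightweight edge-side screen for known attack patterns."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def phase2_score(prompt: str) -> float:
    """Phase 2: stand-in for a heavier cloud-side classifier that would
    return an attack probability; here it just counts pattern hits."""
    lowered = prompt.lower()
    hits = sum(bool(re.search(p, lowered)) for p in SUSPICIOUS_PATTERNS)
    return min(1.0, 0.5 * hits)

@dataclass
class Decision:
    route: str   # "edge", "cloud", or "reject"
    risk: float

def schedule(prompt: str, edge_load: float,
             reject_threshold: float = 0.8) -> Decision:
    """Security-aware routing sketch: benign-looking prompts are served
    wherever latency is lowest; flagged prompts get the phase-2 check and
    are rejected or served under closer cloud-side monitoring."""
    if not phase1_screen(prompt):
        # No phase-1 hit: pick the route with the lowest expected latency.
        route = "edge" if edge_load < 0.7 else "cloud"
        return Decision(route, 0.0)
    risk = phase2_score(prompt)
    if risk >= reject_threshold:
        return Decision("reject", risk)
    return Decision("cloud", risk)
```

A usage example: a benign prompt on a lightly loaded edge node routes to `"edge"`, while a prompt matching multiple attack patterns scores high enough to be rejected before it consumes model inference resources.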
Why it matters: As LLMs are increasingly deployed at the edge, robust security measures are essential to prevent privacy leaks and wasted compute without compromising performance, filling a critical gap in current enterprise AI security strategies.