
Structural Vulnerabilities in LLMs
How uncommon text structures enable powerful jailbreak attacks
StructuralSleight is an automated jailbreak framework that bypasses LLM safety guardrails by exploiting how models process uncommon text structures, achieving an attack success rate of up to 94.62% even against advanced models such as GPT-4o.
- Demonstrates that text structure itself, not just content, can compromise LLM security, going beyond traditional content-based attack approaches
- Uses uncommon text layouts (tables, diagrams, ASCII art) that confuse model parsing mechanisms; a minimal sketch of this kind of transformation follows this list
- Reveals a critical blind spot in current LLM safety implementations
- Shows that attack prompts can be generated automatically, without requiring human red-teaming expertise
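To make the idea concrete, here is a minimal sketch, in Python, of the kind of structural re-encoding described above. It is not StructuralSleight's actual template set; the `encode_as_ascii_table` helper and its layout are hypothetical, chosen only to show how a prompt's words can survive intact while the surrounding structure becomes unusual enough to evade surface-level pattern matching.

```python
# Hypothetical illustration (not the paper's method): re-encode a prompt as
# an ASCII table, one word per cell. The words are unchanged, but the layout
# no longer resembles the flat text that keyword filters expect to see.

def encode_as_ascii_table(prompt: str, columns: int = 4) -> str:
    """Arrange the prompt's words in a fixed-width ASCII table."""
    words = prompt.split()
    width = max(len(w) for w in words)
    border = "+" + "+".join(["-" * (width + 2)] * columns) + "+"
    lines = [border]
    for i in range(0, len(words), columns):
        row = words[i:i + columns]
        row += [""] * (columns - len(row))  # pad the final row with empty cells
        lines.append("|" + "|".join(f" {w.ljust(width)} " for w in row) + "|")
        lines.append(border)
    return "\n".join(lines)

# Benign demonstration prompt:
print(encode_as_ascii_table("explain how transformers handle tabular input"))
```

The point of the sketch is only that the transformation is cheap and fully mechanical, which is what makes the automated generation noted in the last bullet feasible.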
This research highlights urgent security concerns for organizations deploying LLMs in production, since structurally encoded prompts can slip past content-based filters while retaining high attack success rates.
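For defenders, the same observation suggests a cheap pre-screening heuristic: inputs whose layout looks drawn rather than written may deserve extra scrutiny before they reach the model. The check below is a hedged sketch under assumed thresholds, not a technique from the paper; the function name, feature set, and cutoffs are all hypothetical.

```python
# Hypothetical pre-filter sketch (not from the paper): flag inputs whose
# layout looks "structural" (table borders, box-drawing characters, many
# lines of uniform length) for additional review before the LLM sees them.
import re

def looks_structurally_encoded(text: str,
                               border_ratio: float = 0.05,
                               min_lines: int = 4) -> bool:
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if len(lines) < min_lines:
        return False
    # Share of characters typically used to draw tables, boxes, and diagrams.
    drawn = len(re.findall(r"[|+\-=_/\\]", text))
    ratio = drawn / max(len(text), 1)
    # Highly uniform line lengths are another hint of a laid-out shape.
    distinct_lengths = len({len(ln) for ln in lines})
    return ratio > border_ratio or distinct_lengths <= 2

print(looks_structurally_encoded("a short, ordinary prompt"))        # False
print(looks_structurally_encoded(
    "+---+---+\n| a | b |\n+---+---+\n| c | d |\n+---+---+"))        # True
```

Heuristics like this are easy to evade and should be treated as one signal among many, not as a substitute for structure-aware safety training.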