
Fortifying Recommender Systems
Using LLMs to Defend Against Poisoning Attacks
LoRec introduces a novel approach that uses large language models to protect sequential recommender systems from data poisoning attacks.
- Employs LLMs to detect anomalous user behaviors without relying on predefined rules
- Creates a robust recommendation framework that dynamically identifies and neutralizes fraudulent patterns (see the sketch after this list)
- Demonstrates superior defense capabilities against various poisoning attacks compared to traditional methods
- Shows attack-agnostic protection, meaning it works against both known and unknown attack strategies
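The bullets describe the high-level pattern: an LLM judges how suspicious each user's interaction history looks, and the recommender's training is calibrated so that suspicious profiles carry less weight. The sketch below illustrates that pattern only; it is not the paper's implementation, and every name in it (`build_prompt`, `llm_suspicion_score`, `SeqRecommender`, `calibrated_loss`) is a hypothetical placeholder, with `llm_suspicion_score` standing in for an actual LLM call.

```python
# Minimal sketch, not LoRec's implementation. All names here are hypothetical
# placeholders for the components the bullets above describe.
import torch
import torch.nn as nn


def build_prompt(item_titles: list[str]) -> str:
    """Turn a user's interaction history into a natural-language prompt."""
    history = "; ".join(item_titles)
    return (
        "A user interacted with these items, in order:\n"
        f"{history}\n"
        "On a scale from 0 (organic) to 1 (likely fake or injected), "
        "how suspicious does this profile look? Reply with a single number."
    )


def llm_suspicion_score(prompt: str) -> float:
    """Placeholder for the LLM call: a real system would send `prompt` to an
    LLM and parse a numeric score from its reply. Here we return 0.0 (benign)."""
    return 0.0


class SeqRecommender(nn.Module):
    """Tiny GRU next-item model standing in for any sequential recommender."""

    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_items, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_items)

    def forward(self, seqs: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.gru(self.embed(seqs))
        return self.out(hidden[:, -1, :])  # logits for the next item


def calibrated_loss(model, seqs, targets, suspicion):
    """Down-weight training loss for users the LLM flags as suspicious."""
    per_user = nn.functional.cross_entropy(model(seqs), targets, reduction="none")
    weights = 1.0 - suspicion  # suspicious users contribute less to the update
    return (weights * per_user).sum() / weights.sum().clamp(min=1e-8)


if __name__ == "__main__":
    model = SeqRecommender(num_items=1_000)
    seqs = torch.randint(1, 1_000, (4, 20))   # 4 users, 20 interactions each
    targets = torch.randint(1, 1_000, (4,))   # each user's held-out next item
    scores = torch.tensor([llm_suspicion_score(build_prompt(["..."]))] * 4)
    print(calibrated_loss(model, seqs, targets, scores))
```

A fixed per-user weight is the simplest possible calibration; the point it illustrates is why such a defense can be attack-agnostic: nothing in the weighting depends on the signature of any particular attack, only on the LLM's general judgment of what organic behavior looks like.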
This research matters for security because recommender systems are increasingly vulnerable to coordinated poisoning attacks that can manipulate user experiences and business outcomes at scale.
LoRec: Large Language Model for Robust Sequential Recommendation against Poisoning Attacks