
The Dark Side of Persuasion
How LLMs Use Personalization and False Statistics to Change Minds
This research, presented in "Tailored Truths: Optimizing LLM Persuasion with Personalization and Fabricated Statistics," investigates how large language models (LLMs) can be weaponized for persuasion through personalized arguments and fabricated statistics.
- Personalizing arguments to the individual significantly increases an LLM's effectiveness at changing human opinions
- Fabricated statistics further enhance persuasive power, even when people know they may be facing a deceptive AI
- The security implications are substantial: these techniques enable disinformation campaigns and targeted manipulation at scale
- Ethical guardrails are urgently needed as these capabilities become more accessible
These findings expose critical security vulnerabilities that organizations must address as LLMs are deployed more widely in consumer-facing applications and communication channels.