The Dark Side of LLMs: Authorship Attacks

How adversaries can mask writing styles or impersonate others

This research investigates how malicious actors can exploit LLMs to defeat authorship verification systems through strategic obfuscation and impersonation techniques.

  • Authorship obfuscation: LLMs can help mask a writer's unique stylistic patterns to avoid detection (see the sketch after this list)
  • Impersonation attacks: LLMs enable sophisticated mimicry of target authors' writing styles
  • Defense weakness: Current authorship verification models show significant vulnerability to these attacks
  • Security implications: As LLMs advance, the risk of undetectable authorship attacks grows
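To make the obfuscation bullet concrete, here is a minimal, hypothetical sketch that is not taken from the paper: the attacker repeatedly paraphrases a text with an LLM until a crude stylometric verifier no longer links it to their known writing. The verifier below is a simple stand-in (character n-gram cosine similarity via scikit-learn), `llm_paraphrase` is a placeholder for any LLM rewriting call, and the threshold is an illustrative assumption rather than a value from the study.

```python
# Hypothetical obfuscation-attack loop against a toy stylometric verifier.
# The verifier and threshold are illustrative stand-ins, not the models
# evaluated in the paper; llm_paraphrase is a placeholder for an LLM call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def style_similarity(text_a: str, text_b: str) -> float:
    """Crude authorship-verification proxy: character n-gram cosine similarity."""
    vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 4))
    tfidf = vec.fit_transform([text_a, text_b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


def llm_paraphrase(text: str) -> str:
    """Placeholder for an LLM rewrite (e.g. 'rephrase this in a neutral style')."""
    # A real attack would query an LLM here; this stub just returns the input.
    return text


def obfuscate(known_writing: str, new_text: str,
              threshold: float = 0.5, max_rounds: int = 5) -> str:
    """Paraphrase new_text until the verifier no longer links it to known_writing."""
    candidate = new_text
    for _ in range(max_rounds):
        if style_similarity(known_writing, candidate) < threshold:
            break  # verifier no longer attributes the text to the attacker
        candidate = llm_paraphrase(candidate)
    return candidate
```

An impersonation attack follows the same loop in reverse: the adversary paraphrases toward a target author's samples until the verifier's similarity score rises above its acceptance threshold.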

This research highlights critical security concerns for fraud detection, academic integrity, and digital forensics as LLM capabilities continue to evolve.

Original Paper: Masks and Mimicry: Strategic Obfuscation and Impersonation Attacks on Authorship Verification
