Tracking LLM-Generated Disinformation

New evidence of AI-generated content in multilingual disinformation campaigns

This study provides empirical evidence of LLM-generated content in real-world disinformation, moving the discussion beyond theoretical concerns to documented presence in the wild.

  • Found increasing prevalence of LLM-generated text in multilingual disinformation campaigns
  • Identified unique linguistic patterns that distinguish AI-generated content across languages
  • Revealed that current detection methods have significant limitations in real-world contexts
  • Demonstrated concrete security risks in specific "long-tail" contexts previously considered low-risk

These findings challenge the narrative that concerns about LLM misuse are overblown, and they highlight actionable security vulnerabilities requiring attention from both technology providers and security professionals.

Beyond speculation: Measuring the growing presence of LLM-generated texts in multilingual disinformation