Combating LLM-Generated Peer Reviews

Detecting unauthorized AI use in academic reviewing

This research addresses the growing concern of reviewers using LLMs to generate academic peer reviews, a practice that threatens scholarly integrity.

Key findings:

  • Develops specialized detection methods that differentiate fully LLM-generated reviews from those merely edited or polished with an LLM
  • Implements technical safeguards against reviewer evasion tactics
  • Demonstrates statistical approaches for identifying unauthorized LLM usage (a simplified sketch of one such test follows this list)
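
To make the statistical idea concrete, here is a minimal sketch of one plausible test, assuming a hypothetical watermarking scheme that biases an LLM's output toward a known "green" subset of tokens. The names, the 50% null rate, and the significance threshold are illustrative assumptions, not the paper's actual method.

```python
# Sketch: flag a review as likely LLM-generated when its count of "green"
# watermark tokens is too improbable under the human-written null hypothesis.
# GREEN_FRACTION and the alpha threshold are hypothetical parameters.

import math

GREEN_FRACTION = 0.5  # expected green-token rate if the review is human-written


def binomial_p_value(green_hits: int, total_tokens: int,
                     p_null: float = GREEN_FRACTION) -> float:
    """One-sided exact binomial tail: P(X >= green_hits | human-written)."""
    return sum(
        math.comb(total_tokens, k) * p_null**k * (1 - p_null)**(total_tokens - k)
        for k in range(green_hits, total_tokens + 1)
    )


def flag_review(green_hits: int, total_tokens: int, alpha: float = 1e-4) -> bool:
    """Flag the review when the green-token excess is statistically improbable."""
    return binomial_p_value(green_hits, total_tokens) < alpha


# Example: 380 green tokens out of 600 is far above the 50% chance rate,
# so the review is flagged.
print(flag_review(green_hits=380, total_tokens=600))  # True
```

An exact tail sum is used rather than a normal approximation so that short reviews, where the approximation is weakest, are not over-flagged.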

Security implications: Reliable detection mechanisms help preserve the integrity of the peer-review process on which scientific progress depends, heading off a potential crisis of trust in academic publishing.

Paper: Detecting LLM-Written Peer Reviews
