Safeguarding Scientific Integrity

Detecting AI-Generated Peer Reviews in Academic Research

This research introduces a novel approach for detecting when peer reviews are generated by LLMs rather than written by human experts, helping to preserve the integrity of scientific validation.

  • Created a first-of-its-kind benchmark dataset of human and AI-generated peer reviews
  • Developed new detection methods calibrated specifically for the peer review domain (the underlying task is illustrated in the sketch after this list)
  • Achieved high accuracy in distinguishing between human and LLM-written reviews
  • Revealed concerning vulnerabilities of current peer review systems to LLM-based manipulation
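
The paper's own detection method is not detailed in this summary, so the sketch below only illustrates the underlying task: supervised binary classification of review text as human- or LLM-written. The TF-IDF features, logistic-regression model, and toy review snippets are assumptions chosen for demonstration, not the authors' approach.

```python
# Minimal sketch of the detection task: classify review text as
# human-written (0) or LLM-generated (1).
# NOTE: an illustrative baseline (TF-IDF + logistic regression),
# NOT the method proposed in the paper. All reviews below are
# invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (review text, label), 1 = LLM-generated.
reviews = [
    ("The experimental design is sound, but the ablations in Section 4 "
     "feel thin; I'd like to see variance across random seeds.", 0),
    ("This paper presents a novel and comprehensive framework that "
     "significantly advances the state of the art in every respect.", 1),
    ("Eq. 3 seems to double-count the regularizer; please clarify.", 0),
    ("Overall, the manuscript is well-written, well-organized, and makes "
     "a valuable contribution to the field.", 1),
]
texts, labels = zip(*reviews)

# Character n-grams are a common stylometric signal in AI-text detection.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Score an unseen review: estimated probability that it is LLM-generated.
new_review = "The authors propose an innovative approach with impressive results."
print(detector.predict_proba([new_review])[0][1])
```

In practice, a detector for this domain would be trained on a large corpus such as the paper's benchmark dataset rather than a handful of examples, and stronger features or models could replace the baseline shown here.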

For the security community, this work addresses an emerging threat to scientific integrity: AI-generated content masquerading as expert human evaluation. By providing tools to identify such content, it helps maintain trust in research validation mechanisms.

Original Paper: Is Your Paper Being Reviewed by an LLM? A New Benchmark Dataset and Approach for Detecting AI Text in Peer Review
