
Safeguarding Scientific Integrity
Detecting AI-Generated Peer Reviews in Academic Research
This research introduces a novel approach for detecting when peer reviews are generated by LLMs rather than written by human experts, helping preserve the integrity of the scientific validation process.
- Created a first-of-its-kind benchmark dataset of human-written and AI-generated peer reviews
- Developed new detection methods calibrated specifically for the peer review domain (a minimal illustrative sketch follows this list)
- Achieved high accuracy in distinguishing human-written from LLM-generated reviews
- Revealed concerning vulnerabilities in current peer review systems to LLM-based exploitation
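The summary does not describe the detection architecture, so the sketch below is only a hedged illustration of the general technique: a supervised classifier over stylistic text features, trained on a labeled corpus of human-written and LLM-generated reviews. The toy data, feature choices, and scikit-learn pipeline are all assumptions for illustration, not the authors' method.

```python
# Illustrative sketch of a supervised AI-review detector (NOT the paper's
# method). Assumes a labeled corpus: 1 = LLM-generated, 0 = human-written.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy examples standing in for the benchmark dataset.
reviews = [
    "This paper presents a comprehensive and well-structured analysis.",
    "Sec 4.2 is confusing - why does eq (7) drop the prior term?",
    "The manuscript offers valuable insights into an important problem.",
    "Baselines are weak; rerun with the splits from the original repo.",
]
labels = [1, 0, 1, 0]

# Word n-gram TF-IDF captures surface stylistic cues (hedging phrases,
# boilerplate politeness) that often distinguish LLM-generated text.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
detector.fit(reviews, labels)

# Estimated probability that a new review is LLM-generated.
print(detector.predict_proba(["The work is novel and clearly written."])[:, 1])
```

N-gram features with a linear classifier are a common baseline for authorship-style tasks; a detector calibrated for the peer review domain, as the work describes, would more plausibly fine-tune a transformer encoder on the benchmark dataset the paper introduces.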
For the security community, this work addresses an emerging threat to scientific integrity: it provides tools to identify AI-generated content masquerading as expert human evaluation, helping maintain trust in research validation mechanisms.