AI-Generated Content Detection
Research on distinguishing AI-generated content from human-created content for security, integrity, and authenticity verification

AI-Generated Content Detection
Research on Large Language Models in AI-Generated Content Detection

Detecting Human-Edited AI Content
Beyond Binary Classification: The First Benchmark for Hybrid Text Detection

The Reality Check on AI Text Detectors
Critical evaluation reveals limitations in detecting AI-generated content

Glimpse: Bridging the LLM Detection Gap
Enabling White-Box Methods to Use Proprietary LLMs for Superior Text Detection

Detecting AI-Generated Text
A Novel Approach Using Abstract Meaning Representation (AMR)

Detecting AI-Generated Academic Content
Evaluating the generalization and adaptation of LLM detection systems

Detecting AI-Generated Text with Statistical Precision
New Zero-Shot Methods for LLM Content Verification
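Zero-shot statistical detectors typically score a passage by how probable it looks under a reference language model — machine-generated text tends to be less "surprising" to the model that could have produced it. The sketch below illustrates the scoring idea with a hypothetical toy unigram model (`TOY_LM`, `OOV_P`, and the threshold are all illustrative stand-ins; a real detector would use per-token probabilities from an LLM and calibrate the threshold on held-out data).

```python
import math

# Hypothetical toy unigram "language model": word -> probability.
# A real zero-shot detector would query an LLM for per-token probabilities.
TOY_LM = {"the": 0.07, "model": 0.01, "generated": 0.005, "text": 0.008}
OOV_P = 1e-4  # fallback probability for words the toy model has not seen

def avg_log_likelihood(tokens):
    """Mean log-probability of the tokens under the toy model.

    Machine-generated text tends to score higher (less surprising)
    than human-written text under the generating model's distribution.
    """
    return sum(math.log(TOY_LM.get(t, OOV_P)) for t in tokens) / len(tokens)

def is_machine_like(tokens, threshold=-6.0):
    # Threshold is purely illustrative; real systems calibrate it.
    return avg_log_likelihood(tokens) > threshold
```

High-probability tokens push the average up, so a sequence of common, model-favored words is flagged, while rare or out-of-vocabulary words pull the score below the threshold.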

Detecting AI-Generated Text Through Semantic Analysis
A novel framework combining transformer architectures and ensemble techniques

Identifying Hidden LLMs: A Security Imperative
Detecting LLM fingerprints in black-box environments across languages and domains

Fighting Misinformation with AI-Assisted Verification
Using LLM summaries for efficient, high-quality truthfulness assessments

BounTCHA: The Next Generation of CAPTCHA Defense
Using AI-extended videos to combat AI-powered bots

Creative Chokepoints in AI Text
Identifying linguistic differences between human and AI writing

Smarter AI Text Detection
Optimizing detection thresholds for different content groups
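A single global threshold tends to over-flag some content groups (e.g., short posts or non-native writing) and under-flag others. One common remedy, sketched below with invented scores, is to calibrate a separate threshold per group so that each group's false-positive rate on known-human text stays under a target (the group names, scores, and `target_fpr` here are illustrative assumptions, not from any specific system).

```python
def calibrate_threshold(human_scores, target_fpr=0.1):
    """Pick a detector-score threshold for one content group so that
    at most target_fpr of known-human texts are flagged as AI.

    human_scores: detector scores on human-written texts in this group
    (higher score = more AI-like). Real systems would calibrate on large
    held-out sets per domain, language, or length bucket.
    """
    ranked = sorted(human_scores, reverse=True)
    k = int(len(ranked) * target_fpr)  # number of tolerated false positives
    # Flag only scores strictly above the k-th highest human score.
    return ranked[k] if k < len(ranked) else ranked[-1]

# Separate thresholds per group, e.g. long essays vs. short social posts.
groups = {
    "essays":      [0.1, 0.2, 0.3, 0.9, 0.15, 0.25, 0.05, 0.4, 0.35, 0.2],
    "short_posts": [0.5, 0.6, 0.55, 0.7, 0.95, 0.45, 0.65, 0.6, 0.5, 0.58],
}
thresholds = {g: calibrate_threshold(s) for g, s in groups.items()}
```

Because human scores in the short-post group run higher overall, its calibrated threshold lands well above the essay group's, which a single shared cutoff could not accommodate.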

Media Manipulation Detection in the AI Era
Evolution of detection strategies from traditional to multimodal approaches

Fighting Multimodal Misinformation
Using LLMs to verify media relevance in news stories

Interpretable AI Text Detection
Example-based approach for transparent machine text identification

Detecting AI-Generated Text
Advanced GLTR-based approach for identifying LLM content
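GLTR's core signal is the rank of each observed token in a reference model's predicted next-token distribution: sampled LLM text rarely strays from the head of that distribution, so a high fraction of top-ranked tokens suggests machine generation. A minimal sketch of that statistic, using invented rank sequences in place of real model output:

```python
def top_k_fraction(token_ranks, k=10):
    """Fraction of tokens whose rank in the reference model's
    next-token distribution falls within the top k.

    token_ranks: rank (1 = most likely) of each observed token under
    a reference language model. GLTR-style detectors treat a high
    top-k fraction as evidence of machine generation.
    """
    return sum(1 for r in token_ranks if r <= k) / len(token_ranks)

# Illustrative rank sequences (real ranks would come from an LM):
machine_like = [1, 2, 1, 4, 3, 1, 7, 2, 1, 5]        # stays in the top 10
human_like   = [3, 48, 1, 120, 15, 2, 300, 9, 60, 7]  # frequent deep ranks
```

Here every machine-like token sits inside the top 10, while half of the human-like ranks fall outside it, reproducing the gap GLTR visualizes with its color-coded histograms.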

How Author Profiles Impact AI Text Detection
Uncovering blind spots in current detection systems

Detecting AI-Generated Content with AI
Using LLMs to identify and explain their own text outputs

Defending Against AI Video Deception
Using Large Vision Language Models to Detect AI-Generated Videos

Preventing AI Model Collapse
How filtering machine-generated text from training data helps prevent model collapse

The Gray Zone of AI Detection
Uncovering the challenge of identifying AI-polished human writing

Robust AI Text Detection
A new approach using inverse prompts for reliable, explainable AI detection

Perfecting AI-Generated Text Detection
Achieving near-perfect detection accuracy with ensemble models

Detecting AI-Paraphrased Code Theft
New techniques to protect intellectual property in software

Safeguarding Scientific Integrity
Detecting AI-Generated Peer Reviews in Academic Research

Improving Fake News Detection
Beyond Token-Based Models: Creating More Generalizable Detection Systems

Detecting AI-Generated Korean Text
Novel Linguistic Feature Analysis for LLM Detection in Non-English Languages

AI-Generated Political Manipulation
Detecting LLM-crafted manipulative political content

Detecting AI's Hidden Signature
Identifying LLMs by their subtle stylistic fingerprints

Cracking the Code on AI-Generated Text
Using Sparse Autoencoders to Enhance Detection Interpretability

Combating Face Forgery with Multimodal AI
Multimodal LLMs for Detection, Localization, and Attribution of Synthetic Faces

Evading AI Detection: Security Vulnerabilities
How LLMs can be manipulated to bypass detection systems

Evading AI Text Detectors
A Comprehensive Benchmark for Evaluating Attack Methods

Detecting AI-Generated Text in Low-Resource Languages
First large-scale machine text detector for Hausa language

Detecting AI-Generated Code
Multi-Lingual, Multi-Generator, Multi-Domain Detection Framework

Enhancing Deepfake Detection with VLMs
Unlocking vision-language models for more accurate and explainable fake media detection

Detecting AI-Generated Images
FakeVLM: A large multimodal model that explains synthetic image artifacts

Combating AI Forgeries
A new benchmark for detecting AI-generated fake media

Detecting AI-Generated Text in Dialogues
A systematic framework for creating better AI detection models

Fighting AI Deception: LEGION
Advancing Synthetic Image Detection with Explainable AI

TruthLens: Unveiling AI-Generated Fakes
A training-free approach to interpretable deepfake detection

Combating LLM-Generated Peer Reviews
Detecting unauthorized AI use in academic reviewing

TruthLens: Beyond Binary DeepFake Detection
An explainable AI framework for identifying and characterizing synthetic media

The RLHF Double-Edged Sword
How AI alignment affects text quality and detectability

The Dark Side of LLMs: Authorship Attacks
How adversaries can mask writing styles or impersonate others

Detecting AI-Generated Images in the Wild
New benchmarks and methods for identifying diffusion-generated content

Harnessing Multi-modal LLMs for Deepfake Detection
Evaluating AI's ability to identify synthetic media

Smart Forgery Detection for AI Images
Overcoming domain gaps with reasoning-based detection

Detecting AI-Generated Text
Combining Natural Language Features for Enhanced Detection

Detecting AI-Generated Short Texts
Topological Analysis for Enhanced Security Against LLM Misuse

Detecting AI-Generated Content Across Modalities
Comprehensive approaches to identify and mitigate synthetic media

Tracing the Digital Fingerprints of AI
New methods to identify sources of AI-generated content

Detecting AI-Generated Content
A new benchmark for identifying open LLM outputs
