Detecting Disguised Toxic Content with AI

Using LLMs to Extract Effective Search Queries

QExplorer uses Large Language Models to automatically extract effective search queries for identifying disguised toxic content online.

  • Addresses the challenge of detecting harmful content that is intentionally obscured
  • Utilizes generative LLM capabilities to create targeted queries for content exploration
  • Improves security systems' ability to discover similar toxic content
  • Demonstrates practical application of LLMs for online safety and content moderation
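The query-extraction idea above can be sketched as a simple pipeline: prompt an LLM for candidate queries, then normalize and deduplicate them before sending them to a search backend. This is a minimal illustrative sketch, not the paper's method; the prompt wording, the `fake_llm` stand-in, and the `extract_queries` helper are all hypothetical.

```python
import json

# Illustrative prompt template (hypothetical; not from the QExplorer paper).
PROMPT = (
    "Given the following post, extract up to {k} short search queries that "
    "would surface similar disguised toxic content. Return a JSON list.\n"
    "Post: {post}"
)

def fake_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call, so the sketch is runnable.
    In practice this would be a request to an actual LLM endpoint."""
    return json.dumps(["cheap meds no script", "ch3ap m3ds", "cheap meds no script"])

def extract_queries(post: str, k: int = 3, llm=fake_llm) -> list[str]:
    """Ask the LLM for candidate queries, then normalize and deduplicate."""
    raw = llm(PROMPT.format(k=k, post=post))
    try:
        candidates = json.loads(raw)
    except json.JSONDecodeError:
        return []  # malformed LLM output: fail closed, return no queries
    seen, queries = set(), []
    for q in candidates:
        q = q.strip().lower()
        if q and q not in seen:  # drop blanks and exact duplicates
            seen.add(q)
            queries.append(q)
    return queries[:k]

print(extract_queries("Buy ch3ap m3ds, no script needed!!!"))
# → ['cheap meds no script', 'ch3ap m3ds']
```

The extracted queries would then feed an existing search or retrieval system, which is how obfuscated variants (e.g. leetspeak spellings) can be surfaced even when they evade exact keyword filters.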

Security Impact: This research provides security teams with more effective tools to identify and mitigate harmful content that traditional keyword-based systems might miss, enhancing online safety measures.

Original Paper: QExplorer: Large Language Model Based Query Extraction for Toxic Content Exploration
