SafeSpeech: Detecting Toxicity Across Conversations

Beyond message-level analysis to context-aware toxic language detection

SafeSpeech is a comprehensive platform that extends toxic content detection from isolated messages to whole conversations, surfacing subtler forms of harassment and abuse that only emerge in context.

Key Innovations:

  • Bridges the gap between message-level and conversation-level toxicity detection
  • Enables analysis of context-dependent harmful content that current systems miss
  • Provides an interactive tool for researchers to advance toxic language detection
  • Specifically identifies sexism, harassment, and abusive behavior in nuanced forms
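The core idea behind the innovations above is that a message which looks benign in isolation can be toxic in light of the preceding turns. The sketch below illustrates this with a deliberately toy keyword-based scorer; it is not SafeSpeech's actual model or pipeline (the paper's method is not described here), and the lexicon and function names are invented for illustration only.

```python
# Illustrative sketch (NOT SafeSpeech's actual method): contrasting
# message-level scoring with conversation-level scoring, where the
# message is scored jointly with its preceding turns.

HOSTILE_CUES = {"stupid", "shut", "idiot", "worthless"}  # toy lexicon, assumption


def message_level_score(message: str) -> float:
    """Score a single message in isolation (0..1, toy heuristic)."""
    tokens = message.lower().split()
    hits = sum(1 for t in tokens if t.strip(".,!?") in HOSTILE_CUES)
    return min(1.0, hits / max(1, len(tokens)) * 5)


def conversation_level_score(context: list[str], message: str) -> float:
    """Score the message together with its preceding conversational turns."""
    full_thread = " ".join(context + [message])
    return message_level_score(full_thread)


if __name__ == "__main__":
    context = ["You're so stupid, nobody wants you here.", "Just leave."]
    message = "Yeah, like I said before."
    # The final message alone carries no hostile cues...
    print(message_level_score(message))
    # ...but scoring it with its context reveals the hostile thread.
    print(conversation_level_score(context, message))
```

A real system would replace the keyword heuristic with a learned classifier over the concatenated (or otherwise encoded) conversation history, but the contrast is the same: the context-aware score flags what the message-level score misses.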

Security Implications: By detecting subtle and context-dependent forms of toxicity, SafeSpeech strengthens safety measures on platforms where harmful behavior often evades message-level detectors, helping protect users from more sophisticated forms of online harassment.

SafeSpeech: A Comprehensive and Interactive Tool for Analysing Sexist and Abusive Language in Conversations