Crossing Cultural Boundaries in Hate Speech Detection

First multimodal, multilingual hate speech dataset with multicultural annotations

Researchers developed Multi3Hate, a new dataset designed to evaluate how vision-language models (VLMs) handle hate speech across languages and cultural contexts.

  • First dataset combining multimodal (text+image), multilingual (5 languages), and multicultural annotation perspectives
  • Reveals how cultural backgrounds of moderators affect hate speech identification
  • Tests current VLMs' capabilities to detect harmful content across linguistic and cultural boundaries (a minimal prompting sketch follows this list)
  • Addresses critical security challenges for global content moderation platforms

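To make the evaluation setup concrete, the sketch below prompts an off-the-shelf open VLM to label a single meme as hateful or not. The choice of LLaVA-1.5 via Hugging Face transformers, the file name meme.png, and the prompt wording are illustrative assumptions, not the paper's exact models or protocol.

```python
# Minimal sketch: asking an open VLM whether a meme is hateful.
# Assumptions: LLaVA-1.5 as the example model, a local file "meme.png",
# and an illustrative prompt; none of these are taken from the paper itself.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # any chat-capable VLM could be swapped in

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt; a culture- or language-specific variant could ask the
# model to answer as an annotator from a particular country or in another language.
prompt = (
    "USER: <image>\n"
    "Does this meme contain hate speech directed at a group of people? "
    "Answer with exactly one word: Yes or No.\n"
    "ASSISTANT:"
)

image = Image.open("meme.png")  # hypothetical input meme
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)

output_ids = model.generate(**inputs, max_new_tokens=5, do_sample=False)
answer = processor.decode(output_ids[0], skip_special_tokens=True)
print(answer.split("ASSISTANT:")[-1].strip())  # e.g. "Yes" or "No"
```

Running the same image through prompts written in different languages, or with different cultural framings, is one way to probe the cross-lingual and cross-cultural gaps the dataset is built to measure.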
This research provides valuable insights for developing more effective, culturally aware content moderation systems, helping platforms better protect users across diverse global communities.

Multi3Hate: Multimodal, Multilingual, and Multicultural Hate Speech Detection with Vision-Language Models
