
Detecting Coded Islamophobia Online
Using LLMs to Identify and Analyze Extremist Language
This research leverages large language models (LLMs) to detect and analyze specialized, semi-coded Islamophobic terms on social platforms associated with extremist communities, such as 4chan, Gab, and Telegram.
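The summary does not specify the exact prompting setup, so the following is a minimal, hypothetical sketch of how an LLM might be prompted to flag semi-coded Islamophobic language in a single post. The model choice, prompt wording, label set, and the `classify_post` helper are all illustrative assumptions (here using the OpenAI Python SDK), not the study's actual configuration.

```python
# Hypothetical sketch: prompting an LLM to flag semi-coded Islamophobic
# language. The model, prompt, and label set are illustrative assumptions,
# not the study's actual configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a content-moderation assistant. Decide whether a post uses "
    "coded or semi-coded Islamophobic language (e.g., deliberately "
    "obfuscated slurs). Answer with exactly one label: ISLAMOPHOBIC, "
    "AMBIGUOUS, or BENIGN."
)

def classify_post(text: str) -> str:
    """Return the model's one-word label for a single post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        temperature=0,         # deterministic labeling
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_post("Example post text goes here."))
```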
Key Insights:
- LLMs are effective at identifying subtle, coded hate speech that evades traditional detection methods
- Researchers tracked specialized terms (e.g., "muzrat", "pislam") used specifically to spread Islamophobic content
- Analysis reveals patterns in how extremist language evolves and spreads across platforms (see the frequency-tracking sketch after this list)
- The approach offers security professionals new methods for combating online radicalization
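As a rough illustration of the evolution-and-spread analysis mentioned above, the sketch below counts monthly occurrences of tracked terms per platform. The record format and toy data are assumptions made for the example; they are not the study's actual corpus or schema.

```python
# Illustrative sketch: tracking how often semi-coded terms appear per
# platform per month. The records below are toy data, not the study's corpus.
from collections import Counter
from datetime import date

TRACKED_TERMS = ["muzrat", "pislam"]  # terms named in the summary above

# Each record: (post date, platform, lowercased post text)
records = [
    (date(2023, 1, 14), "4chan",    "example post mentioning muzrat"),
    (date(2023, 1, 20), "gab",      "another post with pislam in it"),
    (date(2023, 2, 3),  "telegram", "post repeating muzrat twice muzrat"),
]

counts = Counter()
for posted, platform, text in records:
    month = posted.strftime("%Y-%m")
    for term in TRACKED_TERMS:
        n = text.count(term)
        if n:
            counts[(month, platform, term)] += n

# Print a simple (month, platform, term) -> count table, sorted by month,
# to see when each term first appears on each platform.
for (month, platform, term), n in sorted(counts.items()):
    print(f"{month}  {platform:<8}  {term:<8}  {n}")
```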
Security Implications:
This research directly addresses growing concerns about online extremism by providing tools to identify coded language that traditional content moderation might miss. In doing so, it can help prevent radicalization and protect online communities.
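One way such tools could slot into an existing moderation workflow is sketched below: a cheap keyword prefilter handles known coded terms, and the LLM classifier from the earlier sketch is consulted for novel or obfuscated variants. The pipeline, term set, and action labels are hypothetical assumptions, not a workflow described by the study.

```python
# Hypothetical moderation pipeline: a cheap keyword prefilter acts on known
# coded terms, escalating other posts to the LLM. Reuses classify_post from
# the earlier sketch; all names and actions here are illustrative.
KNOWN_CODED_TERMS = {"muzrat", "pislam"}

def moderate(post: str) -> str:
    """Return 'remove', 'review', or 'allow' for a single post."""
    lowered = post.lower()
    if any(term in lowered for term in KNOWN_CODED_TERMS):
        return "remove"              # explicit known term: act directly
    label = classify_post(post)      # LLM catches novel/obfuscated variants
    if label == "ISLAMOPHOBIC":
        return "remove"
    if label == "AMBIGUOUS":
        return "review"              # queue for a human moderator
    return "allow"
```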
Source paper: "Analyzing Islamophobic Discourse Using Semi-Coded Terms and LLMs"