
Credibility Detection with LLMs
Using AI to identify trustworthy visual content
This research introduces a novel framework that uses Large Language Models (LLMs) to better predict and explain the credibility of visual content shared on social media.
- Leverages multimodal LLMs (such as GPT-4o) to evaluate content credibility and explain their reasoning
- Extracts and quantifies interpretable features from AI-generated explanations
- Creates more accurate predictive models while maintaining human interpretability (see the pipeline sketch after this list)
- Provides a promising approach for misinformation detection and security in visual media
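As a concrete illustration, the sketch below shows one way such an explain-then-quantify pipeline could be wired up: prompt a multimodal LLM for a credibility judgment with free-text reasoning, convert each explanation into named, interpretable features, and fit a transparent classifier on top. This is a minimal sketch under stated assumptions: the OpenAI Python client, the `gpt-4o` model name, the prompt wording, the `CUES` list, and the `train` helper are all illustrative choices, not the paper's exact implementation.

```python
# Minimal sketch of the explain-then-quantify pipeline (assumptions noted above).
from openai import OpenAI
from sklearn.linear_model import LogisticRegression

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt; the paper's actual prompt is not shown here.
PROMPT = (
    "Rate the credibility of this image on a 0-10 scale, then explain your "
    "reasoning: signs of manipulation, source cues, caption consistency."
)

def explain_credibility(image_url: str) -> str:
    """Ask a multimodal LLM for a credibility judgment plus free-text reasoning."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content

# Hypothetical interpretable features: does the explanation mention each cue?
CUES = ["manipulat", "inconsist", "watermark", "stock photo", "out of context"]

def featurize(explanation: str) -> list[int]:
    """Quantify a free-text explanation as binary cue-mention features."""
    text = explanation.lower()
    return [int(cue in text) for cue in CUES]

def train(image_urls: list[str], labels: list[int]) -> LogisticRegression:
    """Fit an interpretable classifier; labels: 1 = credible, 0 = not."""
    X = [featurize(explain_credibility(url)) for url in image_urls]
    return LogisticRegression().fit(X, labels)
```

Because the downstream classifier sees only named cue features, its learned coefficients stay human-readable, which is the interpretability property the framework emphasizes.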
This work matters for security professionals: it offers AI-powered tools to flag potentially misleading content, helping protect users from visual misinformation in an increasingly complex media landscape.