
AI-Powered Credibility Detection
Using LLMs to Improve Visual Content Trust Assessment
This research introduces a framework that leverages multimodal large language models (such as GPT-4o) to predict and interpret how people judge the credibility of visual content.
- Developed an LLM-informed feature discovery approach that identifies and quantifies key visual credibility indicators (see the first sketch below)
- Improved prediction accuracy for human credibility judgments of visual content (see the second sketch below)
- Provided interpretable insights into the reasoning behind credibility assessments
- Demonstrated practical applications for misinformation detection and security enhancement
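To make the feature-discovery step concrete, here is a minimal sketch of how a multimodal LLM can be prompted to score candidate credibility indicators for an image. It assumes the OpenAI Python SDK and the GPT-4o vision model; the indicator names, prompt wording, and the `score_indicators` helper are illustrative assumptions, not the exact method used in this research.

```python
# A minimal sketch of LLM-informed feature discovery, assuming the OpenAI
# Python SDK and GPT-4o. Indicator list and prompt are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical credibility indicators surfaced during feature discovery.
INDICATORS = [
    "source_attribution",       # is a source, logo, or watermark visible?
    "image_quality",            # compression artifacts, odd lighting
    "text_image_consistency",   # does overlaid text match the scene?
    "manipulation_cues",        # cloning, splicing, warped geometry
]

def score_indicators(image_url: str) -> dict[str, float]:
    """Ask the model to rate each indicator from 0 (absent) to 1 (strong)."""
    prompt = (
        "Rate this image on each credibility indicator from 0 to 1 and "
        "return JSON mapping indicator name to score: "
        + ", ".join(INDICATORS)
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)
```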
This innovation is particularly valuable for security applications: it offers more effective tools to flag potentially misleading visual content and to protect users from misinformation at scale.
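Building on the sketch above, the indicator scores can serve as interpretable features for predicting mean human credibility ratings. Ridge regression is one plausible choice, shown here purely for illustration; the research's actual predictive model, dataset, and rating scale are not specified in this summary, and all numbers below are made up.

```python
# A minimal sketch of the prediction step, assuming LLM-derived indicator
# scores (rows = images) and mean human credibility ratings on a 1-7 scale.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Illustrative values only: columns match the INDICATORS list above.
feature_matrix = np.array([
    [0.9, 0.7, 0.8, 0.1],
    [0.2, 0.4, 0.3, 0.8],
    [0.6, 0.9, 0.7, 0.2],
    [0.1, 0.3, 0.2, 0.9],
])
human_ratings = np.array([5.8, 2.4, 5.1, 1.9])  # mean credibility judgments

model = Ridge(alpha=1.0)
# Cross-validated fit quality on the toy data.
cv_scores = cross_val_score(model, feature_matrix, human_ratings, cv=2)

model.fit(feature_matrix, human_ratings)
names = ["source_attribution", "image_quality",
         "text_image_consistency", "manipulation_cues"]
for name, coef in zip(names, model.coef_):
    print(f"{name}: {coef:+.2f}")
```

The coefficients are what make this approach useful for explanation as well as prediction: each weight indicates how strongly a given indicator pushes predicted credibility up or down.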