Human-Centered XAI Evaluation

Using AI-Generated Personas to Assess Explainable AI Systems

VirtualXAI is a novel framework that transforms XAI evaluation by leveraging GPT-generated personas to assess how well AI explanations work for diverse users.

Key Innovations:

  • Creates diverse synthetic user personas to test XAI methods across different user needs and backgrounds
  • Enables systematic assessment of explanation quality from multiple user perspectives (see the sketch after this list)
  • Provides a user-centric approach to XAI evaluation rather than purely technical metrics
  • Bridges the gap between technical XAI capabilities and real-world user requirements
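As a rough illustration of how such persona-based assessment could work, the sketch below asks an LLM to role-play each synthetic persona and rate a single explanation. The persona descriptions, prompts, rating scale, and model name are illustrative assumptions rather than the published VirtualXAI protocol; the code assumes the OpenAI Python SDK with an API key available in the environment.

```python
# Hypothetical sketch: scoring one XAI explanation from multiple GPT-generated
# persona perspectives. Personas, prompts, and the 1-5 scale are illustrative
# assumptions, not the published VirtualXAI protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = [
    "a domain expert (clinician) with no machine-learning background",
    "a data scientist auditing the model for regulatory compliance",
    "an end user affected by the model's decision, with low technical literacy",
]

def rate_explanation(explanation: str, persona: str, model: str = "gpt-4o") -> str:
    """Ask the model, role-playing one persona, to rate an explanation from 1 to 5."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Judge explanations only from that perspective."},
            {"role": "user",
             "content": ("Rate the following model explanation from 1 (useless to me) "
                         "to 5 (fully meets my needs), then justify your rating briefly.\n\n"
                         f"Explanation: {explanation}")},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    shap_summary = "Feature 'age' contributed +0.32 and 'income' -0.11 to the rejection."
    for persona in PERSONAS:
        print(f"--- {persona} ---")
        print(rate_explanation(shap_summary, persona))
```

Aggregating such per-persona ratings across explanation methods would yield the kind of user-centric comparison the framework aims for, complementing purely technical metrics.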

Security Relevance: This approach directly addresses AI trustworthiness and transparency concerns, which are critical for secure AI systems in sensitive domains. By ensuring explanations work for diverse users, organizations can better manage the risks associated with opaque AI decision-making.

VirtualXAI: A User-Centric Framework for Explainability Assessment Leveraging GPT-Generated Personas
