Privacy Assessment for Vision-Language AI

A multi-perspective benchmark for evaluating privacy risks in LVLMs

This research introduces Multi-P²A, a comprehensive benchmark for evaluating privacy preservation capabilities in Large Vision-Language Models (LVLMs).

  • Assesses both privacy awareness and privacy leakage across multiple dimensions
  • Evaluates models across personal privacy, trade secrets, and state secrets categories
  • Provides a standardized framework to identify and mitigate privacy vulnerabilities in vision-language AI systems

This benchmark helps security practitioners and AI developers assess privacy risks before deployment, supporting compliance efforts and the development of more trustworthy AI systems that protect sensitive information.

Multi-P²A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models
