LVLM Privacy Assessment Benchmark

A multi-perspective approach to evaluating privacy risks in vision-language models

The Multi-P²A benchmark introduces a comprehensive framework for assessing privacy vulnerabilities in Large Vision-Language Models (LVLMs), addressing critical gaps in current evaluation methods.

  • Evaluates models along two dimensions: privacy awareness and privacy leakage
  • Covers multiple privacy categories including personal information, trade secrets, and state secrets
  • Provides standardized assessment metrics to quantify privacy preservation performance (illustrated in the sketch after this list)
  • Enables systematic comparison of privacy protections across different LVLM architectures
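
Multi-P²A's exact scoring formulas are not reproduced here; as a rough illustration, the Python sketch below shows how refusal-rate-style awareness and leakage-rate metrics might be aggregated from judged model responses. EvalRecord, privacy_scores, and all field names are hypothetical stand-ins under these assumptions, not the benchmark's actual API.

```python
from dataclasses import dataclass


@dataclass
class EvalRecord:
    """One benchmark query paired with a judged model response.

    `should_refuse` marks privacy-sensitive queries the model ought to decline;
    `refused` and `leaked` would come from a downstream judge (human or
    LLM-based). All fields are illustrative, not part of Multi-P²A.
    """
    should_refuse: bool
    refused: bool
    leaked: bool  # response revealed protected content


def privacy_scores(records: list[EvalRecord]) -> dict[str, float]:
    """Aggregate two illustrative metrics: privacy awareness (refusal rate
    on sensitive queries) and leakage rate (fraction of all responses that
    exposed protected content)."""
    sensitive = [r for r in records if r.should_refuse]
    awareness = sum(r.refused for r in sensitive) / max(len(sensitive), 1)
    leakage = sum(r.leaked for r in records) / max(len(records), 1)
    return {"privacy_awareness": awareness, "leakage_rate": leakage}


if __name__ == "__main__":
    demo = [
        EvalRecord(should_refuse=True, refused=True, leaked=False),
        EvalRecord(should_refuse=True, refused=False, leaked=True),
        EvalRecord(should_refuse=False, refused=False, leaked=False),
    ]
    print(privacy_scores(demo))
    # {'privacy_awareness': 0.5, 'leakage_rate': 0.333...}
```

Per-category scores (e.g., personal privacy vs. trade secrets) could be obtained by filtering the records before aggregation, which is what makes cross-architecture comparisons on a common scale possible.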

This research is crucial for security applications: it helps identify and mitigate potential privacy breaches before LVLMs are deployed in sensitive environments, establishing essential guardrails for responsible AI development.

Multi-P²A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models
