
LLMs in Privacy Policy Assessment
Balancing Automation with Explanation Quality
This research examines the challenges of using Large Language Models (LLMs) to automate privacy policy evaluation while keeping the resulting explanations trustworthy and useful.
- LLMs show promise for automating privacy assessments but face issues with explanation quality and consistency
- Hallucinations and inconsistent outputs create significant risks in security and privacy contexts (a consistency-probing sketch follows this list)
- The research uses PRISMe, an interactive privacy assessment tool, as a case study
- Future work needs robust explanation-quality metrics and human-centered evaluation methods
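The inconsistency risk can be made concrete with a small self-consistency probe: ask the model the same question several times and measure how often its answers agree. The sketch below is not from the paper or from PRISMe; `query_llm` is a hypothetical placeholder for whatever chat-completion client is in use, and the LOW/MEDIUM/HIGH labels are assumptions made for the example.

```python
# Minimal sketch (assumption, not the paper's method): probe an
# LLM-based privacy assessor for self-consistency by sampling
# repeated judgments of the same policy clause.
from collections import Counter

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call.
    Replace with your provider's client; expected to return
    a one-word risk label."""
    raise NotImplementedError("wire up an actual LLM client here")

def consistency_check(policy_excerpt: str, n_samples: int = 5) -> float:
    """Ask the same question n times and return the agreement rate of
    the modal answer. Low agreement signals the inconsistency risk
    flagged for security and privacy explanations."""
    prompt = (
        "Rate the privacy risk of this policy clause as LOW, MEDIUM, "
        "or HIGH. Answer with one word only.\n\n" + policy_excerpt
    )
    answers = [query_llm(prompt).strip().upper() for _ in range(n_samples)]
    _label, count = Counter(answers).most_common(1)[0]
    return count / n_samples  # 1.0 = fully consistent across samples
```

Repeated sampling is a cheap first signal: it catches unstable judgments, though it cannot detect a hallucination the model repeats confidently across samples.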
For security professionals, this research highlights the critical balance between leveraging AI automation and maintaining explanation integrity in contexts where user trust is essential.
Source paper: Explainable AI in Usable Privacy and Security: Challenges and Opportunities