
Trust in AI: The Double-Edged Sword
Building well-calibrated trust for LLMs in Software Engineering
This research examines how to calibrate trust when integrating Large Language Models (LLMs) into software engineering workflows.
- Excessive trust can let security vulnerabilities and other unchecked risks slip through
- Insufficient trust can hinder adoption of beneficial AI tools and slow innovation
- Well-calibrated trust is essential as LLMs become integral to critical development processes
- Engineering impact: Provides a framework for responsibly integrating AI assistants into software development while managing the associated risks (a minimal illustration follows this list)
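
To make "well-calibrated trust" concrete, here is a minimal, hypothetical sketch of a trust-calibration gate: an LLM suggestion is auto-accepted only when both its reported confidence and an independent check agree, and is otherwise routed to a human. All names here (Suggestion, run_static_checks, CONFIDENCE_THRESHOLD) are illustrative assumptions, not part of the paper's framework.

```python
"""Sketch of a trust-calibration gate for LLM code suggestions.

Assumption: the model exposes a confidence score in [0, 1]; the names
and threshold below are hypothetical, chosen only for illustration.
"""
from dataclasses import dataclass

# Hypothetical threshold: suggestions below it always go to a human.
CONFIDENCE_THRESHOLD = 0.9


@dataclass
class Suggestion:
    code: str
    confidence: float  # model-reported confidence in [0, 1]


def run_static_checks(code: str) -> bool:
    """Placeholder for real verification (tests, linters, security scans)."""
    return "eval(" not in code  # toy check: reject an obviously risky pattern


def route(suggestion: Suggestion) -> str:
    """Calibrated trust: automate only when confidence AND checks agree."""
    checks_pass = run_static_checks(suggestion.code)
    if checks_pass and suggestion.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-accept"   # high trust, independently verified
    if checks_pass:
        return "human-review"  # plausible, but confidence is too low
    return "reject"            # failed verification, regardless of confidence


if __name__ == "__main__":
    print(route(Suggestion("def add(a, b):\n    return a + b", 0.95)))  # auto-accept
    print(route(Suggestion("eval(user_input)", 0.99)))                  # reject
```

The design mirrors the bullets above: independent verification guards against excessive trust (high confidence alone is never sufficient), while the auto-accept path keeps low-risk suggestions flowing so that insufficient trust does not stall adoption.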
Mapping the Trust Terrain: LLMs in Software Engineering -- Insights and Perspectives