
Proactive Safety Engineering for ML Systems
Using LLMs to support hazard identification and mitigation in ML-powered applications
This research demonstrates how large language models can enhance traditional safety engineering methodologies for ML systems, enabling proactive hazard identification and mitigation.
- Combines proven safety approaches, Failure Mode and Effects Analysis (FMEA) and System-Theoretic Process Analysis (STPA), with LLM capabilities to identify potential failures in ML systems (a minimal prompting sketch follows this list)
- Proposes a systematic framework that extends from hazard identification to controller design
- Demonstrates how LLMs can support engineers in anticipating ML-specific risks before deployment
- Provides practical techniques for implementing safety controls in ML-powered applications (see the controller sketch at the end of this page)
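
To make the hazard-identification step concrete, here is a minimal sketch of using an LLM to draft FMEA-style failure modes for an ML component. It assumes the OpenAI Python SDK and a model that returns bare JSON; the prompt wording, model name, and `FailureMode` fields are illustrative assumptions, not the paper's exact framework.

```python
# Hedged sketch: prompt an LLM to draft FMEA worksheet entries for an
# ML component, to be reviewed and corrected by a human engineer.
import json
from dataclasses import dataclass
from openai import OpenAI


@dataclass
class FailureMode:
    component: str
    failure_mode: str
    effect: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    mitigation: str


# Illustrative prompt; a real worksheet would also ask for occurrence
# and detection ratings to compute a risk priority number.
FMEA_PROMPT = """You are assisting with an FMEA for an ML system.
Component under analysis: {component}
Intended function: {function}

List plausible failure modes. Respond with only a JSON array of objects
with keys: failure_mode, effect, severity (1-10), mitigation."""


def draft_failure_modes(component: str, function: str) -> list[FailureMode]:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{
            "role": "user",
            "content": FMEA_PROMPT.format(component=component,
                                          function=function),
        }],
    )
    # Assumes the model complied and returned bare JSON; production code
    # would validate the schema and retry on parse failure.
    rows = json.loads(response.choices[0].message.content)
    return [FailureMode(component=component, **row) for row in rows]


if __name__ == "__main__":
    # Seed the worksheet for a fraud-detection classifier, then have an
    # engineer prune, correct, and extend the drafted entries.
    for fm in draft_failure_modes("fraud-detection classifier",
                                  "flag suspicious transactions for review"):
        print(f"[sev {fm.severity}] {fm.failure_mode}: {fm.mitigation}")
```

The point of the sketch is division of labor: the LLM broadens the candidate hazard list cheaply, while severity judgments and final entries remain with the engineer.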
This work matters because it bridges the gap between traditional safety engineering and modern ML development, offering a structured approach to building safer AI systems and preventing harm to individuals and society.
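
For the controller-design end of the framework, the sketch below shows one way a runtime safety control in the STPA spirit might look: a controller that gates an ML model's output against explicit constraints and escalates low-confidence cases. The class and parameter names (`SafeDecisionController`, `min_confidence`, the thresholds) are illustrative assumptions, not the paper's design.

```python
# Hedged sketch: an STPA-style controller that constrains what actions
# an ML model's predictions are allowed to trigger automatically.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    action: str          # "approve", "deny", or "escalate"
    score: float
    rationale: str


class SafeDecisionController:
    """Gates an ML model's output against explicit safety constraints."""

    def __init__(self, model: Callable[[dict], float],
                 min_confidence: float = 0.9,
                 deny_threshold: float = 0.5):
        self.model = model                    # returns a score in [0, 1]
        self.min_confidence = min_confidence  # assumed operating point
        self.deny_threshold = deny_threshold

    def decide(self, case: dict) -> Decision:
        score = self.model(case)
        # Crude confidence proxy: distance from the decision boundary.
        confidence = abs(score - 0.5) * 2
        if confidence < self.min_confidence:
            # Hazard mitigation: low-confidence predictions are never
            # acted on automatically; route them to a human reviewer.
            return Decision("escalate", score, "confidence below threshold")
        if score >= self.deny_threshold:
            return Decision("deny", score, "model flagged the case")
        return Decision("approve", score, "model cleared the case")


# Usage with a stand-in model: score 0.62 gives confidence 0.24, so the
# controller escalates rather than acting on the prediction.
controller = SafeDecisionController(model=lambda case: 0.62)
print(controller.decide({"amount": 950}))
```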