
Enhancing Test Automation with AI
Leveraging LLMs and Case-Based Reasoning for Functional Test Script Generation
This research introduces a novel case-based reasoning (CBR) system that enhances large language models' ability to generate accurate functional test scripts for evolving software.
- Implements a 4R cycle (retrieve, reuse, revise, retain) to maintain and draw on a library of past test cases (see the sketch after this list)
- Demonstrates how LLMs can reason about evolving code structures when generating tests
- Provides a systematic approach to improving test automation efficiency
- Reduces the manual effort required in software quality assurance
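
The 4R cycle can be pictured as a thin loop around the LLM call. The sketch below is illustrative only: the `TestCase`, `CaseBase`, `llm_generate`, and `run_tests` names are assumptions for the example, not interfaces from the paper, and the retrieval step uses a simple textual similarity measure as a stand-in for whatever the system actually uses.

```python
# Minimal sketch of a 4R (retrieve, reuse, revise, retain) CBR loop around an
# LLM-backed test generator. All names here are illustrative assumptions.
from dataclasses import dataclass, field
from difflib import SequenceMatcher
from typing import Callable, List


@dataclass
class TestCase:
    """A stored case: a code-change description paired with its test script."""
    change_summary: str
    test_script: str


@dataclass
class CaseBase:
    cases: List[TestCase] = field(default_factory=list)

    def retrieve(self, change_summary: str, k: int = 3) -> List[TestCase]:
        # Retrieve: rank past cases by rough textual similarity to the new change.
        ranked = sorted(
            self.cases,
            key=lambda c: SequenceMatcher(None, c.change_summary, change_summary).ratio(),
            reverse=True,
        )
        return ranked[:k]

    def retain(self, case: TestCase) -> None:
        # Retain: store the validated case for future retrieval.
        self.cases.append(case)


def generate_test_script(
    change_summary: str,
    case_base: CaseBase,
    llm_generate: Callable[[str], str],  # caller-supplied LLM call
    run_tests: Callable[[str], bool],    # caller-supplied test runner
) -> str:
    # Reuse: build a prompt that reuses the most similar past test scripts.
    prompt = "Write a functional test script for this change:\n" + change_summary
    for ex in case_base.retrieve(change_summary):
        prompt += (
            f"\n\nSimilar past change:\n{ex.change_summary}"
            f"\nIts test script:\n{ex.test_script}"
        )
    script = llm_generate(prompt)

    # Revise: if the generated script fails, feed it back for one repair attempt.
    if not run_tests(script):
        script = llm_generate(prompt + "\n\nThe previous attempt failed; revise it:\n" + script)

    # Retain: keep the (revised) script only if it now passes.
    if run_tests(script):
        case_base.retain(TestCase(change_summary, script))
    return script
```

In practice the retrieval step would likely use a richer similarity signal (for example, embeddings of the code change) and the revise step could iterate more than once; the point of the sketch is that each of the four Rs maps to one clearly separable operation around the LLM.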
For engineering teams, this approach offers a practical path to automating test script creation while adapting to changing codebases, potentially reducing QA time and improving software reliability.