
Testing ChatGPT's Engineering Problem-Solving Skills
Comparing AI vs. Human Performance on Engineering Statics Problems
This study evaluates how well advanced large language models (ChatGPT-4o and ChatGPT-o1-preview) can solve complex engineering statics problems compared with first-year engineering students.
Key Findings:
- Assessed AI reliability on multi-step statics problems ranging from basic applications of Newton's laws to complex beam and truss analyses (a representative problem of this type is sketched after this list)
- Compared AI performance with that of typical first-year engineering students on statics exams
- Developed specific strategies to enhance the accuracy of AI-generated solutions
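To make the problem class concrete, here is a minimal sketch of the kind of multi-step statics calculation referenced above: finding the support reactions of a simply supported beam under a single point load using force and moment equilibrium. The specific beam, load values, and the function name beam_reactions are illustrative assumptions, not items from the study's problem set.

```python
# Hypothetical example of a basic statics problem: a simply supported beam of
# length L carrying a point load P at distance a from the left support.
# Reactions follow from static equilibrium (sum of forces = 0, sum of moments = 0).

def beam_reactions(length_m: float, load_n: float, load_pos_m: float) -> tuple[float, float]:
    """Return (R_left, R_right) support reactions in newtons."""
    # Moment balance about the left support: R_right * L - P * a = 0
    r_right = load_n * load_pos_m / length_m
    # Vertical force balance: R_left + R_right - P = 0
    r_left = load_n - r_right
    return r_left, r_right

if __name__ == "__main__":
    # Example: 6 m beam, 10 kN load applied 2 m from the left support
    r_left, r_right = beam_reactions(6.0, 10_000.0, 2.0)
    print(f"R_left = {r_left:.0f} N, R_right = {r_right:.0f} N")
    # Expected: R_left ≈ 6667 N, R_right ≈ 3333 N
```

Problems of this kind require several dependent steps (free-body diagram, moment balance, force balance), which is why the study treats them as a useful test of step-by-step AI reasoning rather than single-formula recall.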
Why It Matters: This research helps engineering educators and professionals understand the current capabilities and limitations of AI tools for solving fundamental engineering problems, providing insights into how these technologies might augment engineering work and education.