Order Bias in LLMs for Code Analysis

How input arrangement impacts fault localization accuracy

This research shows that the fault-localization accuracy of Large Language Models depends significantly on the order in which code methods are presented in the input.

  • LLMs show 10-30% performance variation based solely on input ordering
  • Placing faulty methods earlier in context dramatically improves detection rates
  • Limited context windows create a critical handicap when the faulty code appears late in the input
  • Strategic code ordering can be exploited to enhance model effectiveness
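The last point above can be sketched in code: rank methods by some suspiciousness heuristic and place the most suspicious ones first, so they survive context-window truncation. The scoring values and method snippets below are illustrative assumptions, not data from the study.

```python
# A minimal sketch of suspiciousness-first ordering before prompting an LLM.
# Scores and methods here are hypothetical placeholders.

def order_by_suspiciousness(methods, scores):
    """Sort methods so the most suspicious (likely faulty) come first,
    keeping them inside the model's limited context window."""
    return [m for m, _ in sorted(zip(methods, scores),
                                 key=lambda pair: pair[1], reverse=True)]

def build_prompt(methods, budget=2):
    """Concatenate only the first `budget` methods to mimic a context limit."""
    return "\n\n".join(methods[:budget])

# In the original order, the faulty method would be truncated away;
# after reordering by score, it lands at the front of the prompt.
methods = ["def a(): ...", "def b(): ...", "def faulty(): ..."]
scores = [0.1, 0.3, 0.9]  # e.g. spectrum-based suspiciousness estimates
ordered = order_by_suspiciousness(methods, scores)
prompt = build_prompt(ordered)
```

Any ranking signal (test coverage, stack traces, spectrum-based scores) could stand in for `scores`; the point is only that the arrangement step is cheap and requires no model retraining.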

This work matters for software engineering teams: it demonstrates that simple input restructuring can substantially improve automated debugging performance without model retraining or additional compute.

The Impact of Input Order Bias on Large Language Models for Software Fault Localization
