Detecting Out-of-Scope Questions in LLMs

New resources to prevent AI hallucinations when questions seem relevant

This research introduces ELOQ, a method that helps LLMs recognize questions that appear related to the available information but cannot actually be answered from it.

  • Creates datasets specifically for training LLMs to identify out-of-scope questions
  • Applies guided hallucination techniques to efficiently generate challenging test cases (a minimal generation sketch follows this list)
  • Provides evaluation frameworks for measuring model performance in confusion detection
  • Especially valuable for security applications where preventing misinformation is critical
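
As a rough illustration of the guided-generation idea, the sketch below asks an LLM for questions that sound on-topic for a passage but cannot be answered from it. The prompt wording, function name, and the `complete` callable are assumptions made for illustration, not the paper's exact procedure.

```python
from typing import Callable, List

def generate_out_of_scope_questions(
    passage: str,
    complete: Callable[[str], str],  # any function that sends a prompt to an LLM and returns text
    n_questions: int = 3,
) -> List[str]:
    """Ask an LLM for questions that sound on-topic for `passage`
    but cannot be answered from it (guided-hallucination-style generation)."""
    # Prompt wording is illustrative; ELOQ's actual prompts may differ.
    prompt = (
        f"Read the passage below. Write {n_questions} questions that sound "
        "closely related to its topic but CANNOT be answered using only the passage.\n\n"
        f"Passage:\n{passage}\n\n"
        "Return one question per line."
    )
    response = complete(prompt)
    return [line.strip() for line in response.splitlines() if line.strip()]
```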

This work addresses a critical gap in LLM safety by focusing on subtle cases where questions seem answerable but lack sufficient information, helping prevent hallucinations in high-stakes environments like security, customer support, and education.
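
To make the confusion-detection evaluation concrete, a minimal scoring sketch might compute precision and recall of a model's out-of-scope flags against labeled questions. The boolean labeling scheme and function name below are illustrative assumptions, not ELOQ's exact metrics.

```python
from typing import List, Tuple

def confusion_detection_metrics(
    predictions: List[bool],  # True = model flagged the question as out-of-scope
    labels: List[bool],       # True = question truly is out-of-scope
) -> Tuple[float, float]:
    """Return (precision, recall) for out-of-scope detection."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum((not p) and l for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```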

ELOQ: Resources for Enhancing LLM Detection of Out-of-Scope Questions
