TruthPrInt: Mitigating LVLM Object Hallucination Via Latent Truthful-Guided Pre-Intervention

By Jinhao Duan, Fei Kong, et al.

Abstract:

Object Hallucination (OH) has been acknowledged as one of the major trustworthy challenges in Large Vision-Language Models (LVLMs). Recent advancements in Large Language Models (LLMs) indicate that internal states, such as hidden states, encode the "overall truthfulness" of generated responses. Howe...

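The abstract's key premise is that a model's internal states (e.g., hidden states) carry a signal about whether a generated response is truthful. Below is a minimal sketch of that idea, not the paper's implementation: a linear probe trained to predict truthfulness from per-response hidden-state vectors. The data here is synthetic, the labels are hypothetical (1 = truthful, 0 = object hallucination), and the hidden size is only assumed to be LVLM-like.

```python
# Sketch only: linear probe on (synthetic) hidden states to score truthfulness.
# Everything below is illustrative, not TruthPrInt's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_dim = 4096        # assumed LVLM hidden size
n_responses = 2000

# Stand-in for hidden states collected at generation time.
hidden_states = rng.normal(size=(n_responses, hidden_dim)).astype(np.float32)
truthful = rng.integers(0, 2, size=n_responses)   # hypothetical labels

# Inject a weak "truthfulness direction" so the probe has a signal to recover,
# mimicking the claim that hidden states encode overall truthfulness.
direction = rng.normal(size=hidden_dim).astype(np.float32)
hidden_states += np.outer(truthful * 2 - 1, direction) * 0.05

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, truthful, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000)   # simple linear probe
probe.fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")

# At decoding time, probe.predict_proba(h)[0, 1] could serve as a truthfulness
# score used to flag or steer generation before hallucinated objects are emitted,
# which is roughly the "pre-intervention" idea the title refers to.
```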
Key points:

  • Research on large vision-language models (LVLMs) and object hallucination
  • Trustworthiness application: detecting and mitigating hallucinated outputs

Source: TruthPrInt: Mitigating LVLM Object Hallucination Via Latent Truthful-Guided Pre-Intervention
