Trust, Reliability, and Hallucination Mitigation

Research on addressing hallucinations, improving trustworthiness, and ensuring reliable outputs from LLMs

This presentation covers 139 research papers on trust, reliability, and hallucination mitigation in large language models.
