A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection

Authors

Shiping Yang, Renliang Sun, Xiaojun Wan

Abstract

Large Language Models (LLMs) have demonstrated their ability to collaborate effectively with humans in real-world scenarios. However, LLMs are prone to generating hallucinations, i.e., fabricating incorrect text and unverified information, which can cause significant damage when they are deployed for mission-critical tasks. In this paper, we propose a self-check approach based on reverse validation to detect factual errors automatically in a zero-resource fashion. To facilitate future studies and assess different methods, we construct a hallucination detection benchmark, which is generated by ChatGPT and annotated by human annotators. In contrast to previous studies of zero-resource hallucination detection, our method and benchmark concentrate on passage-level detection rather than sentence-level detection. We empirically evaluate our method and existing zero-resource detection methods on different domains of the benchmark to explore the implicit relation between hallucination and training data. Furthermore, we manually analyze hallucination cases that the LLM failed to capture, revealing a shared limitation of zero-resource methods.
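The abstract describes reverse validation only at a high level, so the sketch below illustrates one plausible reading of the idea for passage-level self-checking: ask the model which subject a generated passage is about, and flag the passage as hallucinated if the recovered subject does not match the original. This is a minimal sketch, not the authors' protocol; the OpenAI-style chat API, the model name, the prompt wording, and the substring-match criterion are all illustrative assumptions.

# Minimal sketch of reverse validation for passage-level hallucination
# detection. Assumptions (not the paper's exact protocol): an OpenAI-style
# chat API, illustrative prompt wording, and a naive substring match
# between the recovered subject and the original one.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str) -> str:
    """Send a single-turn prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return (response.choices[0].message.content or "").strip()


def reverse_validate(entity: str, passage: str) -> bool:
    """Return True if the passage passes the reverse-validation check.

    Idea: if the passage is factual, the model should be able to recover
    the subject entity from the passage alone; a mismatch is treated as
    evidence of hallucination.
    """
    recovered = ask(
        "The following passage describes a single subject. "
        "Reply with only the subject's name.\n\n" + passage
    )
    return entity.lower() in recovered.lower()


if __name__ == "__main__":
    passage = ask("Write a short factual passage about Marie Curie.")
    verdict = "factual" if reverse_validate("Marie Curie", passage) else "hallucinated"
    print(f"Passage judged: {verdict}")

The substring match is deliberately naive; a practical system would need fuzzier matching to handle aliases, paraphrases, and partial names.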
