HaELM: An automatic MLLM hallucination detection framework
1. Installing
2. Preparing
3. Training
We provide the hallucination training dataset in "data/train_data.jsonl" and the manually labeled validation set in "data/eval_data.jsonl".
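Both files use the JSON Lines format (one JSON object per line). A minimal sketch of loading them, assuming only the JSONL layout and not any particular field schema:

```python
import json

def load_jsonl(path):
    """Read a JSON Lines file into a list of dicts, skipping blank lines."""
    records = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# Hypothetical usage; the field names inside each record depend on the
# actual dataset schema shipped in the repository.
# train_data = load_jsonl("data/train_data.jsonl")
# eval_data = load_jsonl("data/eval_data.jsonl")
```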
4. Interface
We provide interface templates populated with the output of mPLUG-Owl in "LLM_output/mPLUG_caption.jsonl".
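A template of this kind pairs a model-generated caption with reference material and asks an LLM to judge hallucination. The sketch below is a hypothetical illustration of that structure; the prompt wording and the record fields in "LLM_output/mPLUG_caption.jsonl" may differ from what is assumed here:

```python
# Hypothetical evaluation prompt; the actual template used by HaELM
# is defined in the repository and may be worded differently.
PROMPT = (
    "Reference captions: {reference}\n"
    "Generated description: {response}\n"
    "Does the description contain hallucination? Answer yes or no."
)

def build_prompt(reference, response):
    """Fill the evaluation template with reference captions and a model response."""
    return PROMPT.format(reference=reference, response=response)
```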
5. Citation
@article{wang2023evaluation,
title={Evaluation and Analysis of Hallucination in Large Vision-Language Models},
author={Wang, Junyang and Zhou, Yiyang and Xu, Guohai and Shi, Pengcheng and Zhao, Chenlin and Xu, Haiyang and Ye, Qinghao and Yan, Ming and Zhang, Ji and Zhu, Jihua and others},
journal={arXiv preprint arXiv:2308.15126},
year={2023}
}