UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection.
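A minimal sketch of the black-box consistency idea behind UQ-based hallucination detection: sample several responses to the same prompt and treat low agreement as a hallucination signal. This is an illustration of the general technique only, not UQLM's API; `generate_samples` is a hypothetical stand-in for your LLM client.

```python
from difflib import SequenceMatcher
from itertools import combinations


def generate_samples(prompt: str, n: int = 5) -> list[str]:
    """Hypothetical stand-in: return n independently sampled LLM responses."""
    raise NotImplementedError("plug in your LLM client here")


def consistency_score(responses: list[str]) -> float:
    """Mean pairwise similarity of sampled responses (0..1).

    Low agreement means the model is unstable on this prompt, which is a
    common black-box proxy for hallucination risk.
    """
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)


# Usage: flag answers whose sampled responses disagree with each other.
# responses = generate_samples("Who wrote 'The Master and Margarita'?")
# if consistency_score(responses) < 0.5:
#     print("low consistency -- treat this answer as potentially hallucinated")
```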
[NeurIPS 2025] SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations
RAG hallucination detection via LRP.
CRoPS (TMLR)
Build your own open-source REST API endpoint to detect hallucinations in LLM-generated responses.
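A minimal FastAPI sketch of such an endpoint; FastAPI is assumed only for illustration, and `score_response` is a hypothetical placeholder for whatever detector the project actually ships.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class CheckRequest(BaseModel):
    prompt: str
    response: str


class CheckResult(BaseModel):
    hallucination_score: float  # 0.0 = grounded, 1.0 = likely hallucinated


def score_response(prompt: str, response: str) -> float:
    """Hypothetical detector stub; swap in a real scorer (NLI, UQ, etc.)."""
    raise NotImplementedError


@app.post("/detect", response_model=CheckResult)
def detect(req: CheckRequest) -> CheckResult:
    # One POST per (prompt, response) pair; returns a single risk score.
    return CheckResult(hallucination_score=score_response(req.prompt, req.response))

# Run with: uvicorn main:app --reload
```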
Semi-supervised pipeline to detect LLM hallucinations. Uses Mistral-7B for zero-shot pseudo-labeling and DeBERTa for efficient classification.
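A hedged sketch of that recipe, assuming Hugging Face transformers/datasets: an instruction-tuned Mistral-7B pseudo-labels unlabeled (context, answer) pairs, then a DeBERTa-v3 classifier is fine-tuned on the pseudo-labels for cheap inference. Model names, the prompt, and the sample data are illustrative assumptions, not the repository's exact configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments, pipeline)

# 1) Zero-shot pseudo-labeling with an instruction-tuned LLM.
labeler = pipeline("text-generation",
                   model="mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")


def pseudo_label(context: str, answer: str) -> int:
    prompt = (f"Context:\n{context}\n\nAnswer:\n{answer}\n\n"
              "Is the answer fully supported by the context? Reply YES or NO:")
    out = labeler(prompt, max_new_tokens=3)[0]["generated_text"][len(prompt):]
    return 0 if "YES" in out.upper() else 1  # 1 = hallucinated


# 2) Fine-tune a small DeBERTa classifier on the pseudo-labels.
tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
clf = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2)


def to_features(batch):
    enc = tok(batch["context"], batch["answer"], truncation=True, max_length=512)
    enc["labels"] = batch["label"]
    return enc


pairs = [  # unlabeled pool; in practice, thousands of RAG (context, answer) pairs
    {"context": "Paris is the capital of France.",
     "answer": "The capital of France is Berlin."},
]
ds = Dataset.from_list(
    [{**p, "label": pseudo_label(p["context"], p["answer"])} for p in pairs]
).map(to_features, batched=True)

trainer = Trainer(
    model=clf,
    args=TrainingArguments(output_dir="deberta-hallucination", num_train_epochs=3),
    train_dataset=ds,
    tokenizer=tok,
)
trainer.train()
```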
Research paper on constructing agentic debate pipelines to reduce hallucinations in LLMs, using both open-source and commercial models.
Novel hallucination detection method.
Lecture-RAG is a grounding-aware Video-RAG framework that reduces hallucinations and supports algorithmic reasoning in educational slide-based and blackboard tutorial videos.
This repository contains the codebase for a proof of concept of LLM package hallucination and the associated vulnerabilities.
Source code for the paper: A Hallucination Mitigation Scheme in Security Policy Generation with Large Language Models
Automated detection, visualization, and suppression of hallucination-associated neurons in open-source LLMs; an LLM mechanistic interpretability research tool.
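An illustrative sketch (not the tool's actual code) of the suppression half of that idea: once candidate neurons are identified, they can be zeroed at inference time with a PyTorch forward hook. The layer and neuron indices below are hypothetical, and GPT-2 stands in for a larger open-source LLM.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in for an open-source LLM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical map of layer index -> MLP neuron indices flagged as
# hallucination-associated (a real tool derives these from activation analysis).
suppress = {4: [17, 802], 7: [1193]}


def make_hook(neurons):
    def hook(module, inputs, output):
        output[..., neurons] = 0.0  # zero the selected post-activation units
        return output
    return hook


for layer_idx, neurons in suppress.items():
    # Hook GPT-2's MLP activation module so the chosen neurons are silenced
    # on every forward pass.
    model.transformer.h[layer_idx].mlp.act.register_forward_hook(make_hook(neurons))

inputs = tok("The capital of Australia is", return_tensors="pt")
with torch.no_grad():
    print(tok.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```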