This is a compilation of papers on detecting and mitigating hallucinations in large language models, collected for LLM developers; a minimal sketch of the verify-then-revise pattern that several of these papers share follows the list.
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback
- FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
- Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification
- RARR: Researching and Revising What Language Models Say, Using Language Models
- Chain-of-Verification Reduces Hallucination in Large Language Models
- A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
- Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
- Gorilla: Large Language Model Connected with Massive APIs
- Trapping LLM Hallucinations Using Tagged Context Prompts
- Cognitive Mirage: A Review of Hallucinations in Large Language Models
- Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
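
Several of the entries above (Chain-of-Verification, Ever, RARR, Check Your Facts and Try Again) share a generate-verify-revise loop: draft an answer, check its factual claims, then rewrite the draft in light of the checks. The sketch below illustrates that pattern only at a high level; `complete` is a hypothetical stand-in for whatever LLM API you use, and the prompts are illustrative rather than the ones from the papers.

```python
# Minimal sketch of a verify-then-revise loop (Chain-of-Verification style).
# `complete` is a hypothetical placeholder for any text-completion call;
# swap in your own client (OpenAI, Anthropic, a local model, etc.).

def complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError

def verify_then_revise(question: str) -> str:
    # 1. Draft a baseline answer.
    baseline = complete(f"Answer the question:\n{question}")

    # 2. Plan verification questions that probe the facts in the draft.
    plan = complete(
        "List short fact-checking questions, one per line, for this answer:\n"
        f"Question: {question}\nAnswer: {baseline}"
    )
    checks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 3. Answer each verification question independently of the draft,
    #    so errors in the draft do not leak into the checks.
    findings = [f"{q}\n{complete(q)}" for q in checks]

    # 4. Revise the draft so it is consistent with the verification answers.
    return complete(
        f"Question: {question}\nDraft answer: {baseline}\n"
        "Verification Q&A:\n" + "\n\n".join(findings) +
        "\n\nRewrite the draft so it is consistent with the verification answers."
    )
```

The verification answers can also be grounded in retrieved documents or search results rather than the model alone, which is the direction taken by FreshLLMs and the external-knowledge papers listed above.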