CSE576-NLP - Analyzing & Improving Hallucinations in MLLMs

Multimodal large language models (MLLMs) have shown impressive performance on tasks that involve processing visual and textual information. However, MLLMs are still prone to hallucinations, i.e., generating text that is factually incorrect or nonsensical, and this remains a primary obstacle to their adoption in real-world applications. This project addresses hallucinations in MLLMs by systematically analyzing where they occur, understanding their underlying causes, and proposing methods to mitigate them. We aim to pave the way for more intelligent, safer, and more dependable MLLMs across a variety of multimodal domains and tasks.
