Multimodal large language models (MLLMs) have shown impressive performance on tasks that involve processing visual and textual information. However, MLLMs remain prone to hallucinations, i.e., generating text that is factually incorrect or nonsensical. This is a primary factor preventing their wider adoption in real-world applications. This project addresses hallucinations in MLLMs by systematically analyzing their occurrence, investigating their underlying causes, and proposing mitigation methods. We aim to pave the way for more intelligent, safer, and more dependable MLLMs across a variety of multimodal domains and tasks.