
Commit

Modified paper and added results.
souradipp76 committed Oct 22, 2024
1 parent e734e22 commit c05f30e
Showing 4 changed files with 48 additions and 14 deletions.
4 changes: 1 addition & 3 deletions README.md
@@ -11,13 +11,11 @@

Multi-Modal Process of Elimination (MM-PoE) is a method to enhance vision language models' performance on multiple-choice visual reasoning by employing a two-step scoring system that first eliminates incorrect options and then predicts from the remaining ones. Our experiments across three question answering datasets show the method's effectiveness, particularly in visual reasoning tasks.

**Statement of Need**

Large language models (LLMs) excel at in-context learning for multiple choice reasoning tasks but often treat all options equally, unlike humans, who typically eliminate incorrect choices before selecting the correct answer. The same is true of vision language models (VLMs) on visual question answering tasks with multiple choices. This discrepancy can limit the effectiveness of VLMs in accurately solving such tasks. To address this, we introduce Multi-Modal Process of Elimination (MM-PoE), a two-step scoring method designed to enhance VLM performance by mimicking human reasoning strategies in multi-modal settings.

In the first step, the method evaluates and scores each option, systematically eliminating those that appear incorrect. The second step masks these eliminated options, allowing the VLM to focus solely on the remaining viable choices when making its final prediction. Our zero-shot experiments across three datasets demonstrate MM-PoE's effectiveness, with particularly strong results on logical reasoning tasks. MM-PoE also adapts to few-shot settings and is compatible with current state-of-the-art VLMs.

By implementing MM-PoE, researchers and practitioners can experiment with and significantly improve the accuracy and reliability of VLMs on multiple choice reasoning tasks, making it a valuable tool for advancing machine learning models for visual reasoning.
Using this tool, researchers and practitioners can experiment with and significantly improve the accuracy and reliability of VLMs on multiple choice reasoning tasks, making it a valuable tool for advancing machine learning models for visual reasoning.
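
The two-step procedure described above can be sketched in a few lines of Python. This is a minimal illustration rather than the package's actual API: `score_option` is a hypothetical stand-in for whatever likelihood the underlying VLM assigns to an option, and the below-average cutoff is one plausible elimination rule.

```python
def mm_poe(image, question, options, score_option):
    """Two-step Process of Elimination; illustrative sketch only.

    `score_option(image, question, option, candidates)` is a hypothetical
    callable standing in for the VLM's likelihood of an option given the
    image, the question, and the (possibly masked) candidate list.
    """
    # Step 1: score every option and eliminate those scoring below the
    # mean (a below-average cutoff is one plausible elimination rule).
    scores = [score_option(image, question, o, options) for o in options]
    mean = sum(scores) / len(scores)
    keep = [s >= mean for s in scores]

    # Step 2: replace eliminated options with a mask token and re-score
    # only the surviving options against the masked candidate list.
    masked = [o if k else "[MASK]" for o, k in zip(options, keep)]
    finals = {o: score_option(image, question, o, masked)
              for o, k in zip(options, keep) if k}
    return max(finals, key=finals.get)
```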

## Installing MM-PoE

22 changes: 22 additions & 0 deletions mm_poe/results/language_modeling.csv
@@ -21,3 +21,25 @@ GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,language_modeling,2,0,100,0.2800
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,language_modeling,3,0,100,0.3100
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,language_modeling,4,0,100,0.2400
GIT,microsoft/git-base-vqav2,FP32,vqa,2,language_modeling,0,0,100,0.6000
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,0,0,100,0.1600
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,1,0,100,0.2000
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,2,0,100,0.2200
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,3,0,100,0.1800
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,4,0,100,0.1300
GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,0,0,100,0.3000
GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,1,0,100,0.2600
GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,2,0,100,0.2200
GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,3,0,100,0.2600
GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,4,0,100,0.2700
GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,0,0,100,0.1900
GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,0,0,100,0.3100
GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,1,0,100,0.2000
GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,1,0,100,0.2800
GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,2,0,100,0.2100
GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,2,0,100,0.2300
GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,3,0,100,0.2200
GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,3,0,100,0.2800
GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,4,0,100,0.2000
GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,4,0,100,0.2800
GIT,microsoft/git-base-textvqa,FP32,vqa,2,language_modeling,0,0,100,0.1900
GIT,microsoft/git-base-textvqa,FP32,vqa,2,average_language_modeling,0,0,100,0.1800
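
The results file above appears to carry no header row. A short pandas sketch for aggregating it, assuming the columns are model family, checkpoint, precision, dataset, a per-run setting, scoring method, seed, an unused flag, sample count, and accuracy (these labels are our own guesses, not part of the file):

```python
import pandas as pd

# Assumed labels for the headerless CSV; the names are guesses based on
# the visible values, not identifiers defined by the repository.
cols = ["family", "checkpoint", "precision", "dataset", "setting",
        "method", "seed", "flag", "n_samples", "accuracy"]
df = pd.read_csv("mm_poe/results/language_modeling.csv",
                 header=None, names=cols)

# Mean accuracy across seeds for each checkpoint/dataset/method triple.
summary = (df.groupby(["checkpoint", "dataset", "method"])["accuracy"]
             .mean()
             .round(4))
print(summary)
```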
Binary file added paper/figures/17.png
36 changes: 25 additions & 11 deletions paper/paper.md
@@ -32,7 +32,7 @@ Large language models (LLMs) excel at in-context learning for multiple choice reasoning

In the first step, the method evaluates and scores each option, systematically eliminating those that appear incorrect. The second step masks these eliminated options, allowing the VLM to focus solely on the remaining viable choices when making its final prediction. Our zero-shot experiments across three datasets demonstrate MM-PoE's effectiveness, with particularly strong results on logical reasoning tasks. MM-PoE also adapts to few-shot settings and is compatible with current state-of-the-art VLMs.

By implementing MM-PoE, researchers and practitioners can experiment with and significantly improve the accuracy and reliability of VLMs on multiple choice reasoning tasks, making it a valuable tool for advancing machine learning models for visual reasoning.
Using this tool, researchers and practitioners can experiment with and significantly improve the accuracy and reliability of VLMs on multiple choice reasoning tasks, making it a valuable tool for advancing machine learning models for visual reasoning.
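
In symbols, one plausible formalization of the two steps (our own notation; the exact scoring function and elimination rule may differ): given an image $x$, a question $q$, and answer options $y_1, \dots, y_n$,

$$s_i = \operatorname{score}(y_i \mid x, q), \qquad \mathcal{E} = \Big\{\, i : s_i < \tfrac{1}{n} \sum_{j=1}^{n} s_j \,\Big\},$$

$$\hat{y} = \arg\max_{i \notin \mathcal{E}} \operatorname{score}(y_i \mid x, q, y'_{1:n}), \qquad y'_j = \begin{cases} y_j, & j \notin \mathcal{E}, \\ \texttt{[MASK]}, & j \in \mathcal{E}, \end{cases}$$

so the final prediction is made only over the options that survive elimination.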

# State of the Field

@@ -145,23 +145,37 @@ MM-PoE consistently outperformed or matched the best-performing baselines across

| Model | Dataset | LM | AVG | Calibration | Channel | MCP | PoE |
|----|------|------|------|-----------|---|---|---|
|microsoft/git-base-vqav2| VQA | 45 | 43 | 38 | 14 | 2 | |
|microsoft/git-base-vqav2| ScienceQA | 27.4 | | 23.2| 24.6 | 25.8 | 27.2 |
|microsoft/git-base-vqav2| AI2D | 25.4| | 26.4| 25.4 | 25.3 | 26.5 |
|microsoft/git-base-textvqa| VQA | | | | | | |
|microsoft/git-base-textvqa| ScienceQA | 21.8| | 25.8 | 23.4 | 23.6 | 28.2 |
|microsoft/git-base-textvqa| AI2D | 26.5 | | 20.8| 26.2 | 24.2| 26.8 |
|microsoft/git-base-vqav2| VQA | 45 | 43 | 38 | | | |
|microsoft/git-base-vqav2| ScienceQA | 27.4 | 17.8 | 23.2| 24.6 | 25.8 | 27.2 |
|microsoft/git-base-vqav2| AI2D | 25.4| 26.2 | 26.4| 25.4 | 25.3 | 26.5 |
|microsoft/git-base-textvqa| VQA | 18.5 | 17 | | | | |
|microsoft/git-base-textvqa| ScienceQA | 21.8 | 20.4 | 25.8 | 23.4 | 23.6 | 28.2 |
|microsoft/git-base-textvqa| AI2D | 26.5 | 27.6 | 20.8| 26.2 | 24.2| 26.8 |

**Table 1**: Comparison of Multiple-Choice Prompting (MCP) and Process of Elimination (PoE) accuracy scores on three visual question answering datasets for the `microsoft/git-base-vqav2` and `microsoft/git-base-textvqa` models in the zero-shot setting. Each dataset has a different number of answer choices. PoE largely outperforms MCP on all the visual reasoning tasks for the two multi-modal models evaluated.

## Example
## Examples

### ScienceQA Example
<img src="figures/image.png" alt="Example" width="500">

**Question**: Which of these states is farthest north?<br>
**Choices**: West Virginia, Louisiana, Arizona, Oklahoma<br>
**Masked Choices**: West Virginia, Louisiana, [MASK], [MASK]<br>
**Predicted**: West Virginia
**Options**: West Virginia, Louisiana, Arizona, Oklahoma<br>
**Ground Truth Option**: West Virginia

**Predicted Masks**: West Virginia, Louisiana, [MASK], [MASK]<br>
**Predicted Option**: West Virginia

### AI2D Example

<img src="figures/17.png" alt="Example" width="500">

**Question**: Are phytoplankton predators or prey in this food chain?<br>
**Options**: producer, predator, prey, NA<br>
**Ground Truth Option**: prey

**Predicted Masks**: [MASK], predator, prey, NA<br>
**Predicted Option**: prey
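
The masking in both examples is plain string substitution over the option list before the second scoring pass. A toy sketch reproducing the AI2D case, where the prompt template is one plausible format rather than the exact one MM-PoE uses, and the step-1 elimination outcome is taken from the example above rather than computed:

```python
question = "Are phytoplankton predators or prey in this food chain?"
options = ["producer", "predator", "prey", "NA"]
keep = [False, True, True, True]  # step-1 elimination from the example

# Replace eliminated options with the mask token, then render one
# plausible multiple-choice prompt for the second scoring pass.
masked = [o if k else "[MASK]" for o, k in zip(options, keep)]
choices = "\n".join(f"{label}. {o}" for label, o in zip("ABCD", masked))
prompt = f"Question: {question}\n{choices}\nAnswer:"
print(prompt)
```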

# Conclusion

