
Updated paper and README. #9


Merged
merged 2 commits into from
Oct 22, 2024
2 changes: 1 addition & 1 deletion README.md
@@ -63,7 +63,7 @@ $ python -m mm_poe
$ mm_poe
```

The application will prompt the user to provide relevant inputs for a multiple choice question e.g. a question, multiple answer choices for the question and the path to the image relevant the question context. Once the inputs are provided, the predicted answer will be displayed based prompt outputs. Note that this application runs inference for only a single sample at a time.
The application will prompt the user to provide relevant inputs for a multiple-choice question, e.g., a question, multiple answer choices for the question, and the path to the image relevant to the question context. Once the inputs are provided, the predicted answer will be displayed based on the prompt outputs. Note that this application runs inference for only a single sample at a time.


<img src="paper/figures/cli.png" alt="Example" width="500">
13 changes: 7 additions & 6 deletions paper/paper.md
@@ -1,5 +1,5 @@
---
title: 'MM-PoE: Multiple Choice Reasoning via. Process of Elimination using Multi-Modal models'
title: 'MM-PoE: Multiple Choice Reasoning via Process of Elimination using Multi-Modal Models'
tags:
- machine learning
- large language models
@@ -18,7 +18,7 @@ affiliations:
index: 1
- name: Purdue University
index: 2
date: 16 October 2024
date: 22 October 2024
bibliography: paper.bib
---

@@ -69,7 +69,10 @@ The goal is to develop an in-context learning method that accurately selects $y$
In the first step of the MM-PoE method, each option $y_i$ is scored based on a specified metric. The score function, $\text{score}(x, h, y_i)$, evaluates each option's plausibility given the question $x$ and image $h$. The scores are used to eliminate options that are deemed less likely to be correct. Specifically, options whose scores are below the average score are eliminated. This is calculated as follows:

$$
s_i = \text{score}(x, h, y_i)\\
s_i = \text{score}(x, h, y_i)
$$

$$
Y_{\text{wrong}} = \{y_i | s_i < \text{avg}(s_1, \ldots, s_n)\}
$$
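The elimination step defined by these equations can be sketched in a few lines of Python. This is a minimal illustration, not the package's actual implementation; it assumes the per-option scores $s_i$ have already been computed by the score function elsewhere:

```python
def eliminate_options(scores):
    """Return indices of options in Y_wrong, i.e. options whose
    score falls below the average of all option scores."""
    avg = sum(scores) / len(scores)
    return [i for i, s in enumerate(scores) if s < avg]

# Example: four answer choices with hypothetical plausibility scores.
# The mean is 0.25, so options 0 and 3 are eliminated.
scores = [0.10, 0.40, 0.35, 0.15]
print(eliminate_options(scores))  # → [0, 3]
```

The surviving options (here, options 1 and 2) are then passed to the second step, where the model chooses among the remaining candidates.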

@@ -145,14 +148,12 @@ MM-PoE consistently outperformed or matched the best-performing baselines across

| Model | Dataset | LM | AVG | Calibration | Channel | MCP | PoE |
|----|------|------|------|-----------|---|---|---|
|microsoft/git-base-vqav2| VQA | 45 | 43 | 38 | | | |
|microsoft/git-base-vqav2| ScienceQA | 27.4 | 17.8 | 23.2| 24.6 | 25.8 | 27.2 |
|microsoft/git-base-vqav2| AI2D | 25.4| 26.2 | 26.4| 25.4 | 25.3 | 26.5 |
|microsoft/git-base-textvqa| VQA | 18.5 | 17 | | | | |
|microsoft/git-base-textvqa| ScienceQA | 21.8 | 20.4 | 25.8 | 23.4 | 23.6 | 28.2 |
|microsoft/git-base-textvqa| AI2D | 26.5 | 27.6 | 20.8| 26.2 | 24.2| 26.8 |

**Table 1**: Comparison of Multiple-Choice Prompting (MCP) and Process of Elimination (PoE) accuracy scores on 3 visual question answering datasets for the `microsoft/git-base-vqav2` and `microsoft/git-base-textvqa` models in the zero-shot settings. Each dataset has different number of answer choices. PoE largely outperforms MCP on all the visual reasoning tasks for the two multi-modal models mentioned.
**Table 1**: Comparison of Multiple-Choice Prompting (MCP) and Process of Elimination (PoE) accuracy scores on 2 visual question answering datasets for the `microsoft/git-base-vqav2` and `microsoft/git-base-textvqa` models in the zero-shot setting. Each dataset has a different number of answer choices. PoE mostly outperforms MCP on the visual reasoning tasks for the two multi-modal models mentioned.

## Examples
