Commit 33d74be (parent: c05f30e)

Updated paper and README.

2 files changed: 6 additions, 5 deletions

README.md

Lines changed: 1 addition & 1 deletion

@@ -63,7 +63,7 @@ $ python -m mm_poe
 $ mm_poe
 ```
 
-The application will prompt the user to provide relevant inputs for a multiple choice question e.g. a question, multiple answer choices for the question and the path to the image relevant the question context. Once the inputs are provided, the predicted answer will be displayed based prompt outputs. Note that this application runs inference for only a single sample at a time.
+The application will prompt the user to provide the relevant inputs for a multiple-choice question, e.g., the question, its answer choices, and the path to the image relevant to the question context. Once the inputs are provided, the predicted answer is displayed based on the prompt outputs. Note that this application runs inference on only a single sample at a time.
 
 
 <img src="paper/figures/cli.png" alt="Example" width="500">
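The changed paragraph describes a simple prompt-and-answer interaction. A hypothetical session might look like the following; the prompt wording, choices, and file path here are illustrative, not taken from the actual tool:

```
$ mm_poe
Question: What shape is the traffic sign?
Choices: octagon, circle, triangle, square
Image path: ./stop_sign.png
Predicted answer: octagon
```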

paper/paper.md

Lines changed: 5 additions & 4 deletions

@@ -69,7 +69,10 @@ The goal is to develop an in-context learning method that accurately selects $y$
 In the first step of the MM-PoE method, each option $y_i$ is scored based on a specified metric. The score function, $\text{score}(x, h, y_i)$, evaluates each option's plausibility given the question $x$ and image $h$. The scores are used to eliminate options that are deemed less likely to be correct. Specifically, options whose scores are below the average score are eliminated. This is calculated as follows:
 
 $$
-s_i = \text{score}(x, h, y_i)\\
+s_i = \text{score}(x, h, y_i)
+$$
+
+$$
 Y_{\text{wrong}} = \{y_i | s_i < \text{avg}(s_1, \ldots, s_n)\}
 $$
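The two display equations that this hunk splits apart amount to only a few lines of code. A minimal sketch of the elimination step in Python, assuming a generic `score` callable as a stand-in for the model-based plausibility metric (names are illustrative, not the package's actual API):

```python
from statistics import mean

def eliminate_below_average(x, h, choices, score):
    """First MM-PoE step: score each option and flag the below-average ones.

    Computes s_i = score(x, h, y_i) for every choice and returns
    Y_wrong = {y_i | s_i < avg(s_1, ..., s_n)}.
    """
    scores = [score(x, h, y) for y in choices]
    threshold = mean(scores)
    return {y for y, s in zip(choices, scores) if s < threshold}

# Toy usage with a dummy scorer; real scores would come from a
# vision-language model conditioned on the question x and image h.
dummy_score = lambda x, h, y: {"octagon": 0.7, "circle": 0.2,
                               "triangle": 0.5, "square": 0.1}[y]
wrong = eliminate_below_average("What shape is a stop sign?", None,
                                ["octagon", "circle", "triangle", "square"],
                                dummy_score)
print(wrong)  # {'circle', 'square'}: scores 0.2 and 0.1 fall below the 0.375 average
```

The surviving options would then be passed to the method's next step, which the quoted paragraph's "first step" framing implies but this sketch does not cover.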

@@ -145,14 +148,12 @@ MM-PoE consistently outperformed or matched the best-performing baselines across
 
 | Model | Dataset | LM | AVG | Calibration | Channel | MCP | PoE |
 |----|------|------|------|-----------|---|---|---|
-|microsoft/git-base-vqav2| VQA | 45 | 43 | 38 | | | | |
 |microsoft/git-base-vqav2| ScienceQA | 27.4 | 17.8 | 23.2| 24.6 | 25.8 | 27.2 |
 |microsoft/git-base-vqav2| AI2D | 25.4| 26.2 | 26.4| 25.4 | 25.3 | 26.5 |
-|microsoft/git-base-textvqa| VQA | 18.5 | 17 | | | | |
 |microsoft/git-base-textvqa| ScienceQA | 21.8 | 20.4 | 25.8 | 23.4 | 23.6 | 28.2 |
 |microsoft/git-base-textvqa| AI2D | 26.5 | 27.6 | 20.8| 26.2 | 24.2| 26.8 |
 
-**Table 1**: Comparison of Multiple-Choice Prompting (MCP) and Process of Elimination (PoE) accuracy scores on 3 visual question answering datasets for the `microsoft/git-base-vqav2` and `microsoft/git-base-textvqa` models in the zero-shot settings. Each dataset has different number of answer choices. PoE largely outperforms MCP on all the visual reasoning tasks for the two multi-modal models mentioned.
+**Table 1**: Comparison of Multiple-Choice Prompting (MCP) and Process of Elimination (PoE) accuracy scores on 2 visual question answering datasets for the `microsoft/git-base-vqav2` and `microsoft/git-base-textvqa` models in the zero-shot setting. Each dataset has a different number of answer choices. PoE mostly outperforms MCP on the visual reasoning tasks for the two multi-modal models mentioned.
 
 ## Examples
 
