diff --git a/README.md b/README.md
index 143c9cb..abf7271 100644
--- a/README.md
+++ b/README.md
@@ -11,13 +11,11 @@
Multi-Modal Process of Elimination (MM-PoE) is a method to enhance vision language models' performance on multiple-choice visual reasoning by employing a two-step scoring system that first eliminates incorrect options and then predicts from the remaining ones. Our experiments across three question answering datasets show the method's effectiveness, particularly in visual reasoning tasks.
-**Statement of Need**
-
Large Language Models (LLMs) excel at in-context learning for multiple-choice reasoning tasks but often treat all options equally, unlike humans, who typically eliminate incorrect choices before selecting the correct answer. The same is true of vision language models (VLMs) on visual question answering tasks with multiple choices. This discrepancy can limit the effectiveness of VLMs in accurately solving such tasks. To address this, we introduce Multi-Modal Process of Elimination (MM-PoE), a two-step scoring method designed to enhance VLM performance by mimicking human reasoning strategies in multi-modal settings.
In the first step, the method evaluates and scores each option, systematically eliminating those that appear incorrect. The second step involves masking these eliminated options, allowing the VLM to focus solely on the remaining viable choices to make a final prediction. Our zero-shot experiments across three datasets demonstrate MM-PoE's effectiveness, with particularly strong results in logical reasoning scenarios. Additionally, MM-PoE proves adaptable to few-shot settings and is compatible with current state-of-the-art VLMs.
-By implementing MM-PoE, researchers and practitioners can experiment and significantly improve the accuracy and reliability of VLMs in multiple choice reasoning tasks, making it a valuable tool for advancing machine learning models for visual reasoning.
+Using this tool, researchers and practitioners can experiment with and significantly improve the accuracy and reliability of VLMs on multiple-choice reasoning tasks, advancing machine learning models for visual reasoning.
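+
+The two-step procedure can be summarized in a short sketch. This is illustrative only: `score_options` stands in for the model's option-scoring logic (a hypothetical helper, not the package API), and the below-average elimination threshold is one plausible choice rather than the package's exact rule.
+
+```python
+def mm_poe_predict(question, image, options, score_options):
+    # Step 1: score every option and eliminate those scoring
+    # below the mean, treating them as likely incorrect.
+    scores = score_options(question, image, options)
+    mean_score = sum(scores) / len(scores)
+    keep = [s >= mean_score for s in scores]
+
+    # Step 2: mask the eliminated options and rescore, so the model
+    # chooses only among the surviving options.
+    masked = [opt if k else "[MASK]" for opt, k in zip(options, keep)]
+    new_scores = score_options(question, image, masked)
+    best = max((i for i, k in enumerate(keep) if k),
+               key=lambda i: new_scores[i])
+    return options[best]
+```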
## Installing MM-PoE
diff --git a/mm_poe/results/language_modeling.csv b/mm_poe/results/language_modeling.csv
index 6237ebe..ebc51d0 100644
--- a/mm_poe/results/language_modeling.csv
+++ b/mm_poe/results/language_modeling.csv
@@ -21,3 +21,25 @@ GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,language_modeling,2,0,100,0.2800
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,language_modeling,3,0,100,0.3100
GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,language_modeling,4,0,100,0.2400
GIT,microsoft/git-base-vqav2,FP32,vqa,2,language_modeling,0,0,100,0.6000
+GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,0,0,100,0.1600
+GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,1,0,100,0.2000
+GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,2,0,100,0.2200
+GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,3,0,100,0.1800
+GIT,microsoft/git-base-vqav2,FP32,scienceqa,2,average_language_modeling,4,0,100,0.1300
+GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,0,0,100,0.3000
+GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,1,0,100,0.2600
+GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,2,0,100,0.2200
+GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,3,0,100,0.2600
+GIT,microsoft/git-base-vqav2,FP32,ai2d,2,average_language_modeling,4,0,100,0.2700
+GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,0,0,100,0.1900
+GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,0,0,100,0.3100
+GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,1,0,100,0.2000
+GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,1,0,100,0.2800
+GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,2,0,100,0.2100
+GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,2,0,100,0.2300
+GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,3,0,100,0.2200
+GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,3,0,100,0.2800
+GIT,microsoft/git-base-textvqa,FP32,scienceqa,2,average_language_modeling,4,0,100,0.2000
+GIT,microsoft/git-base-textvqa,FP32,ai2d,2,average_language_modeling,4,0,100,0.2800
+GIT,microsoft/git-base-textvqa,FP32,vqa,2,language_modeling,0,0,100,0.1900
+GIT,microsoft/git-base-textvqa,FP32,vqa,2,average_language_modeling,0,0,100,0.1800
diff --git a/paper/figures/17.png b/paper/figures/17.png
new file mode 100644
index 0000000..c15a66a
Binary files /dev/null and b/paper/figures/17.png differ
diff --git a/paper/paper.md b/paper/paper.md
index 1c6e961..6dcb80c 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -32,7 +32,7 @@ Large Language models (LLMs) excel at in-context learning for multiple choice re
In the first step, the method evaluates and scores each option, systematically eliminating those that appear incorrect. The second step involves masking these eliminated options, allowing the VLM to focus solely on the remaining viable choices to make a final prediction. Our zero-shot experiments across three datasets demonstrate MM-PoE's effectiveness, with particularly strong results in logical reasoning scenarios. Additionally, MM-PoE proves adaptable to few-shot settings and is compatible with current state-of-the-art VLMs.
-By implementing MM-PoE, researchers and practitioners can experiment and significantly improve the accuracy and reliability of VLMs in multiple choice reasoning tasks, making it a valuable tool for advancing machine learning models for visual reasoning.
+Using this tool, researchers and practitioners can experiment with and significantly improve the accuracy and reliability of VLMs on multiple-choice reasoning tasks, advancing machine learning models for visual reasoning.
# State of the Field
@@ -145,23 +145,37 @@ MM-PoE consistently outperformed or matched the best-performing baselines across
| Model | Dataset | LM | AVG | Calibration | Channel | MCP | PoE |
|----|------|------|------|-----------|---|---|---|
-|microsoft/git-base-vqav2| VQA | 45 | 43 | 38| 14 | 2| | |
-|microsoft/git-base-vqav2| ScienceQA | 27.4 | | 23.2| 24.6 | 25.8 | 27.2 |
-|microsoft/git-base-vqav2| AI2D | 25.4| | 26.4| 25.4 | 25.3 | 26.5 |
-|microsoft/git-base-textvqa| VQA | | | | | | |
-|microsoft/git-base-textvqa| ScienceQA | 21.8| | 25.8 | 23.4 | 23.6 | 28.2 |
-|microsoft/git-base-textvqa| AI2D | 26.5 | | 20.8| 26.2 | 24.2| 26.8 |
+|microsoft/git-base-vqav2| VQA | 45 | 43 | 38 | | | |
+|microsoft/git-base-vqav2| ScienceQA | 27.4 | 17.8 | 23.2| 24.6 | 25.8 | 27.2 |
+|microsoft/git-base-vqav2| AI2D | 25.4| 26.2 | 26.4| 25.4 | 25.3 | 26.5 |
+|microsoft/git-base-textvqa| VQA | 18.5 | 17 | | | | |
+|microsoft/git-base-textvqa| ScienceQA | 21.8 | 20.4 | 25.8 | 23.4 | 23.6 | 28.2 |
+|microsoft/git-base-textvqa| AI2D | 26.5 | 27.6 | 20.8| 26.2 | 24.2| 26.8 |
**Table 1**: Accuracy scores for language modeling (LM), average language modeling (AVG), calibration, channel, Multiple-Choice Prompting (MCP), and Process of Elimination (PoE) on 3 visual question answering datasets for the `microsoft/git-base-vqav2` and `microsoft/git-base-textvqa` models in the zero-shot setting. Each dataset has a different number of answer choices. PoE largely outperforms MCP on all the visual reasoning tasks where both are reported for the two multi-modal models.
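+
+The LM and AVG columns correspond to the `language_modeling` and `average_language_modeling` scoring methods in the results CSV. The sketch below shows the distinction, assuming the per-token log-probabilities of an option's answer tokens are already available (how they are obtained depends on the VLM; this is not the package's exact implementation):
+
+```python
+def language_modeling_score(token_logprobs):
+    # "LM": summed log-likelihood of the option's tokens.
+    return sum(token_logprobs)
+
+def average_language_modeling_score(token_logprobs):
+    # "AVG": the same sum normalized by length, so longer options
+    # are not penalized merely for having more tokens.
+    return sum(token_logprobs) / len(token_logprobs)
+```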
-## Example
+## Examples
+### ScienceQA Example
**Question**: Which of these states is farthest north?
-**Choices**: West Virginia, Louisiana, Arizona, Oklahoma
-**Masked Choices**: West Virginia, Louisiana, [MASK], [MASK]
-**Predicted**: West Virginia
+**Options**: West Virginia, Louisiana, Arizona, Oklahoma
+**Ground Truth Option**: West Virginia
+
+**Predicted Masks**: West Virginia, Louisiana, [MASK], [MASK]
+**Predicted Option**: West Virginia
+
+### AI2D Example
+
+![AI2D example](figures/17.png)
+
+**Question**: Are phytoplankton predators or prey in this food chain?
+**Options**: producer, predator, prey, NA
+**Ground Truth Option**: prey
+
+**Predicted Masks**: [MASK], predator, prey, NA
+**Predicted Option**: prey
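+
+A minimal sketch of how the predicted masks above could be produced, assuming each option already has a plausibility score and using a below-average threshold (the scores here are made up for illustration, not the package's actual outputs):
+
+```python
+def mask_options(options, scores):
+    # Options scoring below the mean are replaced with [MASK]
+    # before the final prediction over the surviving choices.
+    threshold = sum(scores) / len(scores)
+    return [o if s >= threshold else "[MASK]"
+            for o, s in zip(options, scores)]
+
+# AI2D example: "producer" falls below the mean and is masked,
+# matching the predicted masks shown above.
+print(mask_options(["producer", "predator", "prey", "NA"],
+                   [-4.0, -1.5, -1.2, -2.0]))
+# -> ['[MASK]', 'predator', 'prey', 'NA']
+```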
# Conclusion