From 33d74be2d3464dd1546e44208a412147c466ed64 Mon Sep 17 00:00:00 2001
From: Souradip Pal
Date: Tue, 22 Oct 2024 00:35:37 -0500
Subject: [PATCH 1/2] Updated paper and README.

---
 README.md      | 2 +-
 paper/paper.md | 9 +++++----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index abf7271..8503620 100644
--- a/README.md
+++ b/README.md
@@ -63,7 +63,7 @@ $ python -m mm_poe
 $ mm_poe
 ```
 
-The application will prompt the user to provide relevant inputs for a multiple choice question e.g. a question, multiple answer choices for the question and the path to the image relevant the question context. Once the inputs are provided, the predicted answer will be displayed based prompt outputs. Note that this application runs inference for only a single sample at a time.
+The application will prompt the user to provide relevant inputs for a multiple-choice question, e.g., a question, multiple answer choices for the question, and the path to the image relevant to the question context. Once the inputs are provided, the predicted answer will be displayed based on the prompt outputs. Note that this application runs inference for only a single sample at a time.
 
 Example
 
diff --git a/paper/paper.md b/paper/paper.md
index 6dcb80c..f748951 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -69,7 +69,10 @@ The goal is to develop an in-context learning method that accurately selects $y$
 In the first step of the MM-PoE method, each option $y_i$ is scored based on a specified metric. The score function, $\text{score}(x, h, y_i)$, evaluates each option's plausibility given the question $x$ and image $h$. The scores are used to eliminate options that are deemed less likely to be correct. Specifically, options whose scores are below the average score are eliminated.
 This is calculated as follows:
 $$
-s_i = \text{score}(x, h, y_i)\\
+s_i = \text{score}(x, h, y_i)
+$$
+
+$$
 Y_{\text{wrong}} = \{y_i | s_i < \text{avg}(s_1, \ldots, s_n)\}
 $$
 
@@ -145,14 +148,12 @@ MM-PoE consistently outperformed or matched the best-performing baselines across
 
 | Model | Dataset | LM | AVG | Calibration | Channel | MCP | PoE |
 |----|------|------|------|-----------|---|---|---|
-|microsoft/git-base-vqav2| VQA | 45 | 43 | 38 | | | | |
 |microsoft/git-base-vqav2| ScienceQA | 27.4 | 17.8 | 23.2 | 24.6 | 25.8 | 27.2 |
 |microsoft/git-base-vqav2| AI2D | 25.4 | 26.2 | 26.4 | 25.4 | 25.3 | 26.5 |
-|microsoft/git-base-textvqa| VQA | 18.5 | 17 | | | | |
 |microsoft/git-base-textvqa| ScienceQA | 21.8 | 20.4 | 25.8 | 23.4 | 23.6 | 28.2 |
 |microsoft/git-base-textvqa| AI2D | 26.5 | 27.6 | 20.8 | 26.2 | 24.2 | 26.8 |
 
-**Table 1**: Comparison of Multiple-Choice Prompting (MCP) and Process of Elimination (PoE) accuracy scores on 3 visual question answering datasets for the `microsoft/git-base-vqav2` and `microsoft/git-base-textvqa` models in the zero-shot settings. Each dataset has different number of answer choices. PoE largely outperforms MCP on all the visual reasoning tasks for the two multi-modal models mentioned.
+**Table 1**: Comparison of Multiple-Choice Prompting (MCP) and Process of Elimination (PoE) accuracy scores on 2 visual question answering datasets for the `microsoft/git-base-vqav2` and `microsoft/git-base-textvqa` models in the zero-shot setting. Each dataset has a different number of answer choices. PoE outperforms MCP on most of the visual reasoning tasks for the two multi-modal models mentioned.
 
 ## Examples

From 18ac1b2348fd6f196fd43f88c7674339da0a9503 Mon Sep 17 00:00:00 2001
From: Souradip Pal
Date: Tue, 22 Oct 2024 00:41:22 -0500
Subject: [PATCH 2/2] Updated paper.
---
 paper/paper.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/paper/paper.md b/paper/paper.md
index f748951..c946033 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -1,5 +1,5 @@
 ---
-title: 'MM-PoE: Multiple Choice Reasoning via. Process of Elimination using Multi-Modal models'
+title: 'MM-PoE: Multiple Choice Reasoning via Process of Elimination using Multi-Modal Models'
 tags:
   - machine learning
   - large language models
@@ -18,7 +18,7 @@ affiliations:
   index: 1
 - name: Purdue University
   index: 2
-date: 16 October 2024
+date: 22 October 2024
 bibliography: paper.bib
 ---
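The first patch above fixes the paper's two-equation definition of the elimination step: score each option, then discard those scoring below the average. A minimal Python sketch of that step follows; it is illustrative only (the function name and the toy scores are not from the MM-PoE package).

```python
# Hypothetical sketch of the MM-PoE elimination step defined in the patched
# equations: Y_wrong = {y_i | s_i < avg(s_1, ..., s_n)}; the complement survives.

def eliminate_below_average(options, scores):
    """Return the options that survive the first PoE step."""
    avg = sum(scores) / len(scores)
    # Keep every option y_i whose score s_i is at least the average.
    return [y for y, s in zip(options, scores) if s >= avg]

options = ["a red ball", "a blue cube", "a green cone", "a dog"]
scores = [0.42, 0.07, 0.31, 0.05]  # e.g. model-derived plausibility scores
print(eliminate_below_average(options, scores))  # -> ['a red ball', 'a green cone']
```

The surviving options would then be passed to the second step, where the model answers a multiple-choice prompt restricted to them.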