
Clarification needed in evaluation numbers #5

Open
saurabhkumar8112 opened this issue Dec 12, 2023 · 5 comments

Comments

@saurabhkumar8112

Hello,
Thanks for the repo and awesome work. I am requesting clarification on the evaluation results shown in the repo.
[screenshot: evaluation results table from the repo]

For HumanEval zero-shot, GPT-4's score is reported here as 87.4, but in the Gemini report, the GPT-4 paper, and everywhere else, GPT-4's zero-shot HumanEval score is 67.

[screenshots: HumanEval scores as reported in the Gemini report and the GPT-4 technical report]

Is the "Zero-shot" prompt technique mentioned in the repo followed by Medprompt methodology? If yes, please clarify.
For MMLU is explicitly clear but not for others.

Apologies if I missed anything.

saurabhkumar8112 changed the title from "Clarification need in evaluation numbers" to "Clarification needed in evaluation numbers" on Dec 12, 2023
@dzunglt24

@saurabhkumar8112 Looking into their code, I'd guess it is a standard zero-shot result using the newest GPT-4 checkpoint.

@Harsha-Nori
Collaborator

Yes, @dzunglt24 is right -- we do have all the code we used to run on HumanEval here, and it is zero-shot with the latest GPT-4 checkpoint. The numbers in the OpenAI report are from many months ago, and it's likely that both model improvements and subtle differences in prompting (even in the zero-shot setting) account for the improved number we report here.

I believe others have found that the GPT-4 numbers were underreported in the Technical Report as well, e.g. see: https://twitter.com/OwariDa/status/1732423557802782854

Our HumanEval scripts/prompt are:
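
For context, a minimal sketch of what a generic zero-shot HumanEval run looks like is below. It is not the repo's actual script or prompt; it assumes the `openai` v1 Python client, the `human-eval` package, and a placeholder model name.

```python
# Rough, hypothetical sketch of a generic zero-shot HumanEval run -- not the
# repo's actual script or prompt. Assumes the `openai` (v1) and `human-eval`
# packages are installed and OPENAI_API_KEY is set in the environment.
from human_eval.data import read_problems, write_jsonl
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4-1106-preview"  # placeholder; the exact checkpoint used is an assumption

def complete(prompt: str) -> str:
    # Zero-shot: the raw HumanEval prompt is sent with no exemplars or chain of thought.
    resp = client.chat.completions.create(
        model=MODEL,
        temperature=0.0,
        messages=[{"role": "user", "content": prompt}],
    )
    # Post-processing (e.g. stripping markdown fences from the reply) is omitted here.
    return resp.choices[0].message.content

problems = read_problems()
samples = [
    {"task_id": task_id, "completion": complete(problem["prompt"])}
    for task_id, problem in problems.items()
]
write_jsonl("samples.jsonl", samples)
# Score afterwards with the human-eval CLI: evaluate_functional_correctness samples.jsonl
```

Even small differences in a pipeline like this (system message, temperature, how code is extracted from the reply) can move the pass@1 number, which is the point being made above.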

@saurabhkumar8112
Author

I see, that's good to know.
Does that mean the Gemini report under-reported the numbers for GPT-4 (since they came from an older checkpoint)?


@Harsha-Nori
Collaborator

> I see, that's good to know. Does that mean the Gemini report under-reported the numbers for GPT-4 (since they came from an older checkpoint)?

I believe the Gemini report cited and pulled the HumanEval numbers directly from OpenAI's initial GPT-4 technical report (which was released in March alongside the first version of the model). We just happened to run our own zero-shot prompts against a more recent checkpoint, so we have updated numbers here.
