
ELUC: Predictor robojudge #49

Open · ofrancon opened this issue on Sep 21, 2023 · 0 comments
@ofrancon (Member) commented:
Add a "robojudge" that evaluates how well the predictors perform and how they compare.
Different metrics can be computed and compared, like mean absolute error (MAE) or MAE / hectar, mean square error, etc. Mean rank is another useful metric: rank the predictors for each country, and compute their mean rank on aggregate.
It should be possible to compare the predictors at an aggregated level, and also at a country level.
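A minimal sketch of how such a comparison could work, assuming a pandas DataFrame with hypothetical columns `country`, `area_ha`, and `eluc_actual`, plus a dict of per-predictor predictions (these names and the `score_predictors` helper are illustrative, not the project's actual schema):

```python
import pandas as pd


def score_predictors(df: pd.DataFrame, predictions: dict[str, pd.Series]) -> pd.DataFrame:
    """Compute per-country MAE, MAE/hectare and MSE for each predictor,
    then aggregate and add a mean-rank column (rank by country-level MAE)."""
    rows = []
    for name, preds in predictions.items():
        err = preds - df["eluc_actual"]
        per_country = pd.DataFrame({
            "country": df["country"],
            "abs_err": err.abs(),
            "sq_err": err ** 2,
            "abs_err_per_ha": err.abs() / df["area_ha"],
        }).groupby("country").mean()
        per_country["predictor"] = name
        rows.append(per_country.reset_index())
    scores = pd.concat(rows, ignore_index=True)

    # Rank predictors within each country by MAE (1 = best), then average the ranks.
    scores["rank"] = scores.groupby("country")["abs_err"].rank(method="min")
    summary = scores.groupby("predictor").agg(
        mae=("abs_err", "mean"),
        mae_per_ha=("abs_err_per_ha", "mean"),
        mse=("sq_err", "mean"),
        mean_rank=("rank", "mean"),
    )
    return summary.sort_values("mean_rank")
```

The per-country table (`scores`) supports country-level comparison, while the returned summary gives the aggregated view.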

See https://phase1.xprize.evolution.ml/ for an example of a robojudge that was used for the Pandemic Resilience XPRIZE challenge.
See https://github.com/cognizant-ai-labs/covid-xprize/blob/master/predictor_robojudge.ipynb for a notebook that was used to compare the predictors.

@ofrancon added the app label on Sep 21, 2023
@ofrancon changed the title from "LUC: Predictor robojudge" to "ELUC: Predictor robojudge" on Sep 22, 2023