ELUC: evaluate predictor uncertainty #55

Open
ofrancon opened this issue Sep 22, 2023 · 0 comments
Train a model that evaluates the uncertainty of point predictions made by a predictor model.

For a particular prediction, if the predictor has been trained on plenty of similar samples, we can be more confident in its output than if it has never seen this kind of input (an out-of-distribution sample).

See, for example, *Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel* (RIO) for one methodology that could be used to train an uncertainty model. The associated code is available here.
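To make the idea concrete, here is a minimal sketch of the RIO approach using scikit-learn. The small MLP and synthetic data are illustrative stand-ins for the actual ELUC predictor and dataset, and it simplifies the paper's I/O kernel (a sum of separate RBF kernels on inputs and outputs) into a single RBF over concatenated input/output features:

```python
# Sketch of RIO-style uncertainty estimation (not the reference implementation):
# fit a Gaussian process to the residuals of a trained predictor, using both
# the inputs and the predictor's outputs as GP features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(500, 4))           # synthetic stand-in data
y_train = np.sin(X_train).sum(axis=1) + rng.normal(0, 0.1, size=500)

# 1. Train the point predictor (a small MLP standing in for the ELUC predictor).
predictor = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
predictor.fit(X_train, y_train)

# 2. Fit a GP to the predictor's residuals on [inputs, outputs]; a single RBF
#    over the concatenation is a simplification of RIO's I/O kernel.
train_preds = predictor.predict(X_train)
residuals = y_train - train_preds
io_features = np.column_stack([X_train, train_preds])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(io_features, residuals)

# 3. At prediction time the GP's standard deviation quantifies confidence:
#    small for inputs similar to the training data, large out of distribution.
X_test = rng.uniform(-6, 6, size=(10, 4))             # partly outside training range
point_pred = predictor.predict(X_test)
res_mean, res_std = gp.predict(np.column_stack([X_test, point_pred]), return_std=True)
corrected = point_pred + res_mean                     # RIO also corrects the point prediction
print(np.column_stack([corrected, res_std]))
```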

Other kinds of models could be used too.

ofrancon added the model label Sep 22, 2023