calibration of predicted probabilities #13

Open
tholor opened this issue Oct 31, 2014 · 0 comments
tholor commented Oct 31, 2014

Good per-subject ROC curves do not necessarily add up to a good overall ROC curve: the predicted scores of different subjects can sit on different scales, so pooling them scrambles the global ranking. Calibrating the predicted probabilities across subjects is therefore likely to result in a higher score!
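
A minimal sketch of the failure mode, assuming scikit-learn for the AUC computation (the two subjects and their scores are made up for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Two hypothetical subjects, each perfectly separable on its own,
# but with misaligned score ranges (B's scores sit above A's).
y_a, p_a = np.array([0, 0, 1, 1]), np.array([0.1, 0.2, 0.3, 0.4])
y_b, p_b = np.array([0, 0, 1, 1]), np.array([0.6, 0.7, 0.8, 0.9])

print(roc_auc_score(y_a, p_a))  # 1.0 within subject A
print(roc_auc_score(y_b, p_b))  # 1.0 within subject B

# Pooled, B's negatives (0.6, 0.7) outrank A's positives (0.3, 0.4),
# so the global AUC drops even though both subjects are perfect.
y = np.concatenate([y_a, y_b])
p = np.concatenate([p_a, p_b])
print(roc_auc_score(y, p))  # 0.75
```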

http://www.kaggle.com/c/seizure-prediction/forums/t/10383/leaderboard-metric-roc-auc
"You could plot ROC curves for each patient on your cross-validation predictions and then adjust the prediction values so that the curves 'line up' in such a way to optimise for maximum global AUC. Then use those to patch your final predictions before submission... but there's no guarantee that your predictions on the test segments will produce similar ROC curves for this to be effective. "
