Validation framework for model #17
Some measures to consider:
Guidelines from the DfT on activity- and agent-based models: TAG unit M5-4
Putting this here for now, but it could be a separate issue on calibration later:
A useful exercise could be to identify the datasets that could be used to calibrate at intermediate points of the pipeline.
Notes on tasks:
We also need to determine a set of metrics for measuring the quality of matching between the two datasets as part of task 1.
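As a minimal sketch of one such matching metric, assuming both datasets are pandas DataFrames sharing a set of categorical attributes (the column names in the usage comment are hypothetical): the Jensen-Shannon distance between the marginal distributions of each shared attribute gives a per-variable score in [0, 1].

```python
import pandas as pd
from scipy.spatial.distance import jensenshannon

def marginal_js_distance(a: pd.DataFrame, b: pd.DataFrame, column: str) -> float:
    """Jensen-Shannon distance between the marginal distributions of one
    shared categorical column (0 = identical, 1 = disjoint)."""
    categories = sorted(set(a[column]) | set(b[column]))
    p = a[column].value_counts(normalize=True).reindex(categories, fill_value=0.0)
    q = b[column].value_counts(normalize=True).reindex(categories, fill_value=0.0)
    return float(jensenshannon(p.to_numpy(), q.to_numpy(), base=2))

# Hypothetical usage over a few shared attributes:
# for col in ["age_band", "sex", "economic_activity"]:
#     print(col, marginal_js_distance(dataset_a, dataset_b, col))
```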
I completely agree with @sgreenbury, and I think calculating the metrics for comparison is easy. But after generating the metrics, I have a question: how do we judge 'goodness'? In other words, how do we set a threshold value as the acceptable standard? We do not have another candidate dataset to compare against, but perhaps we can compare with synthetic populations published in previous papers.
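One possible way to anchor a threshold, as a sketch: the standardised root mean square error (SRMSE) over a cross-tabulation is commonly reported in the synthetic population literature, so computing it here would at least allow comparison against published values. The inputs below are assumed to be two contingency tables (e.g. from pandas.crosstab) with identical shape and category ordering.

```python
import numpy as np

def srmse(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Standardised RMSE between two contingency tables over the same
    categories; lower is better, 0 is a perfect fit."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return rmse / observed.mean()  # assumes a non-empty observed table
```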
Adding a reference with validation methods from @stuartlynn.
As discussed on 20th Sep:
Aim: define metrics to be used at different parts of the modelling to validate the model against data, e.g. flows from the QUANT model. See the section in the wiki.
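As a hedged sketch of how the flow comparison against QUANT could look: the GEH statistic is a standard fit measure for flow matrices in transport modelling (by convention, for hourly flows, values under 5 indicate a good fit). The array names in the usage comment are assumptions, and the two flow sets are assumed to be in comparable units.

```python
import numpy as np

def geh(model_flow: np.ndarray, reference_flow: np.ndarray) -> np.ndarray:
    """GEH statistic per flow: sqrt(2 * (m - c)^2 / (m + c)).
    Returns 0 where both flows are zero to avoid division by zero."""
    m = np.asarray(model_flow, dtype=float)
    c = np.asarray(reference_flow, dtype=float)
    denom = m + c
    with np.errstate(divide="ignore", invalid="ignore"):
        g = np.sqrt(2.0 * (m - c) ** 2 / denom)
    return np.where(denom > 0, g, 0.0)

# Hypothetical usage against QUANT origin-destination flows:
# share_good = (geh(model_od.ravel(), quant_od.ravel()) < 5).mean()
```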