Unlike domain adaptation, domain generalization requires a model that is robust to distribution shift, since no target-domain distribution matching can be done before deployment.
Usually, hyper-parameter tuning is done on a small validation set. Since we have no access to target-domain data before the model is deployed, any validation set we construct can only come from the source domains. Does this kind of validation actually make sense?
One argument is that we also want an algorithm that performs well on the source domains, and a train-validation split over the source domains is a reasonable way to find a model that does so. But is this any better than simply using the training loss to choose hyper-parameters?
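For concreteness, here is a minimal sketch of the two selection strategies being compared. Everything in it is hypothetical: the synthetic source domains, the logistic-regression model, and the single hyper-parameter `C` are stand-ins, not any particular DG benchmark's protocol.

```python
# Sketch: hyper-parameter selection via a source-domain validation split
# vs. selection by training accuracy alone. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_domain(shift, n=500):
    # Two-class Gaussian data whose mean is shifted per domain,
    # mimicking a mild distribution shift across source domains.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Hypothetical source domains; the target domain is never seen here.
source_domains = [make_domain(shift) for shift in (0.0, 0.5, 1.0)]
X = np.vstack([Xd for Xd, _ in source_domains])
y = np.concatenate([yd for _, yd in source_domains])

# Candidate hyper-parameter: inverse regularization strength C.
candidates = [0.01, 0.1, 1.0, 10.0]

def fit(C, Xa, ya):
    return LogisticRegression(C=C, max_iter=1000).fit(Xa, ya)

# Strategy 1: hold out a validation split pooled from the source domains.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
best_by_val = max(candidates, key=lambda C: fit(C, X_tr, y_tr).score(X_val, y_val))

# Strategy 2: pick the hyper-parameter with the best training accuracy.
best_by_train = max(candidates, key=lambda C: fit(C, X, y).score(X, y))

print("chosen by source-domain validation:", best_by_val)
print("chosen by training accuracy:       ", best_by_train)
```

Selecting by training accuracy tends to favor the least regularization (largest `C`), whereas the held-out source split at least penalizes overfitting to the training sample; neither, of course, ever sees the target distribution, which is exactly the question.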