I definitely agree that the loss should be larger for the case you showed.
Would you agree that the reason why a loss of 1e-3 seems "wrong" is that the points are not well-distributed on the interval [0, 0.2]? If so, perhaps we could include an extra factor into the loss to penalize this.
As you point out, reporting an infinite loss until the boundary points are included could be the pragmatic choice, at least for the case where people are just using a Runner.
My only concern would be the case that someone starts from a bunch of data that does not necessarily include the boundaries.
It would be a little strange if your eyes are telling you that you already have a pretty good sampling, but adaptive is reporting a loss of infinity.
I agree with Joe's concern about the counter-intuitive loss behavior. We could instead add to the loss something like `interval_size / (x_max - x_min) - 1`.
prints `0.0015347736506519973`. A typical runner goal

```python
runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
```

would finish after two points. I think we should report an infinite loss until the boundary points are included.
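A minimal sketch of the infinite-loss-until-boundaries idea (the function, the `points` container, and the `base_loss` callback here are hypothetical, not adaptive's real internals):

```python
import math

def loss_or_inf(points, x_min, x_max, base_loss):
    # Report an infinite loss until both boundary points have been sampled,
    # so no loss-based goal can be satisfied before the bounds are included.
    if x_min not in points or x_max not in points:
        return math.inf
    return base_loss(points)
```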
@akhmerov, @jbweston, what do you think?