We have a use case for adaptive where the function parameter is a slowly changeable physical parameter, like the orientation angle of a polarization filter. The simple approach with the SequentialExecutor works, but spends by far most of the time re-orienting the filter due to an essentially worst-case ordering of points.
One can perhaps formulate the problem like this: there exists a maximal reasonable distance between sample points, above which the measurement time is dwarfed by some "manipulation" time.
Our current hack is to implement a BatchExecutor which starts the evaluation after the first point is requested and can re-order all points submitted so far. By setting ntasks to a reasonable number (which sadly depends on the concrete boundary conditions), the wasted time can be greatly reduced. You can find the implementation here. If there is interest I can create a pull request to include it in adaptive.
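For illustration, here is a rough sketch of what such a reordering executor could look like. This is not the linked implementation: the class name ReorderingExecutor, the 1D absolute-distance metric, and the assumption that the point is passed to submit as an explicit argument are all illustrative choices.

```python
import threading
from concurrent.futures import Executor, Future


class ReorderingExecutor(Executor):
    """Sketch: evaluate points one at a time, always picking the pending
    point closest to the previously evaluated one, to minimize the time
    spent "manipulating" the physical parameter."""

    def __init__(self):
        self._pending = []            # (fn, x, future) tuples waiting to run
        self._lock = threading.Lock()
        self._wakeup = threading.Event()
        self._last_x = None           # position of the last evaluated point
        self._shutdown = False
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, fn, x):
        # Assumes the point is handed to submit as an explicit argument,
        # so that pending points can be re-ordered by distance.
        future = Future()
        with self._lock:
            self._pending.append((fn, x, future))
        self._wakeup.set()
        return future

    def _run(self):
        while True:
            self._wakeup.wait()
            with self._lock:
                if not self._pending:
                    if self._shutdown:
                        return
                    self._wakeup.clear()
                    continue
                # Choose the pending point with the shortest travel distance
                # from the last evaluated point (here: 1D absolute distance).
                if self._last_x is None:
                    i = 0
                else:
                    i = min(range(len(self._pending)),
                            key=lambda j: abs(self._pending[j][1] - self._last_x))
                fn, x, future = self._pending.pop(i)
            self._last_x = x
            try:
                future.set_result(fn(x))
            except Exception as exc:
                future.set_exception(exc)

    def shutdown(self, wait=True):
        self._shutdown = True
        self._wakeup.set()
        if wait:
            self._worker.join()
```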
Is there a different, less hacky solution to this problem? Is there some previous discussion or some place in the documentation that I overlooked?
This is an excellent question, but also extremely tough!
First of all, I am not sure what a good algorithm or even a clear problem setting might be. Depending on how costly it is to vary the parameters, and the dimensionality of the parameter space, the optimal strategy may vary dramatically.
From the API standpoint, I believe that planning the queries should be the responsibility of the Learner and not the Executor, so that it can plan ahead. A possible implementation would keep track of the last supplied point, and reweigh the losses based on distance from that. A batched strategy with pruning is another similar option.
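To make the reweighing idea concrete, here is a rough sketch of a Learner-side approach for the 1D case. It is only an illustration: the class name TravelAwareLearner1D, the travel_penalty parameter, the 4x oversampling of candidates, and the penalty formula are all assumptions, not an existing adaptive API.

```python
import adaptive
import numpy as np


class TravelAwareLearner1D(adaptive.Learner1D):
    """Sketch: prefer candidate points close to the last point handed out,
    by penalizing each loss improvement with the travel distance."""

    def __init__(self, function, bounds, loss_per_interval=None, travel_penalty=1.0):
        super().__init__(function, bounds, loss_per_interval)
        self.travel_penalty = travel_penalty
        self._last_point = None

    def ask(self, n, tell_pending=True):
        # Ask the parent for a surplus of candidates without committing to
        # them, then keep the n candidates with the best penalized score.
        candidates, improvements = super().ask(4 * n, tell_pending=False)
        ref = self._last_point if self._last_point is not None else candidates[0]
        scores = [imp / (1 + self.travel_penalty * abs(x - ref))
                  for x, imp in zip(candidates, improvements)]
        order = np.argsort(scores)[::-1][:n]
        points = [candidates[i] for i in order]
        improvements = [improvements[i] for i in order]
        if tell_pending:
            for x in points:
                self.tell_pending(x)
        self._last_point = points[-1]
        return points, improvements
```

The same idea should carry over to higher dimensions by replacing abs(x - ref) with a suitable distance in parameter space, although choosing the penalty there is exactly the hard part discussed above.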
I agree that a generally optimal solution is very hard to find if you want to include a "manipulation cost" in the learner's strategy, especially if the dimensionality is high.
An easy and clean way to solve this problem might be to keep the original strategy but use the "free" function evaluations that are available along the path between the requested evaluation points. As far as I understand, one can already use Learner.tell for non-requested points. We decided not to go down this path because we wanted the Runner's convenience functionality, like plotting.
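As a rough sketch of that route with a hand-rolled loop instead of a Runner (the measure function, the free_step spacing, and the fixed number of iterations are placeholders for the actual experiment):

```python
import numpy as np
import adaptive


def measure(angle):
    """Stand-in for the real measurement at a given polarizer angle."""
    return np.cos(np.radians(angle)) ** 2


learner = adaptive.Learner1D(measure, bounds=(0, 180))

current_angle = 0.0
free_step = 5.0  # hypothetical angular spacing of the "free" measurements

# Move to each requested angle and also record measurements at the
# intermediate angles the filter passes through on the way.
for _ in range(20):
    (target,), _ = learner.ask(1)
    if target != current_angle:
        direction = np.sign(target - current_angle)
        for angle in np.arange(current_angle, target, direction * free_step):
            learner.tell(angle, measure(angle))   # non-requested "free" point
    learner.tell(target, measure(target))         # the requested point
    current_angle = target
```

The downside is exactly the one mentioned above: a custom loop forgoes the Runner's conveniences such as live plotting and goal handling.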
I agree that the proper place is not the Executor, but maybe this can best be integrated into the Runner. In the end it is a known property of how the function is evaluated. I think the relevant properties of function evaluation are:
Can the function be evaluated concurrently?
Does the evaluation cost depend on the point itself?
If the function cannot be evaluated concurrently: does the function evaluation cost depend on the previously evaluated point?