Issue with tolerance for floating point and its relevance when using log_scale = True #2183
Hi! I see how it seems imperfect that the validation happens on the parameters in their original scale rather than the log scale, since all the modeling happens in log space. However, we have this error because some operations act on the parameters in their original scale. For example, Ax may suggest a candidate that, when serialized or saved, is converted from double to single precision and back; on such a small scale, that precision loss can push the candidate outside the bounds. As a workaround, would you be able to log the parameters and bounds yourself before passing them to Ax, and not use the log transform?
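A minimal sketch of that workaround, assuming the Service API and hypothetical names (`p1`-`p3`, `loss`, `my_physics_model` are all illustrative, not from this thread): search directly over log10 values and only exponentiate inside your own evaluation function.

```python
from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="log_space_workaround",
    parameters=[
        # Search over log10(value), so bounds are [-19, -14] instead of
        # [1e-19, 1e-14]; Ax's own log_scale option is deliberately not used.
        {"name": "p1", "type": "range", "bounds": [-19.0, -14.0]},
        {"name": "p2", "type": "range", "bounds": [-19.0, -14.0]},
        {"name": "p3", "type": "range", "bounds": [-19.0, -14.0]},
    ],
    objectives={"loss": ObjectiveProperties(minimize=True)},
)

def evaluate(params):
    # Convert back to physical values only inside the objective.
    p1, p2, p3 = (10 ** params[k] for k in ("p1", "p2", "p3"))
    return {"loss": my_physics_model(p1, p2, p3)}  # my_physics_model is a stand-in
```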
No, because of the constraint that I need to use. It's a physics problem, so I cannot really get away from it:

10**params3 <= A * (10**params1 + 10**params2)

which, unless I am wrong, does not work with log-transformed parameters, right? (In log space it becomes params3 <= log10(A) + log10(10**params1 + 10**params2), which is not linear in the parameters.)
Oh yes, you're right: nonlinear parameter constraints aren't supported. There's more discussion of this in #153 and several other issues. Perhaps you could multiply the parameters by large constants before passing them to Ax? Regarding operations on parameters in their original scale: it's true that modeling only happens in log space. The concern is Ax operations that aren't modeling, which often happen on the original scale. For example, serialization through Pandas in Ax can force float64 data down to float32.
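A minimal sketch of the multiply-by-a-constant suggestion, with an illustrative scale factor and a made-up value A = 2.0 for the physics constant. Because the constraint is linear in the raw values (p3 <= A * (p1 + p2)), multiplying every parameter by the same constant preserves it:

```python
from ax.service.ax_client import AxClient, ObjectiveProperties

SCALE = 1e15  # illustrative: maps the 1e-19..1e-14 range to 1e-4..10

ax_client = AxClient()
ax_client.create_experiment(
    name="scaled_params",
    parameters=[
        {"name": "p1", "type": "range", "bounds": [1e-19 * SCALE, 1e-14 * SCALE]},
        {"name": "p2", "type": "range", "bounds": [1e-19 * SCALE, 1e-14 * SCALE]},
        {"name": "p3", "type": "range", "bounds": [1e-19 * SCALE, 1e-14 * SCALE]},
    ],
    # p3 <= A * (p1 + p2) with A = 2.0, rearranged into Ax's linear string form.
    parameter_constraints=["p3 - 2.0*p1 - 2.0*p2 <= 0"],
    objectives={"loss": ObjectiveProperties(minimize=True)},
)

def evaluate(params):
    # Divide the scale back out before running the physics model.
    p1, p2, p3 = (params[k] / SCALE for k in ("p1", "p2", "p3"))
    return {"loss": my_physics_model(p1, p2, p3)}  # my_physics_model is a stand-in
```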
I thought about the multiplication by a large constant. I did not mention it as I was hoping for a more elegant, less 'hacky' solution. |
Another solution would be to use nonlinear inequality constraints in BoTorch directly, without Ax. See here for how to set them up, and the documentation at botorch.org for using BoTorch generally. That would be considerably more effort than the hacky "multiply by a constant" solution, though. I'm going to close this since we have other issues discussing nonlinear inequality constraints in Ax (or rather, their absence), but please feel free to reopen with any additional questions.
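A rough sketch of that route, with the search done in log10 space so the nonlinear constraint can be evaluated directly. Caveats: the constraint API has changed across BoTorch versions (recent releases expect (callable, is_intrapoint) tuples; older ones took bare callables), nonlinear constraints require explicit feasible batch_initial_conditions, and the model, data, and A = 2.0 below are all made up for illustration:

```python
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

A = 2.0  # stand-in for the physics constant

def feasibility(x):
    # >= 0 exactly when 10**p3 <= A * (10**p1 + 10**p2);
    # x holds (p1, p2, p3) in log10 space.
    return A * (10.0 ** x[..., 0] + 10.0 ** x[..., 1]) - 10.0 ** x[..., 2]

bounds = torch.tensor([[-19.0] * 3, [-14.0] * 3], dtype=torch.double)

# Toy observations, just so there is a fitted model to optimize.
train_X = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(8, 3, dtype=torch.double)
train_Y = -((train_X + 16.5) ** 2).sum(dim=-1, keepdim=True)
gp = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
acqf = ExpectedImprovement(gp, best_f=train_Y.max())

# Feasible starting points (p1 == p2 == p3 always satisfies the constraint here),
# shaped num_restarts x q x d.
ic = torch.linspace(-18.5, -14.5, 5, dtype=torch.double).reshape(5, 1, 1).repeat(1, 1, 3)

candidate, _ = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=5,
    nonlinear_inequality_constraints=[(feasibility, True)],
    batch_initial_conditions=ic,
)
```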
Since nothing more is likely to happen here, I'm closing this issue.
Just for completeness: the trick suggested above by @esantorella works just fine.
Hi,
I was playing around today with some parameters with very low values, typically ranging from 1e-19 to 1e-14.
I used to always give Ax the log-transformed values, so there were no issues, but now I have a slightly different problem that forces me to add a constraint, and I cannot use the log-transformed values anymore. When passing the true values to Ax, it raises the following error:
```
UserInputError: Parameter range (9.9999e-15) is very small and likely to cause numerical errors. Consider reparameterizing your problem by scaling the parameter.
```

I understand where that comes from, but I also find it very restrictive, especially since I would prefer to use the log of these values.
To illustrate, I wrote a small example of what my code would look like (see the sketch below).
Because of the second constraint, I cannot just give the log values to Ax.
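A hedged reconstruction of such a setup, with illustrative names, bounds, and a made-up constant A = 2.0 (and, if I recall correctly, Ax also disallows parameter constraints on log_scale parameters, so log_scale is omitted here). Creating the experiment with the true values fails with the UserInputError quoted above:

```python
from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="tiny_ranges",
    parameters=[
        # Raw values: this range is what triggers the small-range check.
        {"name": "p1", "type": "range", "bounds": [1e-19, 1e-14]},
        {"name": "p2", "type": "range", "bounds": [1e-19, 1e-14]},
        {"name": "p3", "type": "range", "bounds": [1e-19, 1e-14]},
    ],
    # In raw values the physics constraint 10**params3 <= A*(10**params1 + 10**params2)
    # (stated above for log-transformed parameters) is linear: p3 <= A * (p1 + p2).
    parameter_constraints=["p3 - 2.0*p1 - 2.0*p2 <= 0"],
    objectives={"loss": ObjectiveProperties(minimize=True)},
)
```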
So my suggestion is the following: when log_scale is used, can't we just give the log values to the surrogate and only de-log them when printing the results of the optimization and when evaluating the constraint?
Wouldn't it also make more sense that, when the log_scale option is used, the surrogate is actually trained on the log values?
Thanks for your help,
Vincent