🚀 Feature Request
Derivative GP components such as ConstantMeanGrad and RBFKernelGrad differentiate with respect to all inputs and concatenate those derivatives to the output. I propose that you should be able to choose which dimensions are differentiated.
Here's an example.
Take a GP that maps inputs (x, y, a, b) to outputs (f, dfdx, dfdy), where f is some function output and dfd* is the derivative of the output w.r.t. that input. Currently, you can solve problems like this, but derivatives with respect to all inputs are included in the output, meaning the output would be (f, dfdx, dfdy, dfda, dfdb), where we know nothing about (dfda, dfdb).
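For reference, here is a minimal sketch of what this looks like today, loosely following the usual GPyTorch derivative-GP pattern (the class name and toy data are mine, not part of this issue): with four inputs, ConstantMeanGrad and RBFKernelGrad force a five-task output (f, dfdx, dfdy, dfda, dfdb), so the likelihood and the training targets must carry columns for dfda and dfdb even when we know nothing about them.

```python
import torch
import gpytorch

# Sketch of the CURRENT behaviour: 4 inputs (x, y, a, b) force a
# 5-column output (f, dfdx, dfdy, dfda, dfdb).
class GPWithDerivatives(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMeanGrad()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernelGrad(ard_num_dims=4)
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)

n = 20
train_x = torch.rand(n, 4)   # inputs (x, y, a, b)
train_y = torch.zeros(n, 5)  # targets (f, dfdx, dfdy, dfda, dfdb)
# ...only the first three target columns are actually known;
# the dfda and dfdb columns are padding we are forced to supply.

likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=5)
model = GPWithDerivatives(train_x, train_y, likelihood)
```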
Motivation
This aims to clean up and simplify problems like the one specified above. One could set all ground-truth values for dfda and dfdb to NaN, but I hope the reader agrees that is a poor solution.
Here is a more concrete example. Let's say we are modelling some kind of flow over a 2D field: for example, temperature as f, where we are interested in the flux (dfdx, dfdy). At each input point (x, y) we also have additional information, perhaps material properties (a, b) at that point. What we aim to model is the function (x, y, a, b) => (f, dfdx, dfdy); the information (dfda, dfdb) just doesn't make much sense in this context and we aren't interested in it, so I don't want it in the output.
Pitch
I propose that we add a dims argument to all kernels and means with grad in their name, using an approach similar to torch.diff. Selecting an axis (or axes) in dims adds it to the list of inputs to be differentiated against (you could also add a special case for all dimensions, which would be the default behaviour, perhaps 'all'):
ConstantMeanGrad(dims='all') will act as the class currently does.
ConstantMeanGrad(dims=()) will not perform differentiation and will be identical to ConstantMean.
ConstantMeanGrad(dims=(0, 1)) will act as in the example provided above.
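A rough sketch of how the proposed interface might read; note that the dims argument does not exist in GPyTorch today, so everything below is the proposal, not current behaviour:

```python
# Proposed interface only -- `dims` does not exist in GPyTorch today.
import gpytorch

# Current behaviour: differentiate w.r.t. every input dimension.
mean_all = gpytorch.means.ConstantMeanGrad(dims='all')

# No differentiation at all; equivalent to plain ConstantMean.
mean_none = gpytorch.means.ConstantMeanGrad(dims=())

# Only differentiate w.r.t. inputs 0 and 1 (x and y in the example above),
# so the output is (f, dfdx, dfdy) even though the model sees (x, y, a, b).
mean_xy = gpytorch.means.ConstantMeanGrad(dims=(0, 1))
kernel_xy = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.RBFKernelGrad(ard_num_dims=4, dims=(0, 1))
)
```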
Describe the solution you'd like
I would like to have the code in these classes updated, or even a base class for *grad mean and kernel modules for others to inherit from.
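As a very rough, hypothetical sketch of what such a shared base could look like (the name and structure here are mine, only to make the shape of the idea concrete, not existing GPyTorch code):

```python
# Hypothetical sketch only -- not existing GPyTorch code.
class GradModuleMixin:
    """Shared handling of the proposed `dims` argument for *Grad means and kernels."""

    def __init__(self, dims='all', **kwargs):
        super().__init__(**kwargs)
        self.dims = dims

    def grad_dims(self, num_input_dims):
        # Resolve 'all' / () / a tuple of indices into the input dimensions
        # that the concrete mean or kernel should differentiate against.
        if self.dims == 'all':
            return tuple(range(num_input_dims))
        return tuple(self.dims)
```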
Describe alternatives you've considered
Perhaps we could publish a guide or something describing a workaround.
Are you willing to open a pull request? (We LOVE contributions!!!)
Yes!! But I am short on time and my understanding of GPyTorch is still developing.
Additional context