
{Question} How to compute the posterior covariance matrix on the training data? #2532

Open
r-ashwin opened this issue Jun 16, 2024 · 2 comments

Comments

@r-ashwin

Let's say I have a GP with n training points. How do I compute the n×n covariance matrix on the training data under the posterior GP?

import torch
from botorch.models import FixedNoiseGP

def fit_full_model(train_X, train_Y):
    # small fixed observation noise for near-noiseless data
    train_Yvar = torch.ones_like(train_Y).reshape(-1, 1) * 1e-4
    fullmodel = FixedNoiseGP(train_X, train_Y.reshape(-1, 1), train_Yvar)
    return fullmodel

train_X = torch.linspace(0, 1, 100).reshape(-1, 1)
train_Y = torch.sin(train_X).reshape(-1, 1)
model = fit_full_model(train_X, train_Y)
model.eval()

I believe the covariance matrix is stored as a lazy tensor and never actually evaluated, but I do need access to it for a specific application.

@m-julian
Contributor

m-julian commented Jul 5, 2024

The FixedNoiseGP class has a covar_module attribute that is an instance of a kernel (something like RBFKernel, or ScaleKernel(RBFKernel), or similar; see the example at https://docs.gpytorch.ai/en/stable/examples/01_Exact_GPs/Simple_GP_Regression.html). You can then call fullmodel.covar_module(train_X).to_dense(). The to_dense() call evaluates the lazily-represented kernel matrix and returns a torch.Tensor.

@kayween
Collaborator

kayween commented Jul 12, 2024

Assuming the model is in evaluation mode, predict_dist = model(test_x) yields the predictive distribution, which is a multivariate normal. The posterior covariance matrix is then

predict_dist.covariance_matrix.to_dense()
