GP training with derivatives at different input points than the function observations #1507
Unanswered
ankushaggarwal asked this question in Q&A
Replies: 2 comments 7 replies
-
Can you describe the problem in more detail? I'm a bit confused by the setup.
-
You could get a really simple implementation using something like this:

```python
# x = [x1, x2]
# We have function observations at x1 (size n1 x d)
# We have derivative observations at x2 (size n2 x d)
# Assume self.covar_module = gpytorch.kernels.RBFKernelGrad()
full_kernel = self.covar_module(x)
# full_kernel is ((n1 + n2) * (d + 1)) x ((n1 + n2) * (d + 1)); its entries are
# interleaved per point (function value first, then the d derivatives). For d = 1:
index = torch.cat([
    2 * torch.arange(n1, device=x.device),               # function covar entries for x1
    2 * torch.arange(n2, device=x.device) + 2 * n1 + 1,  # derivative covar entries for x2
])
return full_kernel[..., index, :][..., :, index]
```
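To make that concrete, here is a minimal self-contained sketch of how the sub-indexing could sit inside an ExactGP model. It assumes 1-dimensional inputs (d = 1), training targets stacked as torch.cat([f(x1), f'(x2)]), a single shared noise level for both observation types, and that RBFKernelGrad interleaves its output per point; the class name and the toy data are just illustrative.

```python
import torch
import gpytorch


class GPWithSeparateDerivObservations(gpytorch.models.ExactGP):
    """Sketch: function values observed at x1, derivatives observed at x2 (d = 1 only)."""

    def __init__(self, x1, x2, train_y, likelihood):
        # train_y = torch.cat([f(x1), f'(x2)]) has length n1 + n2
        train_x = torch.cat([x1, x2], dim=-2)
        super().__init__(train_x, train_y, likelihood)
        self.n1, self.n2 = x1.shape[-2], x2.shape[-2]
        self.mean_module = gpytorch.means.ConstantMeanGrad()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad())

    def forward(self, x):
        # During training x = [x1; x2]; at prediction time ExactGP appends the test
        # points, for which only the function-value entries are kept.
        n1, n2 = self.n1, self.n2
        n_extra = x.shape[-2] - n1 - n2
        full_mean = self.mean_module(x)    # n x 2: [mean, d mean / dx] per point
        full_covar = self.covar_module(x)  # 2n x 2n, interleaved per point
        index = torch.cat([
            2 * torch.arange(n1, device=x.device),                       # f entries at x1
            2 * torch.arange(n2, device=x.device) + 2 * n1 + 1,          # f' entries at x2
            2 * torch.arange(n_extra, device=x.device) + 2 * (n1 + n2),  # f entries at test points
        ])
        mean = full_mean.reshape(*full_mean.shape[:-2], -1)[..., index]
        covar = full_covar[..., index, :][..., :, index]
        return gpytorch.distributions.MultivariateNormal(mean, covar)


# Illustrative usage with made-up data: f(x) = sin(2x), so f'(x) = 2 cos(2x)
x1 = torch.linspace(0, 1, 20).unsqueeze(-1)   # inputs with function observations
x2 = torch.linspace(0, 1, 7).unsqueeze(-1)    # inputs with derivative observations
train_y = torch.cat([torch.sin(2 * x1.squeeze(-1)), 2 * torch.cos(2 * x2.squeeze(-1))])
likelihood = gpytorch.likelihoods.GaussianLikelihood()  # one shared noise for both observation types
model = GPWithSeparateDerivObservations(x1, x2, train_y, likelihood)
```

At prediction time, ExactGP concatenates the test inputs after the training inputs, which is why forward keeps a function-value entry for every point beyond the first n1 + n2.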
-
I am trying to create a GP that is trained on both function values and derivatives, but at different input points. You have already shared an example of GP regression with derivatives (https://docs.gpytorch.ai/en/latest/examples/08_Advanced_Usage/Simple_GP_Regression_Derivative_Information_1d.html); however, as far as I understand, it assumes that the function values and derivative observations are provided at the same input points.
Is there a way to relax this assumption? I suspect it might be similar to Hadamard Multitask GP Regression. Do I need to implement a new kernel, or can I use the existing ones?
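For reference, the shared-input-points setup from that tutorial looks roughly like this (a condensed sketch with illustrative data; both f(x) and f'(x) are observed at the same train_x):

```python
import math
import torch
import gpytorch

# Both the function values and the derivatives are observed at the same train_x.
train_x = torch.linspace(0, 1, 50).unsqueeze(-1)
train_y = torch.stack([
    torch.sin(2 * math.pi * train_x.squeeze(-1)),                # f(x)
    2 * math.pi * torch.cos(2 * math.pi * train_x.squeeze(-1)),  # f'(x)
], dim=-1)  # shape n x 2


class GPModelWithDerivatives(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMeanGrad()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)


likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=2)  # value + derivative
model = GPModelWithDerivatives(train_x, train_y, likelihood)
```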