[Question] MultiOutput Kernels that are not of the MultiTask-form #1387
That should certainly be possible. Do you have some more specifics on the Kernel you're trying to implement?
Sure! I'm trying to implement the gradient of the Legendre kernel. I'm not sure which information you need... For two points y, y' on the sphere, set x = |y| |y'| / R^2 and t = y^T y' / R^2, with some reference radius R. The kernel is defined via the generating function of the Legendre polynomials. The gradient w.r.t. y, y' then defines a Multivariate/MultiOutput kernel on the sphere. Let me know what else you need to know. I guess my question could also be abstracted from this specific kernel to: "I have a model with n input and m output dimensions and a kernel K: R^n x R^n -> R^{m x m}. How can I implement this in GPyTorch?" Thank you for your time!
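(For reference, a sketch of the scalar kernel this presumably describes, assuming the standard Legendre generating function and the definitions above; the exact parametrization is a guess, not confirmed in the thread:)

```latex
% Generating function of the Legendre polynomials, with u = \cos\theta = \frac{y^\top y'}{|y|\,|y'|}:
\sum_{l=0}^{\infty} P_l(u)\, x^l \;=\; \frac{1}{\sqrt{1 - 2xu + x^2}}
\qquad\Longrightarrow\qquad
k(y, y') \;=\; \sum_{l=0}^{\infty} \Big(\tfrac{|y|\,|y'|}{R^2}\Big)^{\!l} P_l(\cos\theta)
         \;=\; \frac{1}{\sqrt{1 - 2t + x^2}}
```

so that the multi-output kernel would be the cross-gradient K(y, y') = \nabla_y \nabla_{y'} k(y, y'), a 3x3 matrix for points on the sphere in R^3.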
If your kernel K maps R^n x R^n -> R^{m x m}, you could implement it along these lines:

```python
class LegendreKernel(gpytorch.kernels.Kernel):
    def __init__(self, m, ...):
        super().__init__()  # Kernel subclasses need to initialize the base class
        self.m = m
        # ...

    def forward(self, x1, x2, **params):
        # ...
        return tensor  # size (n1 x m) x (n2 x m)

    def num_outputs_per_input(self, x1, x2):
        return x1.size(-2) * x2.size(-2) * self.m
```

Then use this kernel in conjunction with a ...
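(The end of that suggestion is cut off above. Below is a minimal sketch, not from the thread, of how such a kernel might be wired into a model, assuming the same pattern as GPyTorch's multitask and derivative-kernel examples; the model class name is made up, and `LegendreKernel` refers to the snippet above:)

```python
import gpytorch


class MultiOutputGPModel(gpytorch.models.ExactGP):
    """Hypothetical exact GP whose kernel returns an (n*m) x (n*m) covariance,
    m outputs per input point."""

    def __init__(self, train_x, train_y, likelihood, m):
        super().__init__(train_x, train_y, likelihood)
        # One mean per output dimension, shape (n, m)
        self.mean_module = gpytorch.means.MultitaskMean(
            gpytorch.means.ZeroMean(), num_tasks=m
        )
        self.covar_module = LegendreKernel(m)  # the custom kernel sketched above

    def forward(self, x):
        mean_x = self.mean_module(x)    # (n, m)
        covar_x = self.covar_module(x)  # (n*m, n*m)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)


# e.g. m = 3 outputs per input for a 3D vector field:
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=3)
# model = MultiOutputGPModel(train_x, train_y, likelihood, m=3)
```

Note that `MultitaskMultivariateNormal` interprets the covariance in task-interleaved order by default (point 0/task 0, point 0/task 1, ..., point 1/task 0, ...), so the kernel's `forward` has to produce that layout, which is probably the bookkeeping the later comments hint at.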
Thank you for the reply! I will try to implement it this way in the new year, once my vacation is over. All the best to you! Cheers,
Happy new year everyone! I got it to work, but had to change the suggested implementation of `num_outputs_per_input`. Additionally, when I define ... I get

```python
>>> model.covar_module.num_outputs_per_input(train_x, train_x)
2
```

Is this expected? I have another question, please let me know if I should open a new issue for this one: ...

Thank you for your support and the great work with GPyTorch, cheers,
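(The name of the method that had to be changed is cut off above; judging from the skeleton posted further down, which returns a fixed 3, the corrected version presumably just returns the number of output dimensions per input point, along these lines:)

```python
def num_outputs_per_input(self, x1, x2):
    # Number of output dimensions produced per input point,
    # not the total size of the covariance matrix.
    return self.m
```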
Oops, sorry @arthus701 - you're totally right.
Check out the "scalar function with multiple tasks" tutorial.
Thanks again! That's all for now, feel free to close the issue. About the docs: Do you agree that they would benefit from a section about implementing a custom kernel? I think the general information is there, although a bit scattered. As I said, I could prepare a notebook with a brief example, providing more insight than the kernel page in the docs. Just let me know, I would be happy to contribute!
@arthus701 we would definitely appreciate a tutorial on implementing custom kernels! Please feel free to open a PR - and I can help out with it :)
Could you share the code of the forward function? I have a similar problem. Thank you!
@linkzhao I'm sorry, but in the meantime the kernel became a bit too complex to share here. I do not have a minimal working example ready, but my skeleton for a 3D-kernel is like this:

```python
class NewKernel(gpytorch.kernels.Kernel):
    def num_outputs_per_input(self, x1, x2):
        return 3

    def forward(self, x, y, **kwargs):
        # do some stuff and return a tensor of size 3*n x 3*m,
        # where n = x.size(0) and m = y.size(0)
        ...
```

I think you have to take care with ...
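(The sentence above is cut off; presumably the thing to take care with is the ordering of the 3*n x 3*m entries. Purely as an illustration, here is a hedged sketch of a `forward` that produces the task-interleaved layout `MultitaskMultivariateNormal` expects by default; the RBF-times-fixed-matrix structure is only a stand-in for whatever the real kernel computes, and batch dimensions are ignored:)

```python
import torch
import gpytorch


class ToyMultiOutputKernel(gpytorch.kernels.Kernel):
    """Illustrative 3-output kernel: a scalar RBF times a fixed 3x3 task matrix.
    Only meant to show the expected output shape and interleaved ordering."""

    def __init__(self, task_covar, **kwargs):
        super().__init__(**kwargs)
        self.base = gpytorch.kernels.RBFKernel()
        # Fixed, known 3x3 correlation between the output dimensions
        self.register_buffer("task_covar", task_covar)

    def num_outputs_per_input(self, x1, x2):
        return 3

    def forward(self, x1, x2, **params):
        # Scalar covariance between the n points in x1 and the m points in x2
        k = self.base(x1, x2).to_dense()  # (n, m); .evaluate() in older GPyTorch
        # The Kronecker product gives the interleaved layout:
        # row index = 3 * point_index + task_index
        return torch.kron(k, self.task_covar)  # (3*n, 3*m)


# Example: uncorrelated output dimensions
kernel = ToyMultiOutputKernel(task_covar=torch.eye(3))
```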
Thank you for your reply!
Yes, I'm pretty sure that's the same. In fact, when looking at the example by @gpleiss, I think you can derive the correct version for batches, i.e. ...
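(That sentence is also cut off. For what it's worth, a guess at the batch-aware shape bookkeeping, combining the `x1.size(-2)` indexing from the earlier suggestion with the skeleton above; none of this is confirmed in the thread:)

```python
def forward(self, x1, x2, **params):
    # x1: (..., n, d), x2: (..., m, d) with arbitrary leading batch dimensions.
    # The returned covariance should then have shape (..., 3 * n, 3 * m), i.e.
    # only the last two dimensions grow by the number of outputs per input,
    # so index the point counts with x1.size(-2) / x2.size(-2)
    # rather than x1.size(0) / x2.size(0).
    ...
```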
Hi,
I am really amazed by the BBMM approach to GP inference and would love to use GPyTorch. However, in my use case (Geosciences) we deal with MultiInput-MultiOutput GPs. Think of a vector field defined on the Earth's surface. Additionally, we have good physical motivation to implement a certain MultiOutput-Kernel, which is not of the MultiTask form described in the docs. We already know the correlation of the different output dimensions.
Is it possible to implement such a model (with fixed and known input and output dimensions) in GPyTorch?
If so, I think the documentation would really benefit from a section describing how to translate an already existing, custom model to GPyTorch. Once I have implemented my use case, I could start working on this and open a PR later...
Thank you so much,
Arthus