Where are the loss functions/Linear Algebra Routines actually implemented? #2215
-
Hi all, I'm trying to use some of the excellent GPyTorch linear algebra backend and loss function implementations as a starting point for scaling some new model ideas. However, I'm having trouble finding the files where the losses and linear algebra steps are actually calculated. Could someone give some pointers on where to look?
I apologize if the question seems silly, as I do not have a strong software engineering background. Thanks!
-
Hi, thanks for your interest!

There is some more (ideally we could expand on this with more time) explanation of this in the `linear_operator` library. `torch.solve(K, y)` will automatically dispatch to the `solve` method that is implemented on the particular `LinearOperator` class of the tensor `K` (https://github.com/cornellius-gp/linear_operator/blob/main/linear_operator/operators/_linear_operator.py#L2162 registers the function on the base class). So if there is a simplification (e.g. the `LinearOperator` is diagonal), then this may not use linear CG but some other method.
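For illustration, here is a minimal sketch of that dispatch behaviour, assuming the public `linear_operator` API (`DiagLinearOperator`, `to_dense`) and using `torch.linalg.solve` as the solve entry point (the current PyTorch name for the solve routine); the sizes and variable names are just for the example:

```python
import torch
from linear_operator.operators import DiagLinearOperator

# Illustrative diagonal "kernel" matrix wrapped as a LinearOperator;
# any positive diagonal would do.
diag = torch.rand(1000) + 1.0
K = DiagLinearOperator(diag)
y = torch.randn(1000)

# torch.linalg.solve dispatches to DiagLinearOperator's own solve, which
# reduces to element-wise division by the diagonal, so no conjugate
# gradient iterations are needed here.
x = torch.linalg.solve(K, y)

# Sanity check against a dense solve on the materialized matrix.
x_dense = torch.linalg.solve(K.to_dense(), y)
print(torch.allclose(x, x_dense, atol=1e-5))
```

With an unstructured operator, the same call would instead fall through to the base-class solve path (iterative solves, subject to the library's settings), which is the behaviour described above.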