-
Hmm, I didn't know that, but it looks like Apple GPUs just don't support 64-bit floating-point computation. That's a bit of a bummer, because for GPs, where kernel matrices are often poorly conditioned, this precision can be important. I haven't looked at your particular problem, but as a general workaround one could try to offload the computations where this precision matters to the CPU and leave most other computations on the GPU. I assume this might actually not be too bad on Apple silicon, since the CPU and GPU share the same "unified" memory, so there shouldn't be much overhead from moving things between memory spaces.
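To make that concrete, here is a minimal sketch in plain PyTorch (not GPyTorch internals; the kernel-like matrix and the Cholesky step are just illustrative stand-ins for whatever computation needs the extra precision): keep the bulk of the arithmetic on `mps` in float32, and hop over to the CPU in float64 only for the numerically sensitive factorization.

```python
import torch

device = torch.device("mps")

# Bulk of the computation stays on the GPU in float32.
X = torch.randn(500, 3, device=device)
K = X @ X.T  # illustrative kernel-like matrix (rank-deficient on purpose)

# Offload the numerically sensitive step to the CPU in float64.
K64 = K.to(device="cpu", dtype=torch.float64)
K64 = K64 + 1e-6 * torch.eye(K64.shape[0], dtype=torch.float64)  # jitter for positive definiteness
L = torch.linalg.cholesky(K64)

# Cast back down and return to the GPU for the rest of the pipeline.
L = L.to(device=device, dtype=torch.float32)
```

Since both devices address the same unified memory, the `.to("cpu")` / `.to("mps")` round trip should be comparatively cheap, though the dtype conversion itself still has some cost.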
-
Hello everyone!
I'm coming to you because I'm having a bit of trouble implementing variational GPs.
I have a MacBook Pro with an M1 Pro chip and its built-in GPU, and I want to run the computations on it. So I replace the .cuda() calls with .to(device='mps') after first casting the data to float32.
I get the following error:
which I think is related to the implementation of variational_strategy.py, where float64 is forced for numerical-stability reasons?
I'm following the SVGP example from the docs exactly; here's a snippet of the code (the toy data below stands in for my real dataset, the model itself follows the docs' SVGP example):
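```python
import torch
import gpytorch


class GPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)


device = torch.device("mps")

# Toy data standing in for my real dataset
train_x = torch.linspace(0, 1, 1000).unsqueeze(-1)
train_y = torch.sin(train_x * 6.0).squeeze(-1) + 0.1 * torch.randn(1000)

# Cast to float32 first, then use .to(device='mps') where the tutorial uses .cuda()
train_x = train_x.to(dtype=torch.float32, device=device)
train_y = train_y.to(dtype=torch.float32, device=device)

inducing_points = train_x[:100, :]
model = GPModel(inducing_points=inducing_points).to(device)
likelihood = gpytorch.likelihoods.GaussianLikelihood().to(device)

mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01
)

model.train()
likelihood.train()
for _ in range(100):
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward()
    optimizer.step()
```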
Is my approach wrong, or is it simply not possible at the moment to do the training on the Apple Silicon GPU?
To the best of my knowledge, there is no existing topic on this GitHub dealing with a similar problem. If this has already been answered and I missed it, I apologise!
Details:
Thanks and have a nice week!