Hi, and thanks for a great library! We're trying to use GP regression as the prediction model in a model predictive control system. Essentially, we're trying to predict the indoor temperature in rooms when changing the water temperature in a radiator circuit. Since we need to predict the temperature for each room, we end up looping through the models, which takes 7-10 seconds in total. Not an awful lot, but it can quickly become a bottleneck in our optimization problem, which will need to test a potentially large number of combinations. So I am wondering if anyone has had any luck with prediction using several individual GPyTorch models in parallel?

I've tried using the ModelList, but this was actually a few seconds slower than just looping. I also looked into vmap (https://pytorch.org/functorch/nightly/notebooks/ensembling.html), but I don't know if there is anything specific to the GPyTorch implementation that fails with this approach, as I get the following exception:

We have also tried making one model for all rooms, where we just added a column which encodes the room name as an integer and then added an index kernel to our model. Then we could simply run one prediction on one model with a dataset covering all the rooms. But we haven't gotten nearly the same accuracy with such a model as we get with individual models. Since we're using an index kernel we can't use "learn_inducing_locations", so this might hamper the training of such a model.
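For reference, here is a minimal sketch of the single-model-with-index-kernel setup described above, assuming a Hadamard-style GPyTorch `ExactGP` where each observation carries an integer room ID (the class and variable names are illustrative, not our actual code):

```python
import torch
import gpytorch

class RoomIndexGP(gpytorch.models.ExactGP):
    """One GP over all rooms; an IndexKernel correlates the per-room outputs."""

    def __init__(self, train_x, train_i, train_y, likelihood, num_rooms):
        # train_x: (n, d) features; train_i: (n, 1) integer room IDs
        super().__init__((train_x, train_i), train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
        self.task_covar_module = gpytorch.kernels.IndexKernel(num_tasks=num_rooms, rank=1)

    def forward(self, x, i):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)        # covariance over the input features
        covar_i = self.task_covar_module(i)   # covariance between room indices
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x.mul(covar_i))
```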
Are you training the models online, or are you OK with just using a fixed model? If you're OK with a fixed model, you may want to look into compiling the model, similar to what you'd do with a NN for deployment. That may already provide some nice speedups. There is an example using `torch.jit` here: https://github.com/cornellius-gp/gpytorch/blob/master/examples/08_Advanced_Usage/TorchScript_Exact_Models.ipynb. It would also be interesting to try out the new `torch.compile` functionality available in the nightlies for PyTorch 2.0: https://pytorch.org/get-started/pytorch-2.0/
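As a rough sketch of what the tracing in that notebook looks like (it assumes a trained exact GP `model` and a representative `test_x` tensor; the wrapper name follows the notebook):

```python
import torch
import gpytorch

class MeanVarModelWrapper(torch.nn.Module):
    """Wrap the GP so torch.jit.trace sees plain tensors, not a distribution."""

    def __init__(self, gp):
        super().__init__()
        self.gp = gp

    def forward(self, x):
        output = self.gp(x)
        return output.mean, output.variance

model.eval()  # `model` / `test_x` assumed defined and trained beforehand
with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.trace_mode():
    model(test_x)  # one warm-up call to populate the prediction caches
    traced_model = torch.jit.trace(MeanVarModelWrapper(model), test_x)

mean, var = traced_model(test_x)  # fast, compiled prediction path
```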
Also, are all the models of the same size (i.e. does the training data have the same dimension and number of observations for each room)? If so, you can build a batched model, where batched operations are used under the hood to parallelize the evaluation. If you can do this, it would be the cleanest approach IMO. The …
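If the per-room datasets do line up, a batched independent-GP model could look roughly like this (a sketch assuming exact GPs with identically shaped training data per room; the names and sizes are illustrative):

```python
import torch
import gpytorch

class BatchedRoomGP(gpytorch.models.ExactGP):
    """Independent per-room GPs evaluated in one batched forward pass."""

    def __init__(self, train_x, train_y, likelihood, num_rooms):
        # train_x: (num_rooms, n, d); train_y: (num_rooms, n)
        super().__init__(train_x, train_y, likelihood)
        batch_shape = torch.Size([num_rooms])
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=batch_shape)
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=batch_shape),
            batch_shape=batch_shape,
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

num_rooms, n, d = 8, 200, 3  # illustrative sizes
train_x = torch.randn(num_rooms, n, d)
train_y = torch.randn(num_rooms, n)
likelihood = gpytorch.likelihoods.GaussianLikelihood(batch_shape=torch.Size([num_rooms]))
model = BatchedRoomGP(train_x, train_y, likelihood, num_rooms)
```

A single `model(test_x)` call with `test_x` of shape `(num_rooms, m, d)` then produces predictions for all rooms at once, instead of looping over separate models.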