Accelerate by Hugging Face promises to be "A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision".
A possible implementation could be to allow passing an `Accelerator` instance as the `device` argument and to adapt `to_device` to deal with it. We may also need to add a hook in order to run the line `model, optim, data = accelerator.prepare(model, optim, data)` (where `data` seems to be the data loader).
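A rough sketch of what that could look like, assuming skorch let an `Accelerator` instance pass through `initialize` unharmed (it currently does not, so the subclass below is purely illustrative; only the `accelerator.prepare` call is real Accelerate API):

```python
from accelerate import Accelerator
from skorch import NeuralNet

class AcceleratedNet(NeuralNet):
    """Hypothetical NeuralNet variant that defers device handling to an
    Accelerator passed as the ``device`` argument."""

    def initialize(self):
        super().initialize()
        if isinstance(self.device, Accelerator):
            accelerator = self.device
            # prepare() wraps the module and optimizer for the current
            # distributed / mixed-precision setup; the data loader would
            # need the same treatment at fit time, e.g. via a hook.
            self.module_, self.optimizer_ = accelerator.prepare(
                self.module_, self.optimizer_
            )
        return self
```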
There are some open questions with this, e.g. whether our code and specific callbacks are safe to use with Accelerate. Also, how would this interact with (or replace?) our AMP feature (#707)?
Maybe it's possible to just wrap the optimizer using `AcceleratedOptimizer`, or we could implement something similar for skorch. Applying `autocast` would be left to the user.
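For illustration, a minimal standalone sketch of that idea; the assumption that `AcceleratedOptimizer` works outside of `accelerator.prepare` should be double-checked:

```python
import torch
from torch import nn
from accelerate import Accelerator
from accelerate.optimizer import AcceleratedOptimizer

accelerator = Accelerator()  # initializes the distributed/AMP state
model = nn.Linear(10, 2)
optim = torch.optim.SGD(model.parameters(), lr=0.1)

# Wrap the plain torch optimizer; step() and zero_grad() then go through
# Accelerate's gradient-scaling and device-placement logic.
optim = AcceleratedOptimizer(optim)

# autocast itself (e.g. torch.cuda.amp.autocast) would stay with the user.
loss = model(torch.randn(4, 10)).sum()
loss.backward()
optim.step()
optim.zero_grad()
```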