Consider integrating Huggingface Accelerate #760

Closed
BenjaminBossan opened this issue Apr 25, 2021 · 2 comments

@BenjaminBossan (Collaborator)

Accelerate by Huggingface promises to be "A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision".

A possible implementation could be to allow passing an Accelerator instance as the device argument and to adapt to_device to deal with it. We may also need to add a hook that runs model, optim, data = accelerator.prepare(model, optim, data) (where data appears to be the data loader).
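
A rough sketch to make the proposal concrete (none of this is existing skorch API; AcceleratedNet, the accelerator argument, and the assumption that device=None disables skorch's own device handling are all illustrative):

```python
# Rough sketch only, not actual skorch API: AcceleratedNet and the accelerator
# argument are hypothetical, and it assumes device=None makes skorch skip its
# own device handling so that Accelerate can take over.
from accelerate import Accelerator
from skorch import NeuralNet


class AcceleratedNet(NeuralNet):
    """NeuralNet variant that lets an Accelerator handle device placement."""

    def __init__(self, *args, accelerator=None, **kwargs):
        # device handling is delegated to the Accelerator
        kwargs.setdefault("device", None)
        super().__init__(*args, **kwargs)
        self.accelerator = accelerator

    def initialize(self):
        super().initialize()
        # the "hook" mentioned above: wrap module and optimizer via Accelerate;
        # the data loader would need the same treatment where it is created
        self.module_, self.optimizer_ = self.accelerator.prepare(
            self.module_, self.optimizer_
        )
        return self
```

The data loader is not covered here; handling it would probably mean hooking into get_iterator as well.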

There are some open questions with this, e.g. whether our code and specific callbacks are safe to use with Accelerate. Also, how would this interact with (or replace?) our AMP feature (#707)?

@BenjaminBossan (Collaborator, Author)

Maybe it's possible to just wrap the optimizer using AcceleratedOptimizer, or we could implement something similar in skorch. Applying autocast would be left to the user.
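
A minimal sketch of what that could look like, assuming AcceleratedOptimizer can wrap a plain PyTorch optimizer directly (the model and training step here are made up for illustration):

```python
# Minimal sketch of the optimizer-only variant; model, data, and training step
# are made up for illustration, and autocast is applied by the user.
import torch
from accelerate import Accelerator
from accelerate.optimizer import AcceleratedOptimizer

accelerator = Accelerator()  # sets up the global state the wrapper relies on

model = torch.nn.Linear(10, 2).to(accelerator.device)
optimizer = AcceleratedOptimizer(torch.optim.SGD(model.parameters(), lr=0.01))

# mixed precision stays in user land, e.g. via autocast in the training step
with torch.cuda.amp.autocast(enabled=torch.cuda.is_available()):
    loss = model(torch.randn(4, 10, device=accelerator.device)).sum()

loss.backward()
optimizer.step()
optimizer.zero_grad()
```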

@BenjaminBossan (Collaborator, Author)

This has been solved via #826.
