This repository was archived by the owner on Dec 16, 2022. It is now read-only.
* Use torch.device everywhere
* Update changelog
* Run distributed tests even on CPU
* Fix bug when running distributed tests on CPU
* Remove unused imports
* Update CHANGELOG.md
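The first bullet, "Use torch.device everywhere", amounts to normalizing raw device integers into `torch.device` objects at the boundaries of the code. A minimal sketch of such a helper is below; the name `int_to_device` and its exact behavior are assumptions for illustration, not necessarily the code added in this commit.

```python
import torch

def int_to_device(device):
    # Hypothetical helper: accept an int or a torch.device and always
    # return a torch.device, so downstream code never handles raw ints.
    if isinstance(device, torch.device):
        return device
    if device < 0:
        # Negative device IDs mean "run on CPU", matching the
        # cuda_devices: [-1, -1] convention used for distributed CPU runs.
        return torch.device("cpu")
    return torch.device("cuda", device)
```

Callers can then pass whatever came out of the config (an int or a device) and rely on getting a `torch.device` back.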
Co-authored-by: Evan Pete Walsh <[email protected]>
CHANGELOG.md
@@ -17,6 +17,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ### Added

 - Additional CI checks to ensure docstrings are consistently formatted.
+- Ability to train on CPU with multiple processes by setting `cuda_devices` to a list of negative integers in your training config. For example: `"distributed": {"cuda_devices": [-1, -1]}`. This is mainly to make it easier to test and debug distributed training code.
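The new changelog entry can be illustrated with a minimal training-config fragment. The `"distributed"` block and its example value come from the entry itself; the surrounding braces are only there to show where the block sits, and a real training config would contain other keys as well.

```json
{
    "distributed": {
        "cuda_devices": [-1, -1]
    }
}
```

Two `-1` entries request two worker processes, each pinned to CPU, which is enough to exercise the distributed code paths on a machine without GPUs.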