This repository was archived by the owner on Nov 17, 2023. It is now read-only.
Cleaner API for utilizing all GPUs if available #16718
Open
Description
Can we have a cleaner way of utilizing GPUs?
Current scenario
According to this [1]
import mxnet as mx
from mxnet import npx

npx.set_np()
num_gpus = npx.num_gpus()
ctx = [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]
The equivalent in PyTorch would be
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Proposal
To make this easier for first-time users of MXNet (or PyTorch users migrating to MXNet), can we make two changes?
- Additional API
mx.gpu().is_available()
or something along similar lines.
Benefit - Adding this API would save the user three lines of code; using npx.num_gpus() > 0 is a roundabout way of saying cuda.is_available().
I created a discussion about this on the discuss forum [2]. (A sketch of this helper follows after this list.)
- Additional boolean parameter
mx.gpu(all_gpus=True)
Benefit - Adding an all_gpus parameter is more convenient than writing [mx.gpu(i) for i in range(num_gpus)]. (A sketch of this behavior also follows below.)
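As a rough illustration of the first proposal, here is a minimal sketch of the check written as a user-level helper on top of the existing npx.num_gpus() call; the name gpu_is_available is hypothetical and not part of MXNet today:

import mxnet as mx
from mxnet import npx

def gpu_is_available():
    # Hypothetical helper mirroring torch.cuda.is_available(),
    # built on the existing npx.num_gpus() call.
    return npx.num_gpus() > 0

# Usage: pick a single default context, PyTorch-style
ctx = mx.gpu() if gpu_is_available() else mx.cpu()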
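Similarly, a sketch of the behavior the proposed all_gpus flag could provide, written as a standalone helper since mx.gpu() does not accept such a parameter today; the function name all_gpu_contexts is hypothetical:

import mxnet as mx
from mxnet import npx

def all_gpu_contexts():
    # Hypothetical helper: return one context per visible GPU,
    # or fall back to the CPU when no GPU is present.
    num_gpus = npx.num_gpus()
    return [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]

# Usage: the list can be passed wherever a context list is expected,
# e.g. net.initialize(ctx=all_gpu_contexts())
ctx = all_gpu_contexts()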