This repository was archived by the owner on Nov 17, 2023. It is now read-only.

Cleaner API for utilizing all GPUs if available #16718

Open

@ChaiBapchya

Description

Can we have a cleaner way of utilizing GPUs?

Current scenario

According to the GluonCV transfer-learning tutorial [1]:

import mxnet as mx
from mxnet import npx

npx.set_np()
num_gpus = npx.num_gpus()
# Use every available GPU, or fall back to the CPU.
ctx = [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]

The equivalent in PyTorch would be:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Proposal

To make this easier for first-time MXNet users (or PyTorch users migrating to MXNet), can we have the following 2 changes?

  1. Additional API
    mx.gpu().is_available()
    or something along similar lines

Benefit - Adding this API would save the user 3 lines of code; using npx.num_gpus() > 0 is a roundabout way of saying cuda.is_available().

I have also created a discussion on the MXNet discuss forum about this [2].
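For illustration, here is a minimal sketch of what such a check could amount to today, built on the existing npx.num_gpus() call (gpu_available is a hypothetical helper name, not an existing MXNet API):

import mxnet as mx
from mxnet import npx

def gpu_available():
    """Hypothetical convenience check, analogous to torch.cuda.is_available()."""
    return npx.num_gpus() > 0

# A single context can then be picked in one line instead of three.
ctx = mx.gpu() if gpu_available() else mx.cpu()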

  2. Additional boolean parameter
    mx.gpu(all_gpus=True)

Benefit - Adding an all_gpus parameter is more convenient than [mx.gpu(i) for i in range(num_gpus)].
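Similarly, a rough sketch of the second convenience as a user-level wrapper over the existing APIs (all_gpu_contexts is a hypothetical helper, standing in for the proposed mx.gpu(all_gpus=True)):

import mxnet as mx
from mxnet import npx

def all_gpu_contexts():
    """Hypothetical helper: return a context for every GPU, or fall back to the CPU."""
    num_gpus = npx.num_gpus()
    return [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]

# The proposed mx.gpu(all_gpus=True) would replace this call.
ctx = all_gpu_contexts()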

References

  1. https://gluon-cv.mxnet.io/build/examples_classification/transfer_learning_minc.html
  2. https://discuss.mxnet.io/t/cuda-is-available-in-mxnet/5096
