
[Enhancement] Getter methods for parallel ranks (TP/DP/CP) to Accelerator class #3702

@WoosungMyung

Description


Hello,

Thanks for the amazing work on making parallelism easy to use across the Hugging Face ecosystem!
I would like to suggest adding getter methods to the Accelerator class that return the rank for each parallel dimension (DP, TP, CP).

Suggestion

Add the following methods to the Accelerator class:

  • get_tensor_parallel_rank(self) -> int
  • get_data_parallel_rank(self) -> int
  • get_context_parallel_rank(self) -> int

Each method would be implemented using parallelism_config and torch_device_mesh.
If the corresponding parallelism is not enabled, the method returns 0.
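To make the proposal concrete, here is a minimal sketch of how such a getter could look. This is not Accelerate's actual implementation: the mesh dimension names ("dp", "tp", "cp") and the `get_parallel_rank` helper are illustrative assumptions; the only real API assumed is `torch.distributed.device_mesh.DeviceMesh`, which exposes `mesh_dim_names` and `get_local_rank(mesh_dim=...)`.

```python
def get_parallel_rank(torch_device_mesh, dim_name):
    """Return this process's local rank along one mesh dimension.

    Hypothetical helper mirroring the proposed getters: falls back to 0
    when no device mesh exists or the dimension is not part of the mesh
    (i.e., that parallelism is not enabled).
    """
    if torch_device_mesh is None:
        return 0
    dim_names = torch_device_mesh.mesh_dim_names or ()
    if dim_name not in dim_names:
        return 0
    # DeviceMesh.get_local_rank(mesh_dim=...) returns the rank within
    # the named dimension of the mesh.
    return torch_device_mesh.get_local_rank(mesh_dim=dim_name)


# The three proposed methods would then be thin wrappers, e.g.:
#   def get_tensor_parallel_rank(self) -> int:
#       return get_parallel_rank(self.torch_device_mesh, "tp")
```

With a wrapper like this, each getter stays a one-liner and the "return 0 when disabled" behavior lives in a single place.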

Motivation

Currently, Accelerate provides access to the parallelism configuration and the device mesh, but it does not offer a simple method to directly retrieve the rank of each parallel dimension (it is possible, but not straightforward).

By adding these lightweight getter methods, we improve usability for developers and researchers working with hybrid parallelism (e.g., FSDP + TP or HSDP + TP/CP).

I'd be happy to submit a PR if this is aligned with your thoughts.
