Thanks for the amazing work making parallelism easy for Hugging Face users!
I would like to suggest adding getter methods to the Accelerator class that return the rank along each parallelism dimension (DP, TP, CP).
Suggestion
Add the following methods to the Accelerator class:
get_tensor_parallel_rank(self) -> int
get_data_parallel_rank(self) -> int
get_context_parallel_rank(self) -> int
Each method would be implemented using parallelism_config and torch_device_mesh.
If the corresponding parallelism is not enabled, the method returns 0.
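A minimal sketch of what these getters could look like, assuming the Accelerator exposes a torch_device_mesh with named mesh dimensions; the mixin class, the helper function, and the dimension names "dp"/"tp"/"cp" are illustrative, not Accelerate's actual internals:

```python
def _mesh_rank(device_mesh, dim_name: str) -> int:
    """Return the local rank along `dim_name`, or 0 if the mesh or dimension is absent."""
    if device_mesh is None or dim_name not in (device_mesh.mesh_dim_names or ()):
        return 0
    # DeviceMesh.get_local_rank(mesh_dim) returns this process's rank along that dimension.
    return device_mesh.get_local_rank(dim_name)


class AcceleratorRankGettersMixin:
    """Hypothetical mixin sketching the proposed getters on top of torch_device_mesh."""

    def get_data_parallel_rank(self) -> int:
        return _mesh_rank(getattr(self, "torch_device_mesh", None), "dp")

    def get_tensor_parallel_rank(self) -> int:
        return _mesh_rank(getattr(self, "torch_device_mesh", None), "tp")

    def get_context_parallel_rank(self) -> int:
        return _mesh_rank(getattr(self, "torch_device_mesh", None), "cp")
```

With this shape, code running without any parallelism enabled (no device mesh) transparently gets rank 0 on every dimension, so callers never need to branch on whether the mesh exists.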
Motivation
Currently, Accelerate provides access to the parallelism configuration and the device mesh, but does not offer a simple method to directly retrieve the rank of each parallel dimension (it is possible, just not straightforward).
By adding these lightweight getter methods, we improve usability for developers and researchers working with hybrid parallelism (e.g., FSDP + TP or HSDP + TP/CP).
I'd be happy to submit a PR if this is aligned with your thoughts.