System Info
- `Accelerate` version: 1.7.0
- Platform: Linux-5.10.134-008.12.kangaroo.al8.x86_64-x86_64-with-glibc2.35
- `accelerate` bash location: /mnt/workspace/xxx/miniconda3/envs/blip3o/bin/accelerate
- Python version: 3.11.11
- Numpy version: 1.26.4
- PyTorch version: 2.5.1+cu124
- PyTorch accelerator: CUDA
- System RAM: 1600.00 GB
- GPU type: NVIDIA A800-SXM4-80GB
- `Accelerate` default config:
Not found
Information
- The official example scripts
- My own modified scripts
Tasks
- One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- My own task or dataset (give details below)
Reproduction
I think there is a bug in `accelerate/utils/other.py`, in the `extract_model_from_parallel()` function. When some of the model's submodules are compiled but the top-level model itself is not, accessing `model._orig_mod` raises an `AttributeError`, because `_orig_mod` only exists on modules that were themselves wrapped by `torch.compile`. A minimal reproduction sketch is below.
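A minimal sketch of a reproduction, assuming a toy model whose `module1` submodule is compiled while the top-level module is not (the model class, layer sizes, and tensor shapes here are illustrative, not from the original report):

```python
import torch
import torch.nn as nn
from accelerate import Accelerator

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.module1 = nn.Linear(4, 4)
        self.module2 = nn.Linear(4, 4)

    def forward(self, x):
        return self.module2(self.module1(x))

model = ToyModel()
# Compile only a submodule; the top-level model stays uncompiled,
# so it never gains an `_orig_mod` attribute.
model.module1 = torch.compile(model.module1)

accelerator = Accelerator()

# Expected to fail with AttributeError inside extract_model_from_parallel(),
# which treats the model as compiled and reads `model._orig_mod` even though
# only `model.module1` is a compiled wrapper.
unwrapped = accelerator.unwrap_model(model)
```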
Expected behavior
When the model itself is not compiled but `model.module1` is, `accelerator.unwrap_model(model)` should succeed. A sketch of one possible guard follows.
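A hedged sketch of one possible guard (this is not the actual Accelerate source, just an illustration of the idea): only unwrap the top-level module when it really is a `torch.compile` wrapper.

```python
import torch

def _unwrap_compiled(model: torch.nn.Module) -> torch.nn.Module:
    # torch.compile wraps a module in OptimizedModule and stores the
    # original module on `_orig_mod`; an uncompiled top-level model has
    # no such attribute, so check before touching it.
    if isinstance(model, torch._dynamo.eval_frame.OptimizedModule):
        return model._orig_mod
    return model
```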